IP Addressing
Every workload in RAIL exposed to the Internet with an ingress is reachable over both IPv4 and IPv6. This dual-stack connectivity cannot be disabled by the user.
Pod and service connectivity
By default, pods are provisioned with dual-stack connectivity, and both internal and external IPv6 connectivity is provided. When your pods connect to external services, the IP family they use determines the source prefix those services will see. The IPv4 source address depends on the NREC region of the RAIL cluster:
There are two IPv4 addresses per NREC region (BGO and OSL); look for netcfg_pub_natgw and netcfg_pub_natgw2 in NREC:Git:himlar:bgo and NREC:Git:himlar:osl:
BGO:
  netcfg_pub_natgw: '158.39.77.248'
  netcfg_pub_natgw2: '158.39.74.248'

OSL:
  netcfg_pub_natgw: '158.37.63.248'
  netcfg_pub_natgw2: '158.39.75.248'
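If an external service restricts access by source address, it should permit both NAT gateway addresses for the region, since (as an assumption here) egress traffic may appear from either. A minimal sketch of building such an allow-list from the values above; the dictionary and function names are illustrative, not part of RAIL:

```python
import ipaddress

# IPv4 NAT gateway (egress) addresses per NREC region, from the values above.
NAT_GATEWAYS = {
    "bgo": ("158.39.77.248", "158.39.74.248"),
    "osl": ("158.37.63.248", "158.39.75.248"),
}

def egress_allowlist(region: str) -> set[ipaddress.IPv4Address]:
    """All IPv4 source addresses an external firewall should permit
    for workloads egressing from the given NREC region."""
    return {ipaddress.IPv4Address(a) for a in NAT_GATEWAYS[region]}
```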
The IPv6 source address depends on the specific RAIL cluster hosting your workload:
bgo1-prod: 2001:700:2:8303::/64
osl1-prod: 2001:700:2:8203::/64
bgo1-test: 2001:700:2:8302::/122
osl1-test: 2001:700:2:8202::/122
Services are assigned single-stack IPv4 addressing by default. This does not affect external connectivity, but you can override it in your service manifest:
kind: Service
spec:
  ipFamilies:
    - IPv4
    - IPv6
  ipFamilyPolicy: PreferDualStack
Ingress connectivity
In each RAIL cluster, several worker nodes announce a single IP address in each family through a technique called anycast. This provides workloads running on RAIL with both load balancing and high availability, without bottlenecks or single points of failure. In the NREC network infrastructure, equal-cost IP routes to the RAIL worker nodes allow traffic to be evenly balanced based on source IP and source port.
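The equal-cost balancing can be pictured as hashing each flow's source IP and port onto one of the candidate next hops, so a given connection always lands on the same worker. A simplified, illustrative sketch; real routers use their own hash functions, and the worker addresses below are hypothetical:

```python
import hashlib
import ipaddress

def ecmp_next_hop(src_ip: str, src_port: int, next_hops: list[str]) -> str:
    """Pick one of the equal-cost next hops by hashing the flow's source
    IP and port. Deterministic: the same flow always maps to the same
    next hop, which keeps a connection pinned to one worker."""
    flow = f"{ipaddress.ip_address(src_ip)}/{src_port}".encode()
    digest = hashlib.sha256(flow).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

# Hypothetical workers all announcing the same anycast ingress address.
workers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
```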
Internal cluster connectivity
RAIL uses the Calico network plugin (CNI) in non-BGP VXLAN mode. The plugin creates VXLAN tunnels between all nodes in the cluster and routes traffic over these tunnels. Each worker is assigned IP pools in each family, and each pod gets its addresses from these pools. Each pod is then connected to a virtual interface on the worker, and a host route in each IP family directs traffic to that virtual interface. Thus, each worker sees a collection of host routes to the pods running locally, as well as broader prefix routes directed at VXLAN endpoints on the other worker nodes.