There’s a finite number of public IPv4 addresses, and the IPv6 address space was specified to solve this problem some 20 years ago, long before Kubernetes was conceived. But because Kubernetes was originally developed inside Google, and cloud services like Google and AWS have only relatively recently started to support IPv6 at all, it started out with IPv4 support only.
That’s a problem for organizations that are already committed to using IPv6, perhaps for IoT devices where there are simply too many IP addresses required. “IoT customers have devices and edge devices deployed everywhere using IPv6,” notes Khaled (Kal) Henidak, Microsoft principal software engineer who works on container services for Azure and coordinates Microsoft’s upstream contributions to Kubernetes.
Carriers and telcos are also interested in adopting Kubernetes — AT&T is using Kubernetes as the basis of the Airship project it will use to run 5G and public safety network services — but they’ve already deployed significant amounts of IPv6, especially for mobile networks. Around 90 percent of T-Mobile USA and Verizon Wireless traffic already goes over IPv6; for Comcast and AT&T it’s about 70 percent.
As of 2017, there were only 38 million new IPv4 addresses available to be allocated by registrars worldwide (none of them in the U.S., so anyone needing more IPv4 addresses has to find someone willing to sell ones they’re not using).
That means even enterprises that have been slower to move off IPv4, because they can work around the address shortage with technologies like NAT, will run into problems, Tim Hockin, principal software engineer at Google Cloud, told The New Stack. “Kubernetes makes very liberal use of IP addresses (one IP per pod), which simplifies the system and makes it easier to use and comprehend. For very large installations this can be difficult with IPv4. It’s not uncommon to find that larger enterprises have already ‘chopped up’ the private IPv4 space within their network, so finding room for a Kubernetes cluster can be hard or impossible. IPv6 makes the IP space effectively infinite, so this is much less of a concern.”
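The scale gap Hockin describes is easy to quantify with Python’s standard `ipaddress` module. A rough sketch (the `2001:db8::/64` prefix below is the IPv6 documentation range, used purely for illustration):

```python
import ipaddress

# The entire 10.0.0.0/8 private range holds about 16.7 million addresses,
# and large enterprises have often already carved it up internally.
private_v4 = ipaddress.ip_network("10.0.0.0/8")
print(private_v4.num_addresses)  # 16777216

# A single IPv6 /64 subnet (the conventional size for one network segment)
# holds 2**64 addresses, dwarfing the whole 32-bit IPv4 space.
one_v6_subnet = ipaddress.ip_network("2001:db8::/64")
print(one_v6_subnet.num_addresses)  # 18446744073709551616
```

With one IP per pod, a cluster that struggles to find a free /16 in a chopped-up private IPv4 space would not make a dent in a single IPv6 /64.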
“The only thing holding [IPv6 support] back from progressing to beta and GA is deeper integration with our automated tests, which is being driven by our developer community,” says Hockin.
The built-in bridge network in Kubernetes is implemented using the Linux iptables feature; “iptables has been IPv6-friendly for some time,” Henidak points out. If you’re using a Container Network Interface (CNI) stack for networking, plugins like Project Calico let you disable IPv4 and enable IPv6 for pods.
Moving to IPv6 clusters shouldn’t take a lot of work, but there are some things to be aware of, he notes. “Changing Kubernetes to use pure IPv6 should not be particularly difficult for users of Kubernetes (dev or ops), provided that your network infrastructure and applications are ready for IPv6. Every major operating system already supports IPv6, and most open source apps are fine to run in an IPv6 environment. Custom applications which do network-related stuff probably need to be audited to make sure they are safe for IPv6. For example, an IPv4 address is often stored in a 32-bit integer variable, but an IPv6 address does not fit.”
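Hockin’s 32-bit pitfall can be demonstrated in a few lines of Python with the standard `ipaddress` module (both addresses below are from the documentation ranges):

```python
import ipaddress

v4 = ipaddress.ip_address("203.0.113.10")  # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")   # documentation-range IPv6 address

# An IPv4 address is a 32-bit number, so it fits in a 4-byte field...
print(len(v4.packed))   # 4 bytes
print(int(v4) < 2**32)  # True

# ...but an IPv6 address is 128 bits wide, so code or database columns
# that assume a 32-bit integer will truncate or reject it.
print(len(v6.packed))   # 16 bytes
print(int(v6) < 2**32)  # False
```

Auditing a custom application for IPv6 safety largely means finding every place that stores or parses an address with a 4-byte assumption like this.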
But just supporting a single address family, whether it’s IPv4 or IPv6, isn’t enough, because it doesn’t make it easy for Kubernetes to fit into all the other infrastructure it needs to integrate with.
“Kubernetes doesn’t run in isolation; it runs on-premise where it needs to interact with other applications, or — mostly — on cloud,” Henidak points out. “If I have an application and I expose it to the external world through a load balancer, I have an IP externally and if my Kubernetes cluster is IPv4 or IPv6, that address will be IPv4 or IPv6 only. If my network is using one address space and one address family only, then everything needs to be in the same family. If I have my node as IPv6 then the client for the node has to be IPv6, the database has to be IPv6 — and that creates problems because very few people are actually using 100 percent of everything as IPv6.”
The IoT customers using IPv6 for devices could run an IPv6 Kubernetes cluster for those devices to connect to, but, as he points out, “these clusters also need to connect to backend apps that are running on-premise or in another cluster, or even to cloud services that only talk IPv4.”
Making that work without complex IPv4/IPv6 translation mechanisms in the network requires dual-stack networking, where each pod is allocated both an IPv4 and an IPv6 address, so it can communicate both with IPv6 systems and the legacy apps and cloud services that use IPv4.
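In a dual-stack cluster, each pod carries one address from each family, and a client simply uses whichever one its own stack supports. A minimal sketch of that selection logic (the pod addresses here are made up for illustration):

```python
import ipaddress

# Hypothetical dual-stack pod with one address from each family.
pod_ips = ["10.244.1.7", "fd00:10:244::7"]

def pick_address(addresses, want_version):
    """Return the first address that matches the client's IP version (4 or 6)."""
    for addr in addresses:
        if ipaddress.ip_address(addr).version == want_version:
            return addr
    return None

print(pick_address(pod_ips, 4))  # legacy IPv4-only client -> 10.244.1.7
print(pick_address(pod_ips, 6))  # IPv6 client -> fd00:10:244::7
```

Because the pod is reachable over both families, neither the IPv4 legacy app nor the IPv6 device fleet needs a translation layer in between.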
Amazon Web Services has the broadest IPv6 support, but it doesn’t cover all services. Fifteen AWS regions have IPv6 support for EC2 instances, as well as publicly routable IPv6 addresses for Elastic Load Balancing; access to S3 buckets through dual-stack endpoints; IPv6 support for messages passing from devices into the AWS IoT service; dual-stack support for public and private virtual interfaces with Direct Connect; IPv6 DNS queries and health checks for IPv6 endpoints with Route 53; and IPv6 support for CloudFront, Web Application Firewall and S3 Transfer Acceleration.
Most Azure regions have had support for dual-stack VMs via a load balancer since 2016, and there’s a private preview of full IPv6 dual-stack support; once that’s available, Henidak says it will be available with AKS and tools like AKS Engine (which uses Azure Resource Manager templates to bootstrap Kubernetes clusters on Azure IaaS).
“GCP does not have IPv6 support within VPC networks, yet,” Hockin told us, but a Google employee noted in the GitHub discussion about dual-stack support that the service is testing some prototype dual-stack configurations inside GCE.
Adding dual-stack support to Kubernetes needs more work than supporting IPv6 alone, and the community has been working on a Kubernetes Enhancement Proposal for some time. Rather than taking the simpler approach of providing dual-stack addresses for pods and nodes but using a single family — all IPv4 or all IPv6 — for service IPs in a cluster, this is a true dual-stack implementation supporting IPv4 and IPv6 for both pods and services.
Because apps are exposed with fully qualified domain names for service discovery, the DNS in the cluster will need to be aware of whether it’s running IPv4, IPv6 or dual stack (and IPv6 support in CoreDNS helps here).
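The reason cluster DNS has to be family-aware is that the record type it serves depends on the address family: A records carry IPv4 addresses, AAAA records carry IPv6. A trivial sketch (the service addresses below are illustrative):

```python
import ipaddress

def dns_record_type(addr):
    """A records carry IPv4 addresses; AAAA records carry IPv6 addresses."""
    return "AAAA" if ipaddress.ip_address(addr).version == 6 else "A"

print(dns_record_type("10.96.0.10"))     # A
print(dns_record_type("fd00:10:96::a"))  # AAAA
```

A dual-stack cluster DNS would answer with both record types for the same service name, letting each client resolve the family it can actually use.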
The enhancement project will test dual-stack functionality with the Bridge and PTP CNI plugins and the Host-Local IPAM plugin. “There are multiple network providers with their own solutions based on the CNI stack,” notes Henidak. “They may depend on native Linux kernel features or on their own implementation, and when Kubernetes moves to IPv6 dual stack, they will need to provide their own implementation of dual stack.”
Ingress controllers like NGINX will also need to provide dual-stack support; in both cases, the network providers may already be supporting IPv6 in other environments but not have brought it to Kubernetes before because it wasn’t dual stack.
“The current climate of thought is that services will have an ID that says ‘I want to expose this as IPv4 or as IPv6’,” he explains. That’s a choice you’ll make for interoperability. “If I have Kubernetes with apps running in dual-stack mode, they have multiple IP addresses from different families; if I’m exposing this as IPv4 it means that clients that are only IPv4-capable will be able to communicate with this service, or the same for IPv6. That way people can get two IP addresses from the two different families pointing to the same app, which allows support for multiple clients. No matter what the client is capable of they will be able to ingress into the cluster.”
The intention is to add new functionality without disrupting existing implementations and deployments. “The main principle is not to break anything,” Henidak says. “There are tools written on top of Kubernetes, there are plenty of clients calling the Kubernetes API, and they expect certain input and output from the service. We want to move to the new world of dual stack without breaking any of this while allowing the maximum value for the clients.”
That means that the project will go through the usual alpha, beta and GA phases in future versions of Kubernetes to give time for testing — but that will take some time. “Dual-stack IPv6 support for Kubernetes is more or less design-complete,” Hockin says. “We have a very nice Kubernetes Enhancement Proposal for it, but that work has stalled a bit. It will be a few releases before that work is done, best case.”
If you need dual-stack IPv6 support before the changes reach a shipping version of Kubernetes (which Henidak estimates will take about another nine months), you can run your cluster in IPv6-only mode and handle address translation yourself: set up stateful NAT64 and DNS64 servers so pods can connect to external, IPv4-only servers; put a dual-stack ingress controller in front of the cluster to load balance to the IPv6-only endpoints inside it; and use stateless NAT64 servers with IPv4-to-IPv6 mappings to give IPv4-only external clients access to Kubernetes pods or the services they expose.
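The address arithmetic behind that stateless NAT64 step is standardized: RFC 6052 defines the well-known prefix 64:ff9b::/96, with the IPv4 address embedded in the low 32 bits of the IPv6 address. A sketch of the mapping (this illustrates the translation scheme, not any particular NAT64 implementation):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix: the IPv4 address sits in the
# low 32 bits of a 64:ff9b::/96 IPv6 address.
NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")

def embed_ipv4(v4_str):
    """Map an IPv4 address to its NAT64-translated IPv6 equivalent."""
    v4 = ipaddress.IPv4Address(v4_str)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

print(embed_ipv4("192.0.2.33"))  # 64:ff9b::c000:221
```

A DNS64 server performs the same embedding on the fly, synthesizing AAAA records for IPv4-only destinations so IPv6-only pods can reach them.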
The earliest adopters of the new proposal will be heavy IoT users, Henidak predicts. “I have customers who are saying ‘I want IPv6 and I want it now’. Those are the ones that will take it up immediately, even in the early stages to test it and provide feedback — and everyone in the community relies on those people.”
“Then there are people who are OK on IPv4 now but know that in the future they will need IPv6 who are keeping track of the proposal. They’re telling us ‘I’m running out of IPv4 space on my network; I already converted my on-premise network to IPv6 and I need dual stack to integrate with my cloud-hosted Kubernetes’.”
The Cloud Native Computing Foundation, which manages Kubernetes, is a sponsor of The New Stack.
Feature image via Pixabay.