Kubernetes’ Long Road to Dual IPv4/IPv6 Support
Portworx sponsored The New Stack’s coverage of KubeCon+CloudNativeCon North America 2019.
While you may think of Kubernetes as the future of computing, it was, until recently, still stuck in the past in one way: it was built on IPv4, the widely used but soon-to-be-legacy version of the Internet Protocol upon which the internet was built.
The Internet Engineering Task Force has long been urging internet service providers to move to IPv6, now that the world has exhausted the supply of 32-bit IPv4 addresses. With its 128-bit address space, IPv6 offers an inexhaustible supply of internet addresses.
“We ignored it,” admitted Tim Hockin, Google principal software engineer and core Kubernetes contributor, about IPv6, in a keynote talk he gave with Khaled (Kal) Henidak, Microsoft Azure principal software engineer and fellow core contributor, at this week’s KubeCon + CloudNativeCon North America 2019 conference. Kubernetes was firmly locked into IPv4.
Fortunately, the open source container orchestration engine is now ready for IPv6: the latest release of Kubernetes, version 1.16, is the first to offer a dual IPv4/IPv6 stack, meaning it can interpret both types of internet addresses. Getting there was no easy task, but a robust community of volunteers, including many engineers from companies that compete directly with one another, made it happen, according to the pair.
“Kubernetes is bigger than any one of the companies” that devote resources to it, Hockin said. “Without ‘co-opetition,’ Kubernetes would not be a fraction of what it is today.”
Pull Request 5,000 Lines Long
Kubernetes is, at heart, a networking technology, and all the bits of code tied directly to networking functions were hard-wired for IPv4. A working group was formed to scour the code base and root out these IPv4-only assumptions, a thorny task that was nonetheless soon completed.
But that was only part of the problem. It was soon discovered that what Kubernetes really needed was dual-stack IPv4/IPv6 support, meaning all its APIs should converse in either IPv4 or IPv6 packets. There would be legacy applications, or Internet of Things devices, that still needed IPv4. “We made a bad assumption in that we would always need single IPs,” Henidak said.
And this fix would be a major one. Kubernetes runs on APIs, and many of them touch the networking layer and needed revising: load balancers, nodes, services, endpoints.
“We had a problem that we never had before: How do we take a singular field and turn it into a plural field, and stay compatible with all the clients out there?” Hockin said. “This was going to be a bit of a problem.”
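The general shape of that compatibility problem can be sketched in Go, Kubernetes’ own language. The struct and helper below are a simplified illustration of the singular-to-plural pattern, not the actual Kubernetes source: the legacy singular field is kept and served unchanged, while a new plural field is added whose first element mirrors it, so old clients never notice the change.

```go
package main

import "fmt"

// PodStatus is a pared-down sketch of the pattern described above,
// loosely modeled on the podIP/podIPs fields in the pod status API.
type PodStatus struct {
	PodIP  string   // legacy singular field, still populated for old clients
	PodIPs []string // new plural field; the first entry mirrors PodIP
}

// setIPs writes the plural field and keeps the singular field in sync,
// so clients that only read PodIP keep seeing the primary address.
func (s *PodStatus) setIPs(ips []string) {
	s.PodIPs = ips
	if len(ips) > 0 {
		s.PodIP = ips[0]
	} else {
		s.PodIP = ""
	}
}

func main() {
	var s PodStatus
	s.setIPs([]string{"10.0.0.5", "fd00::5"}) // one IPv4 and one IPv6 address
	fmt.Println(s.PodIP)  // what a legacy, single-IP client reads
	fmt.Println(s.PodIPs) // what a dual-stack-aware client reads
}
```

The key design choice is that the plural field is additive rather than a replacement, which is what makes the change both backward and forward compatible.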
A team started working on Kubernetes’ bootstrap tools in October 2017, and by June 2018, they had a Kubernetes enhancement proposal (KEP), which detailed everything that needed to be changed, as well as the impact all these changes would have on users.
From this set of user requirements, the work was then handed to Hockin and Henidak to lead the team that would make the updates. They broke the task into three phases: egress (so the traffic coming out of Kubernetes could be in either IPv4 or IPv6), then ingress (so the clusters themselves would recognize both protocols) and lastly, providing dual-stack capabilities to all the supporting tools, such as load balancers and bootstrap tools. The work would not be easy to implement, Henidak said, and they wanted to make the updates as painless as possible for end users. The changes would have to be backward and forward compatible.
Spirited discussions were held across the teams for months about the best way to create a model that would work for all parties, and then Henidak set off to do the fixes. The resulting pull request was humongous: the egress phase alone changed two core APIs, pod and node, and ran over 5,500 lines of code. Endpoint, node, routing and other controllers were modified as well. Worse yet, the PR was sent in close to the code-freeze window of version 1.15.
“We had to think really hard about how to do those changes, and make sure they were correct. You don’t really want to break people’s clusters if you can avoid it. These are the sorts of pull requests that reviewers bring pitchforks to,” Hockin said.
But here is where the community stepped up. Not just a handful of volunteers came forward to review the massive PR; 18 people pitched in to help, and the review was split into multiple pieces. “We went through the data crunch of review, fix, amend as a collective, as a community,” Henidak said. Ultimately, they decided to skip the next release and instead aim for 1.16, and to include the ingress work as well.
Ingress and egress are now supported, though there is still work ahead. Because of this rally, Henidak and Hockin are positive about the future of Kubernetes.
“We were supported by a very strong group,” Henidak said.
KubeCon+CloudNativeCon is a sponsor of The New Stack.