The Kubernetes project has been hurtling at breakneck speed toward the boring. As the popular open source container orchestration platform has matured, boring features have come front and center, many of them focused on stability and reliability. For the Kubernetes 1.12 release on Thursday, those working on the project and in its various special interest groups (SIGs) initially laid out over 60 proposed features. A little over half of those made it into the final release, with the rest, as usual, pushed back or delayed.
Among the changes that made it into this release are the general availability of TLS bootstrapping, the ability to use the Kubernetes API to restore a volume from a volume snapshot data source, the promotion of the KubeletPluginsWatcher to beta, and some groundwork being laid to solve the scheduling challenges that confront large clusters.
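To give a sense of the snapshot-restore workflow, a PersistentVolumeClaim can name an existing VolumeSnapshot as its data source. The sketch below is illustrative only: the claim, snapshot, and storage class names are hypothetical, and the feature was alpha at the time, gated behind a feature flag and requiring a CSI driver with snapshot support.

```yaml
# Hypothetical names throughout; assumes a CSI driver with snapshot
# support and the alpha VolumeSnapshotDataSource feature gate enabled.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restored-from-snapshot
spec:
  storageClassName: csi-standard   # assumed CSI-backed storage class
  dataSource:
    name: my-snapshot              # an existing VolumeSnapshot object
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

When such a claim is created, the volume provisioner pre-populates the new volume with the snapshot's contents rather than starting from an empty disk.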
Stephen Augustus, specialist solution architect on the OpenShift Tiger Team at Red Hat and Kubernetes product management chair, said that the name of the game for Kubernetes these days is being boring and avoiding breaking changes.
“We’re really moving past this idea of things that must be in core Kubernetes. Whether it’s looking at CoreDNS versus Kube-DNS, or some other component, we’re moving into this phase of choose-your-own-adventure Kubernetes,” Augustus said. “We’re starting to dial into the different combinations across runtimes, network, storage, and DNS providers; that’s the new story. We’ve been saying the common theme now is that Kubernetes is stable. That said, every release moving forward should be a ‘boring’ release.
“We’ve successfully nailed down the core functionality of Kubernetes over the 1.3 through 1.12 release cycles. We’ve delivered all of that core functionality. The next step is to take that and make sure it’s stable, continually improve the test suites that ensure that stability and continue to vet out the scalability issues we see in new enhancements. I think CoreDNS is currently the exemplar, which highlights that new challenge: trying to define the picture of what stability looks like in an ecosystem that now allows you to pick and choose what the base components are for it,” Augustus noted.
TLS Reaches GA, Adds Excitement
Indeed, to find excitement in this Kubernetes release, one must focus on the items that are still evolving, being promoted from alpha toward general availability. The new TLS features in 1.12 are one such example. TLS bootstrapping first appeared as an alpha feature in Kubernetes 1.4, and users have been clamoring ever since for this day-one capability to reach maturity. As of version 1.12, the feature is generally available and should make it easier to get a cluster up and running.
There are additional TLS management capabilities that continue to mature, however. TLS certificate rotation is one of those features, and in Kubernetes 1.12 it reached beta.
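For context, certificate rotation is typically switched on through the kubelet's configuration file. A minimal sketch, assuming a kubelet that reads its configuration from a file, might look like the following; the exact fields available depend on the cluster's version and enabled feature gates.

```yaml
# Minimal kubelet configuration sketch (kubelet.config.k8s.io API);
# illustrative only, not a complete production config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Renew the kubelet's client certificate as it nears expiry
# (beta as of Kubernetes 1.12).
rotateCertificates: true
# Request serving certificates from the certificates API
# instead of generating self-signed ones.
serverTLSBootstrap: true
```

With rotation enabled, the kubelet requests a fresh certificate from the cluster's certificates API before the current one expires, removing a common source of manual day-two maintenance.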
One push for this release that was delayed at the last minute was the shift from Kube-DNS to CoreDNS as the default DNS service. CoreDNS had been introduced to address security and scalability issues discovered in other DNS add-ons, but issues with CoreDNS running inside larger-scale (1,000+ node) clusters on Google Compute Engine recently emerged. While the bug in question wasn’t one that most users would have noticed in their logs, it was enough, said Tim Pepper, senior staff engineer at VMware and lead on the Kubernetes 1.12 release, to delay making CoreDNS the default provider in this release.
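For readers unfamiliar with CoreDNS, its behavior is driven entirely by a plugin chain declared in a Corefile. A minimal cluster-DNS sketch, with an illustrative (not prescriptive) plugin set, resembles:

```
.:53 {
    errors
    health
    # Answer cluster.local queries from the Kubernetes API.
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    # Forward everything else to the node's upstream resolvers.
    proxy . /etc/resolv.conf
    cache 30
    reload
}
```

This composability is part of why CoreDNS fits the "choose-your-own-adventure" framing: swapping DNS behavior means editing a plugin chain rather than replacing a monolithic add-on.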
Another major theme for this release is better cloud provider integrations. This release in particular contained numerous changes designed to help Kubernetes run on Microsoft Azure. To track the various changes and ensure code and process consistency across cloud providers, the Kubernetes team has formed a new special interest group, SIG Cloud Provider.
Pepper said that the work now going on in this SIG is aimed at finding solutions across cloud implementations, instead of simply patching a bug on Azure, or Amazon, or Google, for example. He said the teams involved have been getting better at finding solutions in the Kubernetes code base that extend beyond their specific clouds.
Vendor Conformance Remains an Issue
With all those cloud providers and distributions out there in the Kubernetes marketplace, an effort is now underway inside the CNCF to build out a way of measuring conformance. Said Pepper, “There’s a very large effort in conformance now asking the question, ‘What should a Kubernetes cluster look like, and what rules does it need to adhere to to be compliant?’ I can see from a performance documentation and functionality perspective, there’s very much an attitude of ‘Let’s solve for storage. Let’s solve for compute. Let’s not just solve them for our cloud provider, let’s do it in a consistent manner so it can be implemented the same across all cloud providers.’”
And while conformance is still a work in progress, one can envision a world where cloud providers and Kubernetes distribution providers show off CNCF certification stickers, proving that their implementations meet the requirements laid out by the project. Similar work has already been done for the Cloud Foundry PaaS, driven primarily by that project’s nonprofit management foundation.
Author Alex Handy works for Red Hat; this post has been independently commissioned.
The Cloud Native Computing Foundation, Red Hat and VMware are sponsors of The New Stack.