The newest version of the open source Kubernetes container orchestration engine — version 1.11 goes live Wednesday — includes a number of significant changes, among them the ability to dynamically configure the Kubelet, a new sub-project for in-cluster DNS, and new capabilities for custom resource definitions (CRDs). These changes reflect the project’s long-term direction: decentralizing development and improving Kubernetes’ ability to handle custom and enterprise resources.
This release is part of an ongoing effort to rewrite much of the underlying plumbing of Kubernetes, said Josh Berkus, Kubernetes lead at Red Hat. Over the past few releases, the platform’s internals have undergone a thorough overhaul aimed at greater stability and modularization. One example of that modularization in this release is the CoreDNS project.
CoreDNS replaces kube-dns and is developed as a dedicated sub-project staffed by its own team. This allows CoreDNS to evolve outside the Kubernetes core team, which can now focus exclusively on the core functionality of the platform.
“CoreDNS is just another generational update. It’s a single Go binary. It’s smaller, simpler, and more reliable. I would expect people transitioning to CoreDNS for the next several releases. It’s a separate project with its own team at the CNCF, and the advantage is that there’s actually a team dedicated to maintaining it instead of the Kubernetes special interest group,” Berkus said.
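CoreDNS is driven by a single configuration file, the Corefile, which chains together plugins. A minimal sketch of a cluster Corefile might look like the following; the exact plugin set varies by deployment, so treat this as illustrative rather than a canonical default:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
}
```

The `kubernetes` plugin answers queries for cluster service records, while anything outside the cluster domain is proxied to the node’s upstream resolvers.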
Elsewhere in this release, Kubelets are becoming easier to maintain thanks to the addition of dynamic Kubelet configuration in Kubernetes 1.11. Previously, changing the configuration of a specific node meant restarting its Kubelet. As of Kubernetes 1.11, the underlying Kubelet can be reconfigured without a restart.
“The Kubelet is the daemon running on each system. The Kubelet downloads and runs whatever’s needed for that particular node. Until now the way to change the config of the Kubelet was to restart it with different flags using systemd or some other facility,” Berkus said.
There were a couple of problems with this approach, he said. First, the process is completely out of band for Kubernetes configuration. Second, even minor changes require restarting every Kubelet, and rolling restarts across an entire Kubernetes infrastructure are not fun for anyone.
“Now you can change certain aspects of your Kubelet’s configuration by changing configuration files. We’ve switched from command-line flags to a configuration file,” said Berkus. He added that this is particularly useful for users who run Kubernetes on bare metal.
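In practice, dynamic Kubelet configuration works by storing a KubeletConfiguration in a ConfigMap and pointing a node at it through the API. A hedged sketch, where the name `my-node-config` and the eviction threshold are placeholder values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-node-config          # placeholder name
  namespace: kube-system
data:
  kubelet: |
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    evictionHard:
      memory.available: "200Mi"  # illustrative setting
```

A node can then be pointed at this ConfigMap by setting its `spec.configSource` field (for example via `kubectl patch node <node-name>`), after which the Kubelet picks up the new configuration without a full restart.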
For users requiring a more customized Kubernetes experience, CRDs allow teams to extend their Kubernetes installations with custom resources. If you define something as a CRD, explained Berkus, it becomes a Kubernetes object, which means it can be controlled through the Kubernetes interface.
“People are creating their own CRDs with their own controller, and their own components on the back end,” Berkus explained. “This allows Kubernetes to control, for example, KVM virtual machines. You have this Kubernetes object with its own reporting, and it takes commands; some are the same, some are different.”
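A CRD along the lines of the virtual-machine example Berkus describes might be declared roughly as follows. The `example.com` group and `VirtualMachine` kind here are illustrative placeholders, not any actual project’s API:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: virtualmachines.example.com   # must be <plural>.<group>
spec:
  group: example.com                  # hypothetical API group
  version: v1alpha1
  scope: Namespaced
  names:
    plural: virtualmachines
    singular: virtualmachine
    kind: VirtualMachine
    shortNames:
    - vm
```

Once registered, `kubectl get virtualmachines` works like any built-in resource, with a custom controller reconciling the objects behind the scenes.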
“We have been working on turning everything in Kubernetes anyone would want to use into an API,” he said. “If someone is turning Kubernetes into an HPC platform, those people are going to want a bunch of things people running Kubernetes on their web host are not. Having this extension mechanism allows us to fill in a lot of functionality.”
In version 1.11, CRDs gained support for endpoint monitoring and the ability to version CRD resources. This means CRD extensions to Kubernetes can now provide their own telemetry about status and resource consumption. Versioning also lets users keep better track of their CRDs as they are rolled out across clusters in rolling-upgrade fashion. Berkus said that, in the future, teams will likely run multiple CRDs in their clusters, so tools will need to be in place to manage, control, and monitor those resources.
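These capabilities surface as additional fields in a CRD’s spec. A hedged, partial sketch (group name and version names are illustrative):

```yaml
spec:
  group: example.com     # hypothetical API group
  versions:              # named versions of the custom resource
  - name: v1alpha1
    served: true         # this version is served by the API
    storage: true        # and used for persistence
  subresources:
    status: {}           # custom objects get a distinct status endpoint
```

The `status` subresource gives controllers a dedicated place to report observed state, separate from the user-supplied spec.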
The Cloud Native Computing Foundation is a sponsor of The New Stack.
Feature image via Pixabay.