Nothing about what’s happened in the nearly one year since Kubernetes 1.0 was announced should come as a surprise to anyone. Google has fostered an ecosystem around Kubernetes in much the same way Docker Inc. has built an ecosystem around its containerization platform.
From the moment Google Cloud chief Greg DeMichillie marched on stage at O’Reilly’s OSCON last year to music whose lyrics included, “We’ve got strength in numbers and they’re gonna pay for it,” Google has broadcast the not-so-subtle message that Kubernetes is out to take charge of the orchestration space.
So is Kubernetes now the center of the container ecosystem?
“The center? That’s a pretty gracious word,” responded Tim Hockin, Google’s senior software engineer and a co-creator of Kubernetes, before adding, “I’d like to think so.
“I think we’ve been pushing the limits of what we’ve been doing,” Hockin continued, in an interview for an upcoming edition of The New Stack: Context podcast. “I think we’ve introduced a bunch of ideas that have been generally well received. That said, there’s a lot going on in this space, so it’s hard to define the center. It’s like the center of the universe. Everything’s moving away from each other, all at the same time… except in a lot of places, we move towards each other too.”
Last month, ClusterHQ, the company behind the Flocker container volume manager, asked 214 developers which orchestration tools they used for their container environments. They were given a list of five major choices, plus “Other,” and asked to choose any and all that applied. Kubernetes was chosen by 43 percent of respondents, beating Other by about 4 percent. And when asked to choose the single orchestrator they used most frequently, some 27 percent of respondents cited Kubernetes.
One competitive approach IT companies may take is to make their competitors focus on one element of a system, and then to systematically render that element irrelevant. In the containerization space, Google now presents Kubernetes as an increasingly agnostic platform for staging and managing workloads, no matter the container format. And opening a popular and efficient platform to alternatives such as appc, and whatever else may come along, has the side benefit of making the format question less relevant.
“The de facto way of doing networking in containers when Kubernetes entered the space,” explained Hockin, “was sort of the Docker way. You build your network namespace for each container, and they have a private IP address and you map some ports on your host into ports on your container, and through some sort of copy process — whether that was iptables or userspace or something else — you route traffic that way.”
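In Compose-file terms, that Docker-era model looks something like this (the image and port numbers here are hypothetical, purely for illustration):

```yaml
# Classic Docker bridge networking: the container gets a private IP,
# and a host port is forwarded to a container port.
web:
  image: nginx
  ports:
    - "8080:80"   # traffic to host port 8080 is proxied to container port 80
```

Two containers on the same host cannot both claim host port 8080, which is exactly the port-scheduling headache Google describes in its Borg paper.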
Hockin then referred to Google’s famous April 2015 white paper [PDF], detailing the Borg project it had developed in-house, and how the lessons learned from that project led to Kubernetes. It reads in part, “One IP address per machine complicates things. In Borg, all tasks on a machine use the single IP address of their host and thus share the host’s port space. This causes a number of difficulties: Borg must schedule ports as a resource; tasks must pre-declare how many ports they need, and be willing to be told which ones to use when they start; the Borglet must enforce port isolation; and the naming and RPC systems must handle ports as well as IP addresses.”
Hard-wiring IP addresses to containers is “even harder for open source stuff than it is for Google internally,” explained Hockin, “because you can’t willy-nilly change open source software; you have to work through the open source process and try to upstream changes. Us coming out and telling people, ‘This is how you have to write your software; this is how you do your networking,’ was never gonna fly. So we took a different approach, and did the thing we always wanted to do.”
Kubernetes resolves this problem, he continued, by delegating “real” IP addresses to pods (Kubernetes’ groupings of closely related containers) — addresses that are subject to maintenance and control by real Internet services such as DNS and DHCP. The scope of these addresses is not limited to the container cluster, so service discovery is just as effective with addresses for pods as with those for VMs elsewhere in the network. “All of these addresses can talk to each other without having to traverse different layers of existence,” he said, “without having to be translated between port numbers.”
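A minimal Kubernetes Service manifest sketches how this plays out in practice (the names and selector below are hypothetical):

```yaml
# Each pod behind this Service already has its own routable IP;
# the Service adds a stable virtual IP and a DNS name on top.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # matches pods labeled app=backend
  ports:
    - port: 443         # clients connect to backend:443
      targetPort: 443   # ...and reach the same port on the pod, untranslated
```

Any pod in the cluster can then reach the service by the DNS name `backend` (or, fully qualified, `backend.default.svc.cluster.local`), with no host-port remapping in between.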
For developers, this means that port numbers can be hard-coded — which, Hockin explains, was how they were designed in the first place. Port 443, for example, should always be reserved for incoming SSL Web traffic, not arbitrarily delegated to some container for a designated purpose, and subject to change at random.
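A pod definition can therefore declare the conventional port directly; in this sketch, the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tls-frontend
  labels:
    app: tls-frontend
spec:
  containers:
    - name: web
      image: my-tls-server:latest   # hypothetical image
      ports:
        - containerPort: 443        # the well-known HTTPS port, hard-coded as designed
```

Because the pod has its own IP address, nothing else on the node is contending for port 443.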
When Internet addresses are freed to operate as they were originally designed, Google’s engineer believes, applications in distributed systems can use traditional service discovery methods to locate one another, and exchange data and messages.
Here’s where the windfall comes in: So long as applications or services do not have to rely upon anything specific to the architecture of the system in which they are being staged and orchestrated, to make contact with other services and begin doing business with them, the orchestration layer becomes a very effective layer of abstraction between them. Their architecture matters less and less to one another, in a system that enables them to be whatever their developers made them out to be.
Currently, Kubernetes architects are endeavoring to take full advantage of this freedom by testing a DNS-linked feature they call pet sets, released in version 1.3 with multiple warnings that it’s to be treated as an “alpha.” If the name reminds you of the now-well-worn “pets vs. cattle” analogy, that’s by design. The idea is that a DNS host name may be assigned to a subset of Kubernetes pods, serving as a collective identity for those pods as they’re used in clustered applications.
“Most of the database systems that we’ve looked at need or want some sort of persistent name — some persistent identity,” explained Hockin. “That identity is often captured in the form of an IP address, but it doesn’t have to be. We think that, in alignment with the rest of the networking model, relying upon a persistent IP address is sort of coupling yourself in a way you don’t really want.”
As distributed applications are migrated between data centers and through clouds, IP addresses tend to change. Kubernetes already offers stable DNS names and virtual IP addresses, but a stronger identity mechanism could conceivably enable access to persistent storage without the infrastructure having to invoke networking plug-ins.
“With a pet set, you get a DNS name that is attached to your slot in a quorum,” said Hockin. “You get a storage volume that is attached to that same slot, so if your container dies and comes back, you get the same identity and the same storage. So if you had saved information that had your name, or something like that, attached to it, you can very easily re-create that. We’re using this to support things like MySQL Galera and Cassandra.”
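As a sketch, a 1.3-era pet set pairs a headless service, which supplies the per-slot DNS names, with volume claim templates, which supply the per-slot storage; all names and sizes below are illustrative:

```yaml
# Headless service: gives each pet a stable DNS entry such as
# galera-0.galera.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: galera
spec:
  clusterIP: None
  selector:
    app: galera
  ports:
    - port: 3306
---
apiVersion: apps/v1alpha1   # PetSet was alpha in Kubernetes 1.3
kind: PetSet
metadata:
  name: galera
spec:
  serviceName: galera       # ties the pets to the headless service above
  replicas: 3
  template:
    metadata:
      labels:
        app: galera
    spec:
      containers:
        - name: mysql
          image: mysql:5.6  # hypothetical image and tag
          ports:
            - containerPort: 3306
  volumeClaimTemplates:     # each slot keeps its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

If pet `galera-1` dies and is replaced, the replacement reclaims both the name `galera-1` and the `data` volume bound to that slot, which is precisely the persistent identity Hockin describes.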
As a result, the containment format for workloads matters less and less to Kubernetes. No longer should a development team concentrate its efforts on one format, the argument goes, to achieve an important objective such as microservices that utilize persistent databases. If the orchestration system proves its case in the coming months, it could render all discussion and debate about the relative superiority of container formats, at least insofar as orchestration is concerned, irrelevant.
“I think the format isn’t really that interesting to Kubernetes,” remarked Hockin at one point. “We really want to be the Switzerland of container technology. We don’t really care whether you push your stuff out in [CoreOS’] rkt, or in Docker’s repository, or in somebody else’s repository, or whether there’s some other alternate format that you want to use. We’re welcoming to all those ideas. I don’t think that’s where the interesting part of orchestration is. I know that it’s important to users; it’s very important [with regard to] how these technologies work, and how they fit within their enterprise. I don’t think it changes how we orchestrate.”
Second Stage Take-off
For some time now, CoreOS has been producing Tectonic, a commercial Kubernetes implementation. So Google certainly has been no stranger to CoreOS, though, in recent months, it may have actually been getting closer. While Kubernetes contributors in earlier months downplayed the need for plug-ins in their own environment, more recently, the orchestrator has warmed up to the use of Container Networking Interface (CNI) plug-ins, using the specification derived from CoreOS’ rkt. This has led to Kubernetes being able to adopt rkt as what Hockin describes as a “first-class container system” in the orchestrator’s sphere of influence.
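A CNI network is described by a small JSON configuration file dropped into a directory the kubelet watches, such as /etc/cni/net.d; the network name and subnet below are made up for illustration:

```json
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The orchestrator simply invokes whichever plugin binary the `type` field names to attach a pod to the network; the same configuration works whether the runtime underneath is Docker or rkt.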
In a CoreOS blog post last April, rkt contributor Derek Gonyeo explained: “Using a specialized stage1 image, rkt controls the download, cryptographic verification, and execution of the kubelet node manager on each CoreOS cluster member. This decouples the cluster orchestration software layer both from the underlying operating system and the containerized application layer, allowing easier and more frequent updates to all of the components, increasing agility and security for DevOps teams.”
“It really proves that we can do more than one,” Google’s Hockin told us. “I was a little bit worried that Docker had its roots too deeply in our code base, but we were able to extract them — amazing work there.”
What the open source nature of modern architectural development has proven, beyond a shadow of a doubt, is that developing a system just to sustain itself and its own brand no longer pays off. If a better way to accomplish something is feasible — even if it has not yet been proven in a production environment — then open source developers will deconstruct and deprecate what’s no longer relevant to their goals, to advance the greater objectives. And this is how Google is framing the evolution of Kubernetes; it is evolution as Google wants us to perceive it.
But as Oracle has proven so often in its history, long-entrenched methods are the ones with the best chance of survival in enterprise data centers. Now that there is clearly more than one way to name the proverbial pet, to borrow Google’s metaphor, it will be up to the Kubernetes competitors to dig trenches deep enough to survive this next round of abstraction, spun up by an absolute master of the form.
Title image: “The Gulf Stream” by Winslow Homer, from New York’s Metropolitan Museum of Art, in the public domain.