Kubernetes Federation in a Post-Configuration Management Universe

When containerization was young, one of its early principles was the ideal of immutable infrastructure: the support structure built for a container should be flexible enough to meet that container’s needs over its lifespan, however short, yet remain a fixed asset throughout that duration.
It spoke to the possible, complete outmoding of one of IT’s most critical functions — configuration management — a skill upon which enterprises dearly depend today. While the leading vendors in the space began discussing evolutionary adaptations, practitioners rallied together, forming a kind of global support group to keep hope alive through the evolutionary maelstrom.
At a Thursday morning session at KubeCon Europe 2017 in Berlin, Kelsey Hightower — a contributor to the Cloud Native Computing Foundation and a developer advocate for Google Cloud Platform — reminded attendees of containerization’s original goals, noting that the ideal used to be focused on distributed systems rather than configured entities.
The Get-Together
Hightower’s topic was cluster federation: the pooling together of multiple clusters. It’s a feature of CNCF’s Kubernetes container orchestration engine, introduced last year, that enables not only the even distribution of workload pods (collections of containers) across clusters, but also the management of those pods across cloud providers and local cloud platforms.
“It’s one of these features that confuses the hell out of everyone,” said Hightower. “Every once in a while, a new technology shows up, and everyone gets super-excited. When federation was announced, we called it ‘übernetes.’ Everyone lost their minds. They were like, ‘Ah! We’re gonna have true hybrid cloud!’
“How many people still use that term, ‘hybrid cloud?’” he continued. “It means absolutely nothing. There’s no such thing as hybrid cloud. Either your cluster is in two places or one place. That’s all it is; there’s no hybrid cloud for this.”
Hightower went on to characterize cluster federation as one layer higher than node federation, which is essentially what Kubernetes has always done, and what it does every day. For a four-node cluster, he said, any developer or operator would naturally want secure access, visibility into what’s running, service discovery, and, last but certainly not least, resource management.
@kelseyhightower on #kubernetes cluster federation "keep it simple, make it useful" #KubeCon https://t.co/RmSOgXo6Ft
— Wendy Cartee (@Wendy_Cartee) March 30, 2017
“Before we got into this world of Kubernetes and distributed systems, most people, in the real world, this is what they did: configuration management,” he told attendees. “We took all of those machines, and we tried to normalize them in some way.” To define the operations of a cluster, he said, essentially meant constructing the configuration of each node, and then pooling together nodes that were configured identically.
“So when you evaluate what we got from configuration management systems, I think we fell short,” he went on.
Node federation, made possible through APIs, makes it feasible for Kubernetes to address multiple nodes as though they were a single machine. “If you believe that node federation, on a tool like Kubernetes, should exist,” Hightower argued, “then I think the same situation will apply when you have multiple clusters. Cluster federation, if you keep it simple, might actually solve a problem.
“So what problem can I solve? We introduce a second cluster. It’s like, ‘I don’t need node federation for that!’ So you can break out your bash scripts, do for loops over all your API servers. Where are you going to store all the state? You can break out a spreadsheet, and decide what objects belong in what cluster. And over time, you’re going to invent some configuration management-like thing for Kubernetes clusters. It’s inevitable. You’re just going to repeat what you did for multiple machines, for multiple clusters.
“I say, we skip that part. It’s going to be painful. You know what the result of configuration management was? DevOps. Good therapy for inefficient tools. I don’t want to know what DevOps would look like if people went to multiple clusters.”
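What that reinvented configuration management might look like is easy to imagine. Here is a hypothetical sketch of the pattern Hightower lampooned: a bash loop over every cluster’s API server, with a hand-maintained file standing in for the spreadsheet (all file and context names here are illustrative):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the anti-pattern: loop over each cluster's
# kubeconfig context and apply whatever the "spreadsheet" says belongs there.
# clusters.txt (hand-maintained): <kubeconfig-context> <manifest-file>
while read -r context manifest; do
  kubectl --context="${context}" apply -f "${manifest}"
done < clusters.txt
```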
Thor’s Hammer
Vendors in the tools business portray DevOps as an inevitable development, a social evolution that organizations must undergo lest they perish at the hands of competitors who evolved faster. But Hightower has picked up a neglected gauntlet and thrown it right back down, forcing us to remember where we were when we started this journey weeks, even months, ago. If simplification is the goal, then indeed it may be pointless for communities to rally around complexities just to keep hope alive for the old skills.
For situations where multiple clusters co-exist, Hightower proposed a scenario where another API server is assigned the role of speaking for the set of clusters as a whole. Such a co-existence requires standardization, lest each cluster end up speaking its own language, and we end up building barriers rather than bridges. He warned against baking the API into the server, which he said was a mistake Kubernetes made once before, one that forced developers to recompile code every time the API changed.
“We should avoid that mistake with federation,” he said. “Let’s just treat federation like a special client of multiple clusters. And if we do that, we can keep it simple.”
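In the federation feature as Kubernetes shipped it at the time, that special client took the form of a separate federation API server and controller manager, bootstrapped with the kubefed command-line tool. A minimal sketch of that workflow might look like the following, with the federation, context, and cluster names all illustrative:

```bash
# Bootstrap a federation control plane inside an existing host cluster.
kubefed init myfed \
  --host-cluster-context=host-cluster \
  --dns-provider=google-clouddns \
  --dns-zone-name="example.com."

# Register two member clusters with the federation.
kubefed join cluster-europe --host-cluster-context=host-cluster
kubefed join cluster-us --host-cluster-context=host-cluster

# The federation API server is now just another kubectl context:
# a client sitting above the member clusters, not baked into them.
kubectl --context=myfed get clusters
```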
One example of an application that requires the use of multiple clusters would be a web application that requires components to run in two cloud availability zones — perhaps each one close to its own specific user. Another may involve a single cluster whose provider’s SLA limits the maximum number of nodes to which the application may scale, forcing it to seek space on a cluster elsewhere.
“You’re going to have to run multiple clusters at some point,” Hightower predicted. “And we do recommend running one cluster per availability zone, versus trying to stretch your node pool across multiple availability zones.”
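Under the federation control plane of that era, spreading a workload this way looked roughly like the sketch below: a ReplicaSet submitted to the federation context carries an annotation telling the federation controller how to divide replicas among member clusters. The cluster names and weights here are illustrative:

```bash
# Submit a ReplicaSet through the federation API server; the annotation
# asks the federation controller to split replicas across member
# clusters rather than scheduling them all into one node pool.
kubectl --context=myfed create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx
  annotations:
    federation.kubernetes.io/replica-set-preferences: |
      {"rebalance": true,
       "clusters": {
         "cluster-europe": {"weight": 1},
         "cluster-us": {"weight": 1}}}
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11
EOF
```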
He did warn that the concept of joining multiple clusters into a single federation could set developers up for “a great, epic post-mortem” — the kind that certain journalists write headlines about. The concept may be simple, but it is certainly not risk-free.
It could get tied up in esoterics pretty fast, however. Doing away with configuration management was easy when it appeared containers would always be stateless by design. Now, containerized applications deal with persistent volumes and stateful conditions, especially those that use etcd for their stateful stores (a project with which Kelsey Hightower has personal experience). What happens when a system needs to federate stateful applications across clusters?
He proceeded to address his own question by presenting a demo of cluster federation in action, which involved the creation of a global NGINX load balancer — essentially the node federation model, scaled up.
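The published federation walkthroughs of the period suggest the general shape of such a demo: a Service created through the federation API server is propagated to every member cluster, and the federation’s DNS controller publishes a global record pointing clients at the nearest healthy endpoint. A hedged sketch, continuing the illustrative names above:

```bash
# Expose the federated nginx ReplicaSet; the resulting Service object
# is propagated to every member cluster automatically.
kubectl --context=myfed expose replicaset nginx \
  --port=80 --target-port=80 --type=LoadBalancer

# Each member cluster provisions its own cloud load balancer ...
kubectl --context=cluster-europe get service nginx
kubectl --context=cluster-us get service nginx

# ... while the federated DNS record (name format illustrative) gives
# clients a single global entry point:
#   nginx.default.myfed.svc.example.com
```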
@kelseyhightower : federation solves the problem of multiple clusters, not of differences between clouds #kubecon
— Ben Vickers (@bvkrs) March 30, 2017
But even the demo was not a complete response to that question. There will be issues of context, and of resolving discrepancies in real time. Yet his point was that a configuration management approach — the application of scripts and monitoring tools to this task — would lead organizations in the wrong direction. And at this larger scale, the implications of such a direction might be more dire than for nodes within a single cluster.
“There’s a discussion required,” said Hightower. “What’s missing from this discussion is more people who have hands-on experience with federation. There’s a repository that can help you get started with federation, but we need more feedback before we make crazy decisions on that last call. Because it would be bad for them.”
The Cloud Native Computing Foundation is a sponsor of The New Stack.
Feature image: Kelsey Hightower speaking at KubeCon, taken by Dieter Reuter, a senior consultant with Germany-based DevOps firm Bee42.