
Commoditizing Kubernetes with the Cluster API

14 Jul 2020 12:00pm, by Gianluca Arbezzano

Packet sponsored this post.

Gianluca is a Senior Staff Software Engineer at Packet. He is a full-stack developer, a PHP developer, an open source contributor and a member of the Doctrine ORM team.

Kubernetes wasn’t invented as a cute, new pet for your home. Think of it more like a cow.

We all know the difference between cows and pets, right? Cows usually don’t have names (I’m afraid I have to disagree with that idea, but let’s go with it for this example), while pets are lovely animals that you treat like family members.

This is, of course, the well-known “pets vs. cattle” metaphor, a frame of reference for how we should look at containers and cloud computing: you don’t need your own pool of servers that you know everything about, down to their hostnames.

Cloud computing doesn’t care about your VMs. It cares even less about your containers, because they come and go. Kubernetes automatically kills a replica of your application if it’s a troublemaker. For example, you deploy a new version of your application that has a memory leak, and it starts to use all the memory you assigned to the container. The container runs out of memory and Kubernetes restarts the application. The app comes back with a different hostname and there is nothing you can do about it. Sorry, that’s just how it works.

As anyone who has ever managed servers knows, the reality is that we’ve been treating servers like pets because we did not have the right technology to see them differently.

Let’s switch to another metaphor. Gravity is what holds our solar system together. It keeps everything in place and on the ground here on Earth. We need it. But if you were to increase gravity, things would become much heavier: it would be harder for us to walk and harder to move anything. At some point, gravity becomes so excessive that it crushes everything. Say we want things lighter and easier to move. To solve that, we could go into a simulator, or build a rocket and go to the moon.

In tech, it’s not that hard. Gravity builds and builds as applications and infrastructure grow. We add more technologies, we add more components, we add more data, and next thing you know our once responsive application or ecosystem is now weighed down and sluggish.

Assuming we’re not going to throw everything out and move environments to lower the gravity — metaphorically blasting our work to the moon — what can we do? Fortunately, all you need to bring gravity back to an acceptable level is the right tooling.

Until recently, the experience and tools required to treat servers and applications like livestock were limited to hyperscale companies, but that’s no longer the case. We now have more options available.

The story repeats itself with Kubernetes today. I see a lot of companies with well-known, consistently named Kubernetes clusters. They are stable as a rock, because the technology is new and they have only just adopted it. The knowledge and tooling required to make it a commodity haven’t been developed yet, or aren’t easy to adopt.

I work at Packet, and I can tell you that gravity in tech is 100% based on external factors. It’s not like physics, where you can describe it with a single number. (In case you’re wondering, gravitational acceleration on Earth is 9.81 meters per second squared.)

In tech, gravity changes based on where you are, what you know and what you care about. For some companies, data has gravity because they haven’t figured out a good pipeline that allows them to move it safely, quickly and cost-effectively.

Take data centers, for example. Depending on how well your company can build a supply chain, and how effectively it cables racks or manages OS updates, the gravity of owning servers feels different.

If you add hybrid clouds to this equation, gravity drops even further, because you have a way to hand part of the burden to somebody else, who will manage it for you when things get overly complicated (outages, for example).

The Cluster API in Kubernetes

The tools and operational experience around Kubernetes are improving. Big players use it effectively, and so do smaller companies with the right knowledge, which is proof of its efficiency at any scale. But a lot of companies still treat their fleet of clusters as pets. I’ve explained why this happens; now I’ll explain why the Cluster API can bring companies that manage Kubernetes to the moon.

I recently had the opportunity to write a Cluster API provider for Packet. This is how the Kubernetes community that built the Cluster API describes it:

“The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.”

You can see it as a way to kubectl apply an entire cluster, whether a single-node cluster or a multi-AZ one with multiple control plane nodes.
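To make that concrete, here is a sketch of what such a manifest can look like. The cluster name is made up, and the field names track the v1alpha3 schema current at the time of writing; check the cluster-api documentation for the exact shape:

```yaml
# Illustrative only: a cluster declared like any other Kubernetes
# resource. The infrastructureRef points at the provider-specific
# object (here Packet's) that makes the cluster concrete.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: hello-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: PacketCluster
    name: hello-cluster
```

Applying it is a plain `kubectl apply -f cluster.yaml` against a management cluster (more on that below).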

It uses the same building blocks as every other resource developed with Kubernetes: reconciliation loops and controllers.

Everything starts from a Kubernetes cluster called the “management cluster.” It has to be up and running, and it is the parent of all the other clusters you will create. This is meta, but this cluster allows you to create other clusters via kubectl apply, as you do for any other resource, like a pod, service, ingress or deployment.

From this cluster, you apply manifests that describe and create new clusters. The management cluster runs many custom resource definitions provided by the cluster-api project: Machine, MachineSet, MachineDeployment, KubeadmBootstrap, and many more. Some of them are new concepts. Others come from ideas we already know — pod, replicaset, and deployment — but applied to machines (servers).
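As an example, here is a hedged sketch of a MachineDeployment, which manages servers the way a Deployment manages pods. The names, templates and Kubernetes version are hypothetical:

```yaml
# Illustrative MachineDeployment: each replica is a server, not a pod.
# Names, template references and the version are hypothetical.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: hello-cluster-workers
spec:
  clusterName: hello-cluster
  replicas: 3
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: hello-cluster
      version: v1.18.5
      bootstrap:
        configRef:          # how each node joins the cluster (kubeadm)
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: hello-cluster-workers
      infrastructureRef:    # the provider-specific machine template
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: PacketMachineTemplate
        name: hello-cluster-workers
```

Scaling workers up or down is then just a matter of editing `replicas`, exactly as with a Deployment.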

Servers can run everywhere; every infrastructure provider (vSphere, Packet, AWS, GCP) has its own cluster-api-provider implementation. In practice, it turns the “meta” object (Machine, MachineSet, MachineDeployment) into something more concrete, usually called AWSMachine, PacketMachine and so on, based on your cloud provider of choice. It has properties that we all know, such as the operating system, AMI, instance_type, and so on.
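A sketch of what that concrete object can look like for Packet (the field names are indicative only; the provider’s own documentation is authoritative):

```yaml
# Illustrative PacketMachine: the generic Machine made concrete with
# infrastructure details such as OS image, server type and facility.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: PacketMachine
metadata:
  name: hello-cluster-worker-0
spec:
  os: ubuntu_18_04
  machineType: c1.small.x86
  facility:
    - ewr1
```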

I am staying a bit “meta” on purpose, because it is not the goal of this article to explain how cluster-api is implemented or how cluster-api-provider works; those topics deserve their own articles.

The cluster-api-provider takes the concrete PacketMachine or AWSMachine and starts a controller that runs continuous reconciliation loops until the cluster is provisioned. This is the same way the creation of a pod works.
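If you have never written a controller, the core of a reconciliation loop fits in a few lines. This is a deliberately toy sketch in Go; the types and the string “actions” are invented for illustration, and a real provider would call the infrastructure API instead:

```go
package main

import "fmt"

// MachineSpec is the desired state; MachineStatus is what actually
// exists in the infrastructure. Both are toy types for illustration.
type MachineSpec struct{ Replicas int }
type MachineStatus struct{ ReadyReplicas int }

// reconcile compares desired vs. observed state and decides the next
// action. A real cluster-api provider would call the cloud API here
// (create or delete a server) instead of returning a string.
func reconcile(spec MachineSpec, status MachineStatus) string {
	switch {
	case status.ReadyReplicas < spec.Replicas:
		return "create machine"
	case status.ReadyReplicas > spec.Replicas:
		return "delete machine"
	default:
		return "in sync"
	}
}

func main() {
	// The controller re-runs the loop until observed state matches spec.
	spec := MachineSpec{Replicas: 3}
	status := MachineStatus{ReadyReplicas: 1}
	for reconcile(spec, status) == "create machine" {
		fmt.Println("provisioning one more machine...")
		status.ReadyReplicas++ // pretend the cloud API call succeeded
	}
	fmt.Println(reconcile(spec, status)) // prints "in sync"
}
```

The point of the pattern is that the loop is level-triggered: it never remembers what it did before, it only compares desired and observed state and converges.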

The cluster-api documentation uses a lot of diagrams to describe the complex set of steps and processes in place to guarantee similar behavior across different providers. As I said, the cluster-api is a framework; every infrastructure provider has to glue it to its own infrastructure API.

This is the very definition of a commodity: something that you can replace quickly and find anywhere, maybe with a different flavor but still as useful.

The Cluster API implementation for Packet is still a work in progress, but it is at a point where you can try it out. Check it out, let me know on Twitter @gianarb and share your pain with your favorite Gophernetes!


