Loft Labs sponsored this post.
Everyone who’s tried running Kubernetes with more than a few clusters knows it can quickly become expensive. Every cluster adds cost, and some of that cost is fixed no matter how the clusters are set up. The most obvious example is the control plane, but there are always a few more: every cluster needs its own API server, which does no application work itself yet still takes resources from the existing pool. Or maybe you need a separate load balancer for each cluster instead of sharing a single one between applications.
In this article, we’ll explore this topic in more depth, along with how the extra cost of multiple clusters can be reduced or eliminated by using virtual clusters.
What Are Virtual Clusters?
In short, virtual clusters are to Kubernetes what virtual machines (VMs) are to bare metal hosts: within one cluster, you can create new virtual ones. As with VMs, you get nearly all the functionality of a dedicated cluster, with a few limitations.
Before diving into the limitations, let’s first look at what a virtual cluster is and how it works. To start, you need to understand why you would use virtual clusters in the first place. While the reasons differ from organization to organization, some are common.
A major reason for using virtual clusters is if you are already running a lot of small clusters. Many organizations use clusters to improve the developer experience. Rather than keeping Kubernetes locked away as a black box, companies are exposing developers to Kubernetes directly, both as a way of increasing their comfort with the technology and to increase developer velocity, since developers then know exactly how their applications will run.
When it comes to development, a cluster is as personal as a developer’s machine. You never know what your colleague is doing or testing, so you want to make sure that whatever they’re doing doesn’t affect you. This is a classic example of using many small clusters, but it’s also an example of how cost is being driven up by good developer experience.
Virtual clusters are a way to keep developer experience and velocity high while keeping costs low, but more on that in the next section. Now that you understand why you might need them, it’s time to understand how they work. Below, you can see an overview of how the popular tool vcluster has implemented virtual clusters. An explanation follows below the diagram.
Looking at the bottom of the diagram, you can see the Host Cluster. This is the cluster that’s running in EKS, GKE, AKS, or wherever else you run Kubernetes. It’s a standard cluster. On top of this, you have the kube-system namespace. Again, this is completely standard, and so far there’s nothing virtual. The virtual part comes when you move a step up and you see the ns-2 namespace. Namespaces like these live inside the host cluster.
When you create a virtual cluster, you either use an existing namespace or create a new one. Typically, you are creating a new one. This namespace will then contain a few pods. These pods then contain a “new” cluster you can connect to. This “new” cluster then has its own API server, meaning you can interact with it as its own cluster.
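From the host cluster’s point of view, this “new” cluster is just pods in a namespace. A quick sketch of what that looks like (the namespace name here is a made-up example, not from the article):

```shell
# List the pods backing a virtual cluster from the host cluster's side.
# "host-namespace-1" is a hypothetical namespace holding one virtual cluster.
kubectl get pods -n host-namespace-1
```

The pods you see there contain the virtual cluster’s own API server, which is what lets you treat it as a cluster in its own right.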
When you want to use the virtual cluster, it’s as simple as running a vcluster connect command, which will result in two things. It’ll start port forwarding to the port of the API server inside your virtual cluster, and it will create a kubeconfig.yaml file. You can use this file with kubectl to execute commands inside your new virtual cluster. (Later in this article, you’ll get a quick-start guide on how to set up vcluster for yourself, so no need to worry about that right now.)
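To make the two effects of connecting concrete, here is a minimal sketch. The cluster and namespace names are illustrative placeholders:

```shell
# Connect to a virtual cluster: this starts a port-forward to the virtual
# API server and writes a kubeconfig.yaml into the current directory.
vcluster connect my-vcluster -n my-namespace

# In another terminal, point kubectl at the generated kubeconfig to run
# commands inside the virtual cluster rather than the host cluster.
kubectl --kubeconfig ./kubeconfig.yaml get namespaces
```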
To keep this article focused on cost, we won’t go deeper into the internals here; you can read more about the details of virtual clusters in the vcluster documentation.
Saving Costs With Virtual Clusters
Saving cost with virtual clusters comes from a multitude of factors. Mostly, it comes from the capabilities virtual clusters already provide, not from a specialized focus on keeping costs down.
First of all, you are going to save the cost of the control plane. How much depends on how many clusters you are running: on GKE, for example, you save roughly $73 per month for every cluster you replace. On top of this, you also save money on resources that previously had to be separate and can now be shared. A load balancer, for instance, can be shared instead of being paid for once per cluster.
The second cost-saving benefit comes from the ability to dynamically scale your Kubernetes clusters. Autoscaling in Kubernetes is by no means a new thing; in fact it’s one of the biggest selling points of using Kubernetes. However, autoscaling the number of actual clusters in use is not something that’s native to Kubernetes. With virtual clusters, you can spin up and dispose of clusters within seconds, allowing each developer to have multiple clusters or none, depending on what’s needed at any point in time.
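The spin-up-and-dispose lifecycle described above maps to two CLI commands. A sketch, with a made-up cluster and namespace name:

```shell
# Spin up a throwaway virtual cluster for a test run...
vcluster create test-env -n test-env

# ...and dispose of it when the test is done. Only this namespace's pods
# go away; the host cluster and every other virtual cluster keep running.
vcluster delete test-env -n test-env
```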
Saving costs by shutting down an unused cluster can be effective, but it can also be tough to manage. Especially if it’s meant to be very dynamic, like shutting it down when a developer goes home and spinning it up when they get back into the office the next day. While possible, there are a few issues with this. First of all, it can be annoying. Second, there will no doubt be times when developers forget to shut down unused clusters. A developer might get distracted by a bug or simply forget this step in their routine when they go home.
That’s not to say the principle can’t be used effectively, though. With Loft’s sleep mode, your clusters can be put to sleep automatically after a period of no use. This way, you can save up to 76% of your Kubernetes spending, assuming a developer works a normal 40-hour workweek.
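The 76% figure follows from simple arithmetic: a 40-hour workweek leaves the cluster idle for 128 of the week’s 168 hours. In shell:

```shell
# Hours in a week: 168. Hours a developer actively uses the cluster: 40.
# The share of the week the cluster could sleep, as a whole percentage:
echo $(( (168 - 40) * 100 / 168 ))
# prints 76
```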
If you want to see more about how virtual clusters work and what benefits they can provide, you can check out the official vcluster website.
Setting Up Virtual Clusters
So you’ve come to the realization that virtual clusters make sense for you and your organization. How do you proceed from here? What follows is a quick-start guide; if you want more detailed instructions, take a look at the official documentation. Getting vcluster set up really is as easy as shown here.
The first thing you need to do is to download the vcluster CLI:
$ curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | \
sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!https://github.com\1!p' | \
xargs -n 1 curl -L -o vcluster && chmod +x vcluster && \
sudo mv vcluster /usr/local/bin
Once the CLI is installed, you can create a virtual cluster using the vcluster create <vcluster-name> -n <host-namespace> syntax. Like so:
$ vcluster create vcluster-1 -n host-namespace-1
Now you’ve got your own virtual cluster, which you can connect to by running vcluster connect vcluster-1 -n host-namespace-1. No more work is needed to get started; at this point, you are working with your newly created cluster.
Now you know more about virtual clusters in general, how you can use vcluster to implement them, and how they help you with cost. By consolidating all your small clusters into one big “host” cluster, you save the price of every control plane. On top of this, you save even more since resources are now shared across the board rather than being spread out.
Combine the above with Loft’s sleep mode, and you could potentially be saving over two-thirds of your current Kubernetes cost.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.
Photo by Azim Islam from Pexels.