Why use a whole Kubernetes cluster if that’s more than you need?
K8Spin, an open source project that brings multitenancy to Kubernetes, gives companies a way to parcel out resources on a cluster to different departments or teams.
“This whole idea started about creating a service that allows you to share the Kubernetes cluster between many, many people. And each of them have a small piece of this cluster,” said Angel Barrera, Kubernetes engineer at SIGHUP. Barrera created K8Spin with Pau Rosello, solution engineer at managed Kubernetes provider Giant Swarm.
“Basically we wanted to avoid proprietary interfaces, like many of the service providers out there. And we wanted people that already know about Kubernetes to be able to host their small applications without really caring about the whole cluster,” he said.
Going Open Source
The two freelancers based in Spain initially offered K8Spin as software as a service, but more recently closed that service and instead made it an open source project, with an eye toward eventually becoming a Cloud Native Computing Foundation (CNCF) project.
Kubernetes wasn’t designed to be multitenant, they say. It can be accomplished, but there are many places where you need to change or configure Kubernetes to allow multiple people to share the same cluster.
“But we didn’t want to modify the code of Kubernetes. What we basically wanted to do [was build] some service on top that is automatically going to configure all these objects on top of [Kubernetes], like limit ranges and network policies,” Barrera said.
Using the K8Spin Operator, administrators can set boundaries for resources such as CPU and RAM at three levels: Organizations, Tenants and Spaces. A cluster administrator manages the cluster for the overall organization, setting resource limits and assigning roles and privileges. The Tenant administrator does likewise for that group, which could be a team or department. The Tenant also hosts Spaces, an abstraction layer on top of a Namespace, with their own quotas and roles.
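The hierarchy might look something like the following sketch. The resource kinds and API group reflect the K8Spin project's custom resources, but the exact field names here are illustrative; consult the project's repository for the authoritative schema.

```yaml
# Illustrative sketch of K8Spin's three-level hierarchy.
# Names and resource figures are hypothetical examples.
apiVersion: k8spin.cloud/v1
kind: Organization
metadata:
  name: acme
spec:
  resources:        # cluster-wide cap for the whole organization
    cpu: "16"
    memory: 32Gi
---
apiVersion: k8spin.cloud/v1
kind: Tenant
metadata:
  name: payments-team
spec:
  resources:        # a slice of the organization's allocation
    cpu: "4"
    memory: 8Gi
---
apiVersion: k8spin.cloud/v1
kind: Space
metadata:
  name: dev
spec:
  resources:        # a slice of the tenant's allocation,
    cpu: "1"        # backed by a dedicated Namespace
    memory: 2Gi
```

Each level subdivides the quota of the level above it, so a Tenant can never consume more than its Organization grants, and a Space can never consume more than its Tenant grants.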
Every user has a Space completely isolated from other users, with a cap set for available resources, preventing any user from hogging resources allocated for someone else.
K8Spin manages all the underlying components of a Namespace, such as master node configuration, internal and external SSL/TLS certificates, load balancers, etc. To do so, it harnesses Kubernetes mechanisms such as Network Policies, ResourceQuota and LimitRange. For added security and isolation, it relies on gVisor, the container sandbox written in Go developed by Google. It includes an Open Container Initiative runtime called runsc that provides a boundary between the application and the host Linux kernel.
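The building blocks K8Spin configures on a user's behalf are standard Kubernetes objects. A minimal sketch of what might be applied to a Space's Namespace (the namespace name and resource figures here are hypothetical) looks like this:

```yaml
# Hard cap on aggregate resource consumption in the Space's namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: space-quota
  namespace: payments-team-dev
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 2Gi
    limits.cpu: "2"
    limits.memory: 4Gi
---
# Default requests/limits for containers that don't declare their own,
# so no single pod can silently claim the whole quota
apiVersion: v1
kind: LimitRange
metadata:
  name: space-defaults
  namespace: payments-team-dev
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
---
# RuntimeClass that routes pods to gVisor's runsc runtime
# (requires runsc to be installed on the nodes)
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
```

A pod opts into the sandbox by setting `runtimeClassName: gvisor` in its spec; the ResourceQuota and LimitRange apply automatically to everything in the namespace.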
“The technology that we use on top of Kubernetes, it’s not something that we have invented. Right? The only nice part about K8Spin is that it’s going to manage all of this for you,” Barrera said.
Added Rosello: “What we want to provide is a super-easy way to provide a configured environment or auditable environment — to define the environment, say, for the same company, or even different companies. We do know how to do it manually. It’s a pain because sometimes you forget something and things break. So to provide a super-easy way to properly configure your teams, your job, an easy experience for developers.”
Different Teams, Environments
It also has multitenant proxies for Prometheus and Loki and a multitenant operator for Grafana. Earlier this year it integrated with oneinfra, which provides a one-click Kubernetes control plane, and the two projects continue to collaborate.
“K8Spin is a super powerful open source solution to enable multitenancy in a Kubernetes cluster. It gives any organization the ability to easily compartmentalize a given cluster — from RBAC rules, to quotas, going through network policies — exposed with API types that are really easy to set up and operate,” said Rafael Fernández López, software architect at SUSE and creator of oneinfra.
Barrera said K8Spin is not yet production-ready, but it has customers trying it out and providing feedback.
It’s not the only open source project offering multitenancy on a Kubernetes cluster; several others tackle the same problem.
“I would say our solution can be maybe the simplest one in the sense that other solutions try to do other things … they try to also manage deployment of applications and other things. Based on my experience, and also the customers that we have been talking about, they already have their own way of deploying things,” said Barrera.
“Usually [what] they are missing is how to isolate different workloads. For example, there’s many people that’s running the same service over and over for multiple clients, multiple customers … connected on the same network inside Kubernetes. And now they need to isolate one from the other. So they just need that small piece. And this is what we try to provide. We don’t try to like redo the whole Kubernetes experience.”
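That kind of network isolation between tenants is typically expressed with standard NetworkPolicy objects: deny all traffic by default, then allow only pods within the same namespace to talk to each other. A minimal sketch (the namespace name is a hypothetical example):

```yaml
# Deny all ingress and egress for every pod in the tenant's namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: customer-a
spec:
  podSelector: {}       # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
# Re-allow traffic between pods within the same namespace only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: customer-a
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
```

Policies are additive, so the second object carves an allowance out of the first; enforcement requires a CNI plugin that supports NetworkPolicy, such as Calico or Cilium.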
Some companies have had to build some kind of multitenancy in-house for Kubernetes, but they don’t really want to do that, Barrera said. They’d rather use something from the community that has been tested.
Going forward, among the features the team would like to implement is per-minute billing of individual resources — not in the sense of money, Barrera said, but applying analytics so that a team with Spaces for different environments, say test, QA and prod, gets more visibility into its usage of each.
“I think we have lots of ideas still to implement here, but in the end, we are also going to listen” to users, Barrera said. “I already know what some of these companies are doing on this space, [so] we have a few ideas from where our project can fit.”
Some of these companies bring something totally new to the table.
“That was a little bit eye-opening in a sense that this can be used for many, many other things. Like for example, they wanted to test and … didn’t want to impact other tests they’re running in the same cluster. So it has lots of different uses that we are not even thinking of right now.”
The Cloud Native Computing Foundation is a sponsor of The New Stack.