Vcluster to the Rescue

The hype has kicked in and you have finally created a Kubernetes cluster on your favorite cloud provider. A couple of developers have started using it, but everyone is deploying to the default namespace and it is starting to get cluttered. Then the first request comes in from a developer who wants their own cluster. That is usually how the sprawl begins.
Before you know it, you are spending large amounts of money on clusters, storage and other services offered by the provider. When you get large enough, you may start hitting the soft limits used by cloud providers to keep accounts from growing too large. There has to be a better way.
Is There a Solution that Works for Everyone?
A few years ago, I saw this first-hand. The cost was becoming enormous and the ability to manage the cluster life cycle across hundreds of clusters was basically impossible. Everyone was working on a different version of Kubernetes, and there were at least five clusters named “my-cluster” created every week. We had production-level spend on development clusters that were primarily used to support open source projects.
Our team had discussions about what we could do. Should we create a large cluster and place everyone in their own namespace? Should we have a cluster per team and let them manage it? Should we recommend using Kind and have everyone run their testing locally? We ended up doing a mix, but it didn’t really fix the problem. This was 2020, and we didn’t have some of the tools available now.
Vcluster to the Rescue
Enter vcluster, which makes multitenancy a lot easier. Our developers want to feel like they have their own cluster, while our platform engineers want to manage as few clusters as possible. There will be a namespace per developer or team, and they will have the ability to deploy virtual clusters, which appear to them as their own cluster. Now, even when everyone deploys to the default namespace of their own virtual cluster, there won't be any overlap.
Getting Started and Requirements
In this article, we are going to turn a Linux server into a K3s-based Kubernetes cluster, install vcluster and have a working development environment that can be used for testing. Most of the concepts we discuss will translate to cloud providers. In fact, most of the cloud providers make many of the steps easier by providing you with an ingress controller and load balancer service by default.
Our use case will be standing up K3s on a single node with an NGINX ingress controller and cert-manager. We might not have a public IP address to associate with our cluster, but we want to start testing internally and using ingress and certificates so we can move to the cloud. We will end up using self-signed certificates for testing and will use a hosts file for DNS. This could easily be expanded to internal DNS and a certificate authority.
There are a few basic requirements:
- The ability to run K3s (https://docs.k3s.io/installation/requirements)
- kubectl is installed on your machine (https://kubernetes.io/docs/tasks/tools/)
Virtual Cluster Architecture
NOTE: If you already have a working Kubernetes cluster with ingress and certificate management, then skip ahead to the next vcluster section in this post.
Install
K3s
The K3s installation quick-start guide is a great place to start. For our installation, we will modify the install script so we can disable Traefik. We are going to install an NGINX ingress controller instead, which will make it easier to follow the examples in our documentation.
Start the installation with the command below on the server or VM where you want to run K3s:
curl -sfL https://get.k3s.io | sh -s - --disable traefik
The output should look something like this:
```
$ curl -sfL https://get.k3s.io | sh -s - --disable traefik
[INFO]  Finding release for channel stable
[INFO]  Using v1.25.6+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/sha256sum-amd64.txt
[INFO]  Skipping binary downloaded, installed k3s matches hash
[INFO]  Skipping installation of SELinux RPM
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, already exists
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
```
The kubeconfig information can be found on the server where K3s was installed:
/etc/rancher/k3s/k3s.yaml
Copy this information and set it as your current config for kubectl. If you aren't using other clusters, you can copy and paste it into the config file in your home directory on your local machine: `~/.kube/config`
By default, the server may be listed as `server: https://127.0.0.1:6443`, which will need to be updated to the IP address or hostname of the node where K3s was installed. In our demo, it will look something like `server: https://k3s.domain.com:6443`, with k3s.domain.com (replace domain.com with your internal domain) pointing to 192.168.86.9. (Your IP will be different.)
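If you want to script this step, a minimal sketch might look like the following. It assumes SSH access to the K3s node as a user that can read the kubeconfig, and that k3s.domain.com resolves to the node; adjust the user, host and domain to your environment:

```
# Copy the kubeconfig from the K3s node to your local machine
scp root@k3s.domain.com:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Replace the loopback address with the node's hostname
# (on macOS, use: sed -i '' 's|...|...|' ~/.kube/config)
sed -i 's|https://127.0.0.1:6443|https://k3s.domain.com:6443|' ~/.kube/config

# Verify connectivity
kubectl get nodes
```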
The great thing about K3s is that it deploys with the ability to create the load balancer service. This will make configuring ingress a lot easier, and we can share the same IP address across multiple hostnames.
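Once a LoadBalancer service exists (for example, after the NGINX install below), you can see K3s's built-in ServiceLB at work; it runs svclb-* pods for each LoadBalancer service. A quick way to check:

```
# List the ServiceLB pods created for LoadBalancer services
kubectl get pods -n kube-system | grep svclb
```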
From this point forward we can start running commands on our local machine instead of working from the server.
NGINX
Now that we have a cluster running, we need to add an ingress controller. By default, K3s deploys with Traefik, which we disabled during installation; most of our examples use NGINX.
To install the latest version of NGINX, run the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
NOTE: It’s better to run something like this using Helm or other methods. For this guide, we are trying to get going as fast as possible to test out vcluster and see if it fits our use case.
For additional ways to deploy the NGINX Ingress Controller check out: https://kubernetes.github.io/ingress-nginx/deploy/#quick-start.
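If you do go the Helm route, a minimal sketch of the equivalent install (repo and chart names per the ingress-nginx project) looks like this:

```
# Add the ingress-nginx chart repository and install the controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```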
The resources installed into our cluster will look something like this:
```
$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-pspdd        0/1     Completed   0          163m
pod/ingress-nginx-admission-patch-hgcjc         0/1     Completed   1          163m
pod/ingress-nginx-controller-854d597f86-zdq4n   1/1     Running     0          131m

NAME                                         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
service/ingress-nginx-controller-admission   ClusterIP      10.43.53.165   <none>         443/TCP                      163m
service/ingress-nginx-controller             LoadBalancer   10.43.96.124   192.168.86.9   80:31605/TCP,443:31857/TCP   163m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           163m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-854d597f86   1         1         1       131m
replicaset.apps/ingress-nginx-controller-8574b6d7c9   0         0         0       163m

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           11s        163m
job.batch/ingress-nginx-admission-patch    1/1           12s        163m
```
The most important part of this is the load balancer service. This is what we will use for our DNS records. Everything is going to point to 192.168.86.9 in our examples.
The single record can be pulled with the following command:
```
$ kubectl get service -n ingress-nginx ingress-nginx-controller
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.43.96.124   192.168.86.9   80:31605/TCP,443:31857/TCP   165m
```
There is one additional update we need to make so NGINX will work correctly with vcluster: the controller needs to start with the `--enable-ssl-passthrough` option. To do this, we can edit the deployment:
kubectl edit deploy -n ingress-nginx ingress-nginx-controller
Add `- --enable-ssl-passthrough` to the `args:` section in the container spec:
```
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
    - --election-id=ingress-nginx-leader
    - --controller-class=k8s.io/ingress-nginx
    - --ingress-class=nginx
    - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
    - --validating-webhook=:8443
    - --validating-webhook-certificate=/usr/local/certificates/cert
    - --validating-webhook-key=/usr/local/certificates/key
    - --enable-ssl-passthrough
```
Save and exit. If you are using vi, that would be `:wq`.
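If you would rather avoid an interactive editor, the same change can be applied with a JSON patch. This is a minimal sketch that assumes the controller is the first container in the pod spec:

```
# Append --enable-ssl-passthrough to the controller's args
kubectl patch deployment ingress-nginx-controller -n ingress-nginx \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'
```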
Cert-Manager
We need a way to get certificates created so we can use TLS. We are going to install Cert-Manager via the manifest instead of using Helm for this demo. When you start doing this in production, I would recommend using Helm. Cloud providers may offer a way to get certificates outside of Cert-Manager, so this may not be required based on your provider.
To install the latest version run the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
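For the production path mentioned above, a minimal Helm sketch (repo and chart names per the cert-manager project; the installCRDs flag applies to the 1.x charts) would be:

```
# Add the Jetstack chart repository and install cert-manager with its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```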
Now that the Cert-Manager CRDs have been installed, we need to add one more thing: a ClusterIssuer. A ClusterIssuer is cluster-scoped, which makes it easier to manage when certificates are required across multiple namespaces. When you move to production, you may have requirements to use an Issuer instead; since an Issuer is scoped to a single namespace, it lets you control how certificates are created within each namespace.
```
# cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}
```
kubectl create -f cluster-issuer.yaml
We can verify that the resource was created with:
```
$ kubectl get clusterissuer
NAME                        READY   AGE
selfsigned-cluster-issuer   True    165m
```
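For comparison, a namespace-scoped Issuer for the same self-signed setup would look like the sketch below; the name and namespace are illustrative:

```
# issuer.yaml (namespace-scoped alternative to the ClusterIssuer)
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: my-vcluster
spec:
  selfSigned: {}
```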
vcluster
CLI
Let’s start out by installing the vcluster CLI:
https://www.vcluster.com/docs/getting-started/setup
In the case of my demo, I'm using Apple silicon, so I would run this on my desktop:
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
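On a Linux amd64 machine, the equivalent download would use the linux-amd64 artifact (verify the artifact name for your platform against the vcluster releases page):

```
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
```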
Test out the CLI to make sure it is working:
vcluster --version
Cluster Deployment
Ingress Resource
To start out, we need to create a namespace for vcluster. In our example, we will use my-vcluster, but this is where you will start naming resources based on who is using them: a developer, a team, a project name or other labels you may need so you can better track who is using what.
kubectl create namespace my-vcluster
Now, we should have everything installed that is required for vcluster + ingress.
To start out, we need to configure an ingress resource on the base cluster that will provide a way to get traffic to our vcluster API.
Since we are running on K3s and have installed Cert-Manager, we need to update the ingress.yaml file shown in the guide above. In the example below, we reference the ClusterIssuer created in the Cert-Manager section so we know where to get our TLS certificate.
The domain we are using will need to be updated; in our example, we are using my-vcluster.loft.local. The ingress resource is being deployed to the namespace my-vcluster; if you are using a different namespace, update this value in the YAML and save it. The secretName stores the certificate data, so it can be named whatever you would like.
Here is what we will use on our cluster:
```
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: "selfsigned-cluster-issuer"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: vcluster-ingress
  namespace: my-vcluster
spec:
  tls:
  - hosts:
    - my-vcluster.loft.local
    secretName: vcluster-key
  ingressClassName: nginx
  rules:
  - host: my-vcluster.loft.local
    http:
      paths:
      - backend:
          service:
            name: my-vcluster
            port:
              number: 443
        path: /
        pathType: ImplementationSpecific
```
Create the resource with the following command:
kubectl apply -f ingress.yaml
We can verify the information with the following command and output:
```
$ kubectl get ingress -n my-vcluster
NAME               CLASS   HOSTS                    ADDRESS        PORTS     AGE
vcluster-ingress   nginx   my-vcluster.loft.local   192.168.86.9   80, 443   118m
```
The ingress resource has been configured using the same load balancer IP address that we used for the ingress controller, and it has the hostname my-vcluster.loft.local.
DNS
At this point, we need to configure DNS. In this example, we will update /etc/hosts to point the hostname we are using at our ingress controller. In our case, this behaves like an A record; for some cloud providers, it will end up being a CNAME. My record looks something like this:
```
# vcluster
192.168.86.9 my-vcluster.loft.local
```
Values File
There are a lot of available options that can be configured at the time of cluster creation. Two will be used for this deployment. If you have additional requirements, the options can be found here.
Our values file will look like this:
```
# values.yaml
syncer:
  extraArgs:
  - --tls-san=my-vcluster.loft.local
sync:
  ingresses:
    enabled: true
```
For ingress, we need the extraArgs option: the --tls-san flag adds our hostname to the virtual cluster's API server certificate so it is valid for my-vcluster.loft.local. The sync option syncs ingress resources from the virtual cluster to the base cluster so they are served by the base cluster's ingress controller.
Save the values file as values.yaml.
Vcluster Create
Now we can create the cluster.
vcluster create my-vcluster -n my-vcluster --connect=false -f values.yaml
The --upgrade flag is also available in case you want to modify values.yaml and apply the changes later. Output will look something like this:
```
$ vcluster create my-vcluster -n my-vcluster --connect=false --upgrade -f values.yaml
info   Upgrade vcluster my-vcluster...
info   execute command: helm upgrade my-vcluster /var/folders/4b/kybhplt13jv0rfg3ytj0x5080000gn/T/vcluster-0.14.0.tgz-1580888850 --kubeconfig /var/folders/4b/kybhplt13jv0rfg3ytj0x5080000gn/T/3790892606 --namespace my-vcluster --install --repository-config='' --values /var/folders/4b/kybhplt13jv0rfg3ytj0x5080000gn/T/2098591639 --values values.yaml
done √ Successfully created virtual cluster my-vcluster in namespace my-vcluster.
- Use 'vcluster connect my-vcluster --namespace my-vcluster' to access the virtual cluster
- Use `vcluster connect my-vcluster --namespace my-vcluster -- kubectl get ns` to run a command directly within the vcluster
```
Now that we have deployed the cluster, we can grab the kubeconfig so we can interact with it. In the example below, we are using the cluster hostname we created in our ingress resource.
vcluster connect my-vcluster -n my-vcluster --update-current=false --server=https://my-vcluster.loft.local
A working configuration will have output that looks something like this:
```
$ vcluster connect my-vcluster -n my-vcluster --update-current=false --server=https://my-vcluster.loft.local
done √ Virtual cluster kube config written to: ./kubeconfig.yaml
- Use `kubectl --kubeconfig ./kubeconfig.yaml get namespaces` to access the vcluster
```
We can see the separation of our clusters by using the different configuration files:
```
# vcluster
$ kubectl --kubeconfig ./kubeconfig.yaml get namespaces
NAME              STATUS   AGE
default           Active   134m
kube-system       Active   134m
kube-public       Active   134m
kube-node-lease   Active   134m
```

```
# K3s
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   3h27m
kube-system       Active   3h27m
kube-public       Active   3h27m
kube-node-lease   Active   3h27m
ingress-nginx     Active   3h22m
cert-manager      Active   3h16m
my-vcluster       Active   134m
```
At this point, we can interact with our virtual cluster using kubectl with --kubeconfig, or we can update our kubeconfig to point to this file. Your users will more than likely stop interacting with the base cluster at this point and export their KUBECONFIG to point to this configuration.
export KUBECONFIG=./kubeconfig.yaml
Application Deployment
Now that we have a working vcluster, it is time to deploy an application into it and see how everything works. Our application will include a deployment, service and ingress resource.
Since we are using /etc/hosts for DNS, we will need to create another record for this application. Here are the updated records for my /etc/hosts file:
```
# vcluster
192.168.86.9 hello.my-vcluster.loft.local
192.168.86.9 my-vcluster.loft.local
```

```
# application.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations: {}
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - hello.my-vcluster.loft.local
    secretName: hello-kubernetes-tls
  rules:
  - host: hello.my-vcluster.loft.local
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-world
            port:
              number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-world
```
Save this file as application.yaml
and deploy with:
kubectl --kubeconfig ./kubeconfig.yaml apply -f application.yaml
Let’s see if it works:
```
$ curl -k https://hello.my-vcluster.loft.local/
Hello Kubernetes!
```
Success! We were able to get the application running; ingress is working, and our developer is able to view and test their application.
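If you would rather not add a hosts entry for every test hostname, curl's --resolve flag can map the name to the ingress IP for a one-off request; using our example values:

```
# Resolve the hostname to the ingress IP for this request only
curl -k --resolve hello.my-vcluster.loft.local:443:192.168.86.9 https://hello.my-vcluster.loft.local/
```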
Separation Between Virtual Cluster and K3s Cluster
We can differentiate between the K3s cluster and the vcluster by specifying the kubeconfig in the examples so we can see where everything lives. Here is what we see when we run `get all`. This doesn't actually show all resources, such as ingress, but it will show us pods, services and deployments.
```
# vcluster
$ kubectl --kubeconfig ./kubeconfig.yaml get all
NAME                               READY   STATUS    RESTARTS   AGE
pod/hello-world-5f66f68869-m8ztg   1/1     Running   0          119m

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes    ClusterIP   10.43.98.146   <none>        443/TCP   144m
service/hello-world   ClusterIP   10.43.164.9    <none>        80/TCP    119m

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-world   1/1     1            1           119m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-world-5f66f68869   1         1         1       119m
```

```
# K3s
$ kubectl get all -n my-vcluster
NAME                                                        READY   STATUS    RESTARTS   AGE
pod/coredns-56d44fc4b4-v5b5b-x-kube-system-x-my-vcluster    1/1     Running   0          144m
pod/my-vcluster-0                                           2/2     Running   0          137m
pod/hello-world-5f66f68869-m8ztg-x-default-x-my-vcluster    1/1     Running   0          120m

NAME                                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
service/my-vcluster                            ClusterIP   10.43.98.146   <none>        443/TCP,10250/TCP        145m
service/my-vcluster-headless                   ClusterIP   None           <none>        443/TCP                  145m
service/kube-dns-x-kube-system-x-my-vcluster   ClusterIP   10.43.6.75     <none>        53/UDP,53/TCP,9153/TCP   144m
service/hello-world-x-default-x-my-vcluster    ClusterIP   10.43.164.9    <none>        80/TCP                   120m

NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     145m
```
Now, let's take a look at the ingress resources. Remember that we created the ingress resource and other resources while scoped to the vcluster kubeconfig. The information will still sync to the main cluster, so your developers can use the same ingress controller for their applications. In the example below, we see the ingress resource for hello-world within vcluster, but on the K3s cluster we see both the ingress resource for the vcluster API as well as a synced version of the hello-world ingress resource.
```
# vcluster
$ kubectl --kubeconfig ./kubeconfig.yaml get ingress
NAME          CLASS   HOSTS                          ADDRESS        PORTS     AGE
hello-world   nginx   hello.my-vcluster.loft.local   192.168.86.9   80, 443   122m
```

```
# K3s
$ kubectl get ingress -n my-vcluster
NAME                                  CLASS   HOSTS                          ADDRESS        PORTS     AGE
vcluster-ingress                      nginx   my-vcluster.loft.local         192.168.86.9   80, 443   144m
hello-world-x-default-x-my-vcluster   nginx   hello.my-vcluster.loft.local   192.168.86.9   80, 443   122m
```
Use Cases
Secure multitenancy, cluster scaling and cluster simulations are a few of the use cases for virtual clusters. There are many others, some we may not have even considered. Reach out to us on Slack if you have an interesting use case that you would like to share.
Conclusion
This is a fun project to try out if you want to see how vcluster works, or to turn an unused server into a quick development cluster. Most of this configuration becomes trivial, though also more opinionated, when using a cloud provider.
If you want to take the next steps and get this running somewhere else, check out our documentation or YouTube channel. If you have questions, join us on Slack.