
Vcluster to the Rescue

A guide to standing up a quick development environment on K3s
Apr 4th, 2023 11:00am

The hype has kicked in and you have finally created a Kubernetes cluster on your favorite cloud provider. A couple of developers have started using it, but everyone is deploying to default and it is starting to get cluttered. The first request comes in from a developer who wants their own cluster. That is usually how the sprawl begins.

Before you know it, you are spending large amounts of money on clusters, storage and other services offered by the provider. When you get large enough, you may start hitting the soft limits used by cloud providers to keep accounts from growing too large. There has to be a better way.

Is There a Solution that Works for Everyone?

A few years ago, I saw this first-hand. The cost was becoming enormous and the ability to manage the cluster life cycle across hundreds of clusters was basically impossible. Everyone was working on a different version of Kubernetes, and there were at least five clusters named “my-cluster” created every week. We had production-level spend on development clusters that were primarily used to support open source projects.

Our team had discussions about what we could do. Should we create a large cluster and place everyone in their own namespace? Should we have a cluster per team and let them manage it? Should we recommend using Kind and have everyone run their testing locally? We ended up doing a mix, but it didn’t really fix the problem. This was 2020, and we didn’t have some of the tools available now.

Vcluster to the Rescue

Enter vcluster, which makes multitenancy a lot easier. Our developers want to feel like they have their own cluster, while our platform engineers want to manage as few clusters as possible. There will be a namespace per developer or team, and each will have the ability to deploy virtual clusters, which appear to them as their own cluster. Now, even when everyone deploys to the default namespace inside their virtual cluster, there won't be any overlap.

Getting Started and Requirements

In this article, we are going to turn a Linux server into a K3s-based Kubernetes cluster, install vcluster and have a working development environment that can be used for testing. Most of the concepts we discuss will translate to cloud providers. In fact, most cloud providers make many of the steps easier by providing you with an ingress controller and load balancer service by default.

Our use case will be standing up K3s on a single node with an NGINX ingress controller and cert-manager. We might not have a public IP address to associate with our cluster, but we want to start testing internally and using ingress and certificates so we can move to the cloud. We will end up using self-signed certificates for testing and will use a hosts file for DNS. This could easily be expanded to internal DNS and a certificate authority.

There are a few basic requirements: a Linux server or VM where you can install K3s, kubectl on your local machine, and the ability to edit your local hosts file for DNS.

Virtual Cluster Architecture

NOTE: If you already have a working Kubernetes cluster with ingress and certificate management, then skip ahead to the next vcluster section in this post.

Install

K3s

The K3s installation quick-start guide is a great place to start. For our installation, we will pass an extra flag to the install script so we can disable Traefik. We are going to install an NGINX ingress controller instead, which will make it easier to follow the examples in our documentation.

Start the installation with the command below on the server or VM where you want to run K3s:

curl -sfL https://get.k3s.io | sh -s - --disable traefik

The output should look something like this:


The kubeconfig information can be found on the server where K3s was installed:

/etc/rancher/k3s/k3s.yaml

Copy this information and set it as your current config for kubectl. If you aren’t using other clusters, you could just copy and paste this into the config file in your home directory on your local machine:

.kube/config

By default, the server may be listed as server: https://127.0.0.1:6443, which will need to be updated to the IP address or hostname of the node where it was installed. In our demo, it's going to look something like this: server: https://k3s.domain.com:6443, with k3s.domain.com (replace domain.com with your internal domain) pointing to 192.168.86.9. (Your IP will be different.)
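One minimal way to do that from your local machine, assuming SSH access to the K3s node and that k3s.domain.com resolves to it, is a sketch like this:

# Copy the kubeconfig from the K3s node to your local machine (assumes SSH access as root)
scp root@k3s.domain.com:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Point the server entry at the node instead of 127.0.0.1 (works with GNU and BSD sed)
sed -i.bak 's/127.0.0.1/k3s.domain.com/' ~/.kube/config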

The great thing about K3s is that it ships with a built-in service load balancer, so LoadBalancer services work out of the box. This will make configuring ingress a lot easier, and we can share the same IP address across multiple hostnames.

From this point forward we can start running commands on our local machine instead of working from the server.

NGINX

Now that we have a cluster running, we need to add an ingress controller. By default, K3s deploys with Traefik, but we disabled it during installation because most of our examples use NGINX.

To install the latest version of NGINX, run the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml

NOTE: It’s better to run something like this using Helm or other methods. For this guide, we are trying to get going as fast as possible to test out vcluster and see if it fits our use case.

For additional ways to deploy the NGINX Ingress Controller, check out: https://kubernetes.github.io/ingress-nginx/deploy/#quick-start.

The resources installed into our cluster will look something like this:
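If you want to list them yourself, one way (assuming the default ingress-nginx namespace from the manifest above) is:

kubectl get all -n ingress-nginx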


The most important part of this is the load balancer service. This is what we will use for our DNS records. Everything is going to point to 192.168.86.9 in our examples.

The single record can be pulled with the following command:
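Assuming the default service name from the manifest, something like this will show it, with the EXTERNAL-IP column being the address we will use for DNS:

kubectl get svc -n ingress-nginx ingress-nginx-controller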


There is one additional update that we need to make so NGINX will work correctly with vcluster. NGINX needs to start with the --enable-ssl-passthrough option enabled. To do this, we can edit the deployment:

kubectl edit deploy -n ingress-nginx ingress-nginx-controller

Add - --enable-ssl-passthrough to the - args: section in the container spec:
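After the edit, the args section of the controller container should look roughly like this; the snippet is illustrative, and the existing flags stay exactly as they were:

      containers:
      - args:
        - /nginx-ingress-controller
        # ...existing flags stay as they are...
        - --enable-ssl-passthrough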


Save and exit. If you are using vi, that would be :wq!.

Cert-Manager

We need a way to get certificates created so we can use TLS. We are going to install Cert-Manager via the manifest instead of using Helm for this demo. When you start doing this in production, I would recommend using Helm. Cloud providers may offer a way to get certificates outside of Cert-Manager, so this may not be required based on your provider.

To install the latest version run the following command:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml

Now that cert-manager and its CRDs have been installed, we need to add one more thing: a ClusterIssuer. A ClusterIssuer is easier to manage in a cluster that needs certificates across multiple namespaces. When you move to production, you may be required to use an Issuer instead; because Issuers are scoped to a single namespace, they let you control how certificates are created within each namespace.
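Since we are only testing with self-signed certificates, a minimal cluster-issuer.yaml can be as simple as this sketch (the name selfsigned-cluster-issuer is our own choice and is referenced again in the ingress resource later):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}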


kubectl create -f cluster-issuer.yaml

We can verify that the resource was created with:
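For example, listing the ClusterIssuer resources; the READY column should show True once cert-manager has processed it:

kubectl get clusterissuer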

vcluster

CLI

Let’s start out by installing the vcluster CLI:

https://www.vcluster.com/docs/getting-started/setup

In the case of my demo, I’m using Apple silicon so I would run this on my desktop:

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Test out the CLI to make sure it is working:

vcluster --version

Cluster Deployment

Ingress Resource

To start out we need to create a namespace for vcluster. In our example, we will use my-vcluster, but this is where you will start naming resources based on who is using it, a project name or other labels you may need so that you can better track who is using what.

kubectl create namespace my-vcluster

Now, we should have everything installed that is required for vcluster + ingress.

To start out, we need to configure an ingress resource on the base cluster that will provide a way to get traffic to our vcluster API.

Since we are running on K3s and have installed Cert-Manager, we need to update the ingress.yaml file shown in the guide above. In the example below, we are referencing the cluster-issuer so we know where to get our TLS certificate. This was added in the Cert-Manager section.

The domain we are using will need to be updated. In our example, we are using my-vcluster.loft.local. The ingress resource is being deployed to the namespace my-vcluster. If you are using a different namespace, then update this value in the YAML and save it. The secretName will store information for the certificate so it can be named whatever you would like.

Here is what we will use on our cluster:
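Below is a sketch of that ingress.yaml, following the vcluster NGINX ingress example with SSL passthrough and the cert-manager annotation added. The resource name vcluster-ingress, the secret name my-vcluster-tls and the issuer name are our own choices; the backend service my-vcluster on port 443 is the service vcluster creates for the virtual cluster API:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vcluster-ingress
  namespace: my-vcluster
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-cluster-issuer
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: my-vcluster.loft.local
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-vcluster
            port:
              number: 443
  tls:
  - hosts:
    - my-vcluster.loft.local
    secretName: my-vcluster-tls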


Create the resource with the following command:

kubectl apply -f ingress.yaml

We can verify the information with the following command and output:
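The command is along these lines, assuming the my-vcluster namespace:

kubectl get ingress -n my-vcluster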


The ingress resource has been configured using the same load balancer IP address that we used for the ingress controller and it has the hostname my-vcluster.loft.local.

DNS

At this point, we need to configure DNS. In this example, we will update /etc/hosts to point to our ingress controller for the hostname we are using. In our case, this will look like an A record. For some cloud providers, it will end up being a CNAME. My record looks something like this:
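Using the load balancer address from earlier:

192.168.86.9   my-vcluster.loft.local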

Values File

There are a lot of available options that can be configured at the time of cluster creation. Two will be used for this deployment. If you have additional requirements, the options can be found here.

Our values file will look like this:
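Here is a sketch of the values file. The keys below come from the vcluster Helm chart as it existed around this release and may be organized differently in newer versions; the --tls-san flag adds our hostname to the virtual cluster's API server certificate:

syncer:
  extraArgs:
  - --tls-san=my-vcluster.loft.local
sync:
  ingresses:
    enabled: true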


For ingress, we need to use the extraArgs option. We also use the sync option so that ingress resources created in the virtual cluster are synced to the base cluster's ingress controller.

Save the values file as values.yaml.

Vcluster Create

Now we can create the cluster.

vcluster create my-vcluster -n my-vcluster --connect=false -f values.yaml

The --upgrade flag is also available in case you want to modify the values.yaml and make changes. Output will look something like this:


Now that we have deployed the cluster, we can grab the kubeconfig so we can interact with it. In the example below, we are using the cluster hostname we created in our ingress resource.

vcluster connect my-vcluster -n my-vcluster --update-current=false --server=https://my-vcluster.loft.local

A working configuration will have output that looks something like this:


We can see the separation of our clusters by using the different configuration files:
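For example, comparing namespaces with and without the vcluster kubeconfig makes the separation obvious:

# Inside the virtual cluster
kubectl --kubeconfig ./kubeconfig.yaml get namespaces

# On the base K3s cluster (using your normal kubeconfig)
kubectl get namespaces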


At this point, we can interact with our virtual cluster using kubectl with --kubeconfig, or we can update our KUBECONFIG to point to this file. Your users will more than likely stop interacting with the base cluster at this point and export their KUBECONFIG to point to this configuration.

export KUBECONFIG=./kubeconfig.yaml

Application Deployment

Now that we have a working vcluster, it is time to deploy an application into it and see how everything works. Our application will include a deployment, service and ingress resource.

Since we are using /etc/hosts for DNS, we will need to create another record for this application.

Here are the updated records for my /etc/hosts file:
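Using the same load balancer address, with hello-world.loft.local as an assumed hostname for the demo application:

192.168.86.9   my-vcluster.loft.local
192.168.86.9   hello-world.loft.local

A minimal application.yaml matching that description (deployment, service and ingress) might look like the sketch below; the container image nginxdemos/hello and the hostname are placeholder choices:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: nginxdemos/hello   # placeholder image; any simple HTTP server works
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world.loft.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80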


Save this file as application.yaml and deploy with:

kubectl --kubeconfig ./kubeconfig.yaml apply -f application.yaml

Let’s see if it works:
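Assuming the placeholder hostname from above:

curl http://hello-world.loft.local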


Success! We were able to get the application running; ingress is working, and our developer is able to view and test their application.

Separation Between Virtual Cluster and K3s Cluster

We can differentiate between the K3s cluster and the vcluster by specifying the kubeconfig in each command, so we can see where everything lives. Here is what we see when we run get all. This doesn't actually show all resources, such as ingress, but it will show us pods, services and deployments.
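The commands look like this, one per kubeconfig:

# Resources as seen from inside the virtual cluster
kubectl --kubeconfig ./kubeconfig.yaml get all

# The same workloads as seen from the K3s cluster, inside the my-vcluster namespace
kubectl get all -n my-vcluster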


Now, let's take a look at the ingress resources. Remember that we created the ingress resource and other resources while scoped to the vcluster kubeconfig. The information will still sync to the main cluster so your developers can use the same ingress controller for their applications. In the example below, we see the ingress resource for hello-world within vcluster, but on the K3s cluster we see both the ingress resource for vcluster as well as a synced copy of the hello-world ingress resource.
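One way to see both sides, assuming the same kubeconfig files as before:

# Ingress resources inside the virtual cluster
kubectl --kubeconfig ./kubeconfig.yaml get ingress -A

# Ingress resources on the K3s cluster, including the synced copy
kubectl get ingress -A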

Use Cases

Secure multitenancy, cluster scaling and cluster simulations are a few of the use cases for virtual clusters. There are many others, some we may not have even considered. Reach out to us on Slack if you have an interesting use case that you would like to share.

Conclusion

This is a fun project to try out if you want to see how vcluster works, or to turn an unused server into a quick development cluster. Much of this configuration becomes simpler, though also more opinionated, when using a cloud provider.

If you want to take the next steps and get this running somewhere else, check out our documentation or YouTube channel. If you have questions, join us on Slack.
