Virtual Kubernetes Clusters with Spectro Cloud Palette
What application developers need from Kubernetes resources is pretty simple:
- Access to an unrestricted K8s sandbox with the same heavy-duty compute power, container network interface (CNI), container storage interface (CSI) driver, and cloud controller manager (CCM) as their production K8s environments
- The freedom to deploy what they want, when they want, without waiting for approvals and fulfillment of internal provisioning tickets.
Yet many developers today don’t have this. What’s more, they feel constrained by the security processes their organizations impose, especially around Kubernetes provisioning and role-based access control (RBAC), and they run into contention over API versions when doing custom resource-heavy development on shared clusters.
As a platform engineer, it’s your challenge to solve this conundrum. Dedicated clusters, kind clusters and namespaces are each imperfect answers (as we’ll see in a moment), but there is now a fourth way!
Keep reading to find out how Spectro Cloud Palette’s new Nested Cluster feature solves these problems, without compromising on security or visibility.
Why Is It So Difficult to Access Kubernetes Clusters?
Today, most developers get access to Kubernetes clusters in one of three ways — and none of them is ideal.
A local kind cluster is easy to deploy, but achieving consistent configuration between your local setup and a production environment is not always possible. Considerations such as secret management, ingress controllers, load balancers, network security policies and resource limitations will all come into play. Additionally, local Kubernetes clusters cannot be shared with multiple team members for collaboration.
Access to a namespace from a cluster maintained by a platform engineering team brings the cluster under enterprise control, but runs into limitations around tenancy and logistics. The soft multitenancy model inherent to the namespace approach can’t handle multiple versions of the same custom resource definition (CRD) and doesn’t provide hard isolation when it comes to certain operators and other cluster-scoped resources. Lastly, managing RBAC in this scenario can become onerous, causing procedural inefficiencies and friction.
One dedicated cluster per developer is another alternative. But this approach quickly becomes too expensive, as most developers will leave their cluster running 24×7. Many developer clusters will also require considerable multicluster management overhead to ensure consistency and to keep everything up-to-date and secure.
What Are Nested Clusters?
This is where Palette Nested Clusters come in. We’ve just introduced this feature as part of our Palette 3.0 announcement, which is all about the developer experience.
Nested Clusters are built on top of Loft Labs’ open source vcluster and vcluster CAPI Provider projects. We could go on and on about the virtues of vcluster, but here is a picture:
Essentially, vcluster relies on two core components: a syncer and a K8s control plane (typically a single-binary distribution such as K3s, although full CNCF K8s control planes are supported) to create a “virtual” Kubernetes cluster within a pre-existing host cluster.
The syncer does the heavy lifting of synchronizing K8s resources between the API servers of the two Kubernetes control planes (host and virtual). Typically, certain fundamental K8s primitives such as pods and services are always synchronized from the virtual Kubernetes cluster to the host. By default, the virtual cluster leverages the same container runtime interface (CRI) as the host.
Advanced scenarios are possible, however, where the virtual cluster uses its own CRI for maximum isolation. Exactly which K8s resources are synchronized — and in which direction(s), etc. — is highly configurable, thus enabling a wide array of use cases.
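To make the syncer’s behavior concrete, here is roughly what a running vcluster looks like from the host’s perspective. This is a sketch: the namespace, vcluster name and pod names are illustrative, and the exact naming varies by vcluster version.
# On the HOST cluster, the entire virtual cluster lives inside one namespace:
# a control-plane pod (K3s plus the syncer) and host-side copies of any pods
# the syncer has synchronized down from the virtual cluster.
kubectl get pods -n vcluster-team-a
# NAME                              READY   STATUS    <- illustrative output
# my-vcluster-0                     2/2     Running   <- K3s control plane + syncer
# nginx-x-default-x-my-vcluster     1/1     Running   <- pod synced from the vcluster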
For example, imagine a scenario in which multiple teams are developing microservices that each rely on a set of shared services. One might deploy the foundational services only once on the Kubernetes host cluster and map them into each Kubernetes virtual cluster.
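In vcluster terms, both behaviors are driven by the chart values you deploy with. The sketch below uses the pre-1.x vcluster values format, and the service names are hypothetical; Palette generates its own configuration, so treat this purely as an illustration of the mechanism.
# values.yaml for the vcluster Helm chart (0.x format), written inline for brevity.
cat > values.yaml <<'EOF'
sync:
  ingresses:
    enabled: true            # additionally sync Ingress objects to the host
mapServices:
  fromHost:
    - from: shared/postgres  # <namespace>/<service> on the host cluster...
      to: default/postgres   # ...appears under this name inside the vcluster
EOF
# Create a vcluster with those values using Loft Labs' CLI.
vcluster create my-vcluster -n vcluster-team-a -f values.yaml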
We’ll end our description of vcluster here, but we highly recommend you spend some time with Loft Labs’ docs.
Why Use Palette Nested Clusters?
With Palette Nested Clusters, we’ve integrated vcluster with the Palette platform. Now your cluster administrator can deploy and manage vclusters with the same enterprise-grade orchestration, cluster-wide visibility, Day 2 operations and fine-grained RBAC that Palette provides for conventional K8s clusters. The outcomes you’ll see from using Nested Clusters are wide-ranging:
- Improved utilization: Nested Clusters enable your platform team to pack numerous virtual Kubernetes clusters onto a single host cluster. You can therefore maximize the utilization of your cloud resources without making compromises around K8s versions, CRD versioning, RBAC, conflicting software stacks or application configuration.
- Faster “time to cluster” and superior developer experience: You can now offer your developer teams an experience very much like having their own dedicated clusters, with better collaboration and a much faster “time to cluster.” And you can afford to do so far more liberally, because the cost, effort and risk of firing up a Nested Cluster are so much lower. Overall, provisioning and onboarding developers to your K8s environments becomes much less frustrating.
- Choice and flexibility: Palette Nested Clusters work with host clusters running distributions such as our own Palette eXtended Kubernetes (PXK), AKS, EKS, GKE, VMware Tanzu, Rancher RKE1/RKE2 and Google Anthos, with OpenShift support in the works. They should also work out of the box on any CNCF-conformant K8s distribution, and we provide technical support for Nested Clusters built through Palette.
The best way to understand the value of Nested Clusters is to try them out, so keep reading for a guided demo.
Tutorial: Get Started with Nested Clusters
If you’d like to follow along, there are a few prerequisites:
- If you haven’t already, activate your free Palette account.
- Now, take a look at the Nested Cluster Overview.
- You’ll need to provision a host cluster and enable Nested Clusters on it. An EKS or AKS cluster is a quick and easy option.
  - Important note: You must configure Ingress for accessibility if you want to follow along with the rest of the demo.
- Terraform 0.13+ installed locally.
- Cosign installed locally. (A quick way to check both tools follows this list.)
- Fork the demo Git repo and clone it locally.
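Before diving in, a quick sanity check of the local tooling (the version numbers in your output will differ):
terraform version   # should report v0.13 or newer
cosign version
git --version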
Once your host cluster is up and running, feel free to follow the Nested Cluster docs to provision your first Nested Cluster via the UI; it only takes a couple of clicks.
But here we’ll use Spectro Cloud’s Terraform provider to provision a Nested Cluster declaratively from the command line, because we know that’s how the real world works ;). Just to spice things up, once everything is online we’ll automatically deploy and configure Tekton and Tekton Chains inside our Nested Cluster and do some GitOps with a side of supply chain security.
Here’s a picture of what we’re going to do:
And here’s a more detailed explanation:
- Deploy a Nested Cluster using the Spectro Cloud Terraform provider.
- Simultaneously deploy Tekton Operator, Tekton Chains, and a handful of Tekton CRs.
- Push a commit to a particular git repository.
- Observe the cascade of events initiated by our commit:
  - A webhook on the git repo (autogenerated by Tekton) will trigger a Tekton PipelineRun.
  - The PipelineRun will execute a number of tasks that will clone the Git repo, build a Docker image from it and, lastly, deploy a pod using the newly built image.
  - Tekton Chains will sign each TaskRun using the X.509 key that we provide.
- Use cosign to verify the signature on one of the Tekton TaskRuns.
- Tear everything down, including the Nested Cluster.
Step by Step
1. Clone Spectro Cloud’s Terraform provider repo and navigate to the end-to-end example directory:
git clone https://github.com/spectrocloud/terraform-provider-spectrocloud
cd terraform-provider-spectrocloud/examples/e2e/nested
The examples/e2e/nested directory contains all of the Terraform configuration files you’ll need to deploy a Nested Cluster, including all of the Tekton components mentioned above.
2. Fill in terraform.tfvars.template and rename it to terraform.tfvars.
a. Note: for external_domain, use the Host DNS Pattern you selected when configuring Ingress for accessibility on your host cluster.
Next, uncomment everything in resource_clusterprofile.tf, as well as the three lines defining your Cluster Profile resource in resource_cluster.tf. Your resource_cluster.tf file should look like this:
Now we’re almost ready to run terraform apply -auto-approve, sit back, and enjoy the show… but first we need to generate a Kubernetes secret containing an X.509 key pair for Tekton Chains.
The Tekton Chains operator will use this key pair to sign each TaskRun that Tekton Pipelines executes (it adds the signatures as annotations on the TaskRun resources). Later on, we’ll be able to verify the signature of each TaskRun in our cluster using cosign.
3. Run generate_cosign_secret.sh from within the examples/e2e/nested directory.
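We won’t walk through the script line by line, but Tekton Chains’ standard setup relies on cosign’s built-in Kubernetes secret support, so at its core the script likely does something equivalent to the sketch below. The namespace and secret name follow the Tekton Chains convention; treat them as assumptions rather than the script’s exact behavior.
# Generate a key pair and store it directly as a Kubernetes secret in the
# namespace where Tekton Chains conventionally looks for signing keys.
cosign generate-key-pair k8s://tekton-chains/signing-secrets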
4. OK, we made it! Go ahead and run
terraform apply -auto-approve
Within 10 seconds, you’ll see a new Cluster Profile appear in Palette. Let’s check it out:
Next, the Nested Cluster and all of the Tekton components will begin provisioning. This part takes around five minutes.
Wait for it… and voila! You’ve got a virtual Kubernetes cluster running inside of your host cluster. It contains a sizable collection of CRDs that the host cluster knows nothing about. You have cluster-admin privileges to do with it what you please, and the provisioning process took less than five minutes!
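It’s worth poking around at this point. Assuming you’ve downloaded the Nested Cluster’s kubeconfig from Palette (the file name below is hypothetical):
export KUBECONFIG=./nested-cluster.kubeconfig
kubectl get nodes                      # the virtual control plane's view of the world
kubectl api-resources | grep tekton    # Tekton CRDs exist here, not on the host
kubectl auth can-i '*' '*'             # cluster-admin inside the vcluster: prints "yes"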
During the Nested Cluster deployment process, a number of Tekton resources are created. Although the inner workings of Tekton are not the focus of this post, here’s what gets created, broken out by Tekton component:
Tekton Pipelines:
- Create Webhook: adds a webhook to the GitHub repo you configured in the Terraform setup (using your GitHub access token to do so).
- Create Ingress: creates an Ingress resource to route the callback URI defined in the GitHub webhook to the EventListener used by Tekton Triggers.
- Build/Deploy: a two-phase pipeline that first builds a Docker image from your Git repo using Kaniko, then creates a pod that uses the newly built image.
PipelineRuns are created for the first two setup Pipelines automatically as part of the Cluster Profile configuration. The remaining resources stay idle, waiting for a git commit before springing into action :).
Tekton Triggers:
- An EventListener, consisting of a TriggerTemplate and a TriggerBinding. The EventListener parses the GitHub webhook callback and creates a PipelineRun for the Build/Deploy pipeline.
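You can inspect all of these from inside the Nested Cluster. A quick look, assuming your kubeconfig still points at the Nested Cluster and the resources live in the default namespace:
kubectl get pipelines,pipelineruns
kubectl get eventlisteners,triggertemplates,triggerbindings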
5. Next, let’s initiate some GitOps magic, courtesy of Tekton:
git commit -a -m "trigger tekton pipeline" --allow-empty && git push origin master
The Tekton CRs, adapted from the Tekton Triggers and Tekton Chains tutorials, will now do their thing. Watch for the Build/Deploy PipelineRun to create TaskRuns, which in turn run the Tasks responsible for building and deploying an image from the demo ulmaceae repo.
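One convenient way to watch the cascade, assuming kubectl is still pointed at the Nested Cluster (if you have the tkn CLI installed, tkn pipelinerun logs --last -f works nicely too):
kubectl get pipelineruns,taskruns -w   # watch runs get created and complete
kubectl get pods -w                    # each TaskRun step executes in a pod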
OK, now that we’ve validated that our GitOps flow is working as expected, let’s ensure that Tekton Chains is also pulling its weight.
6. Run validate_taskrun.sh to validate the latest TaskRun using cosign.
This helper script just automates the process of extracting the signature generated by Tekton Chains and validating it with cosign. If everything checks out, you should see cosign report that the signature was verified (Verified OK).
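If you’d like to perform the verification by hand instead, the flow below follows the Tekton Chains documentation. It’s a sketch: it assumes Chains’ default behavior of storing payloads and signatures as annotations on the TaskRun, and the conventional tekton-chains/signing-secrets secret location.
# Find the most recent TaskRun and its UID (Chains keys annotations by UID).
TR_NAME=$(kubectl get taskruns --sort-by=.metadata.creationTimestamp -o name | tail -1)
TR_UID=$(kubectl get "$TR_NAME" -o jsonpath='{.metadata.uid}')
# Extract the payload and signature annotations that Chains added.
kubectl get "$TR_NAME" -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/payload-taskrun-$TR_UID}" | base64 -d > payload
kubectl get "$TR_NAME" -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/signature-taskrun-$TR_UID}" | base64 -d > signature
# Verify the signature against the public key stored in the cluster.
cosign verify-blob --key k8s://tekton-chains/signing-secrets --signature ./signature ./payload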
7. Our whirlwind tour of Nested Clusters and Tekton is now complete. Go ahead and clean up with terraform destroy -auto-approve, which tears everything down, including the Nested Cluster.
Ready to Explore Nested Clusters Further?
We have only scratched the surface of what’s possible and the value that Nested Clusters can unlock in your Kubernetes environment.
At Spectro Cloud, we’re passionate about contributing to the open source community. Of course, we integrate core open source technologies like Cluster API and vcluster into our platform to turn them into enterprise-grade turnkey solutions. But we also contribute enhancements and fixes back to the upstream communities for the issues and features we encounter in real-world enterprise scenarios. And with projects like Kairos, a powerful tamperproof edge K8s engine, we’re contributing entire new projects to the community, too.
We truly hope that you found this tutorial helpful and that you learned something new. If you’d like to dive deeper into Nested Clusters and see some of the benefits in action, check out our webinar on Nov. 30. And in the meantime, if you have any questions, don’t hesitate to reach out via email or LinkedIn.