Learning Kubernetes: The Need for a Realistic Playground

Depending on a team’s experience, Kubernetes can either present a steep learning curve or feel refreshingly simple. Regardless of a team’s background, being able to experiment rapidly and safely within a Kubernetes playground is the key to becoming productive quickly.
From PaaS to K8s

If a development team is used to building and releasing applications via a platform-as-a-service (PaaS) such as Heroku or Cloud Foundry, the additional complexity that comes with Kubernetes can be troublesome. Gone are the simple abstractions, and deploying code is no longer an easy “git push heroku master.” I’ve heard some engineers compare moving from a PaaS to Kubernetes to trading train travel for driving a kit car that you have to assemble from parts yourself.
Teams with this type of experience need an application-ready Kubernetes cluster that they can quickly and repeatedly deploy services to, so that they can test and observe how user traffic is handled. A key early goal is to establish the build pipeline and deployment mechanisms, and to understand how the local developer experience maps to the remote deployment experience.
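To make the contrast with a PaaS concrete, the sketch below shows roughly what a first “deploy” looks like against a playground cluster using only kubectl. It is a minimal, illustrative example: the deployment name and container image are placeholders, and a real build pipeline would template and apply manifests rather than run imperative commands.

# Deploy a placeholder container and expose it inside the cluster
kubectl create deployment hello-web --image=nginx
kubectl expose deployment hello-web --port=80

# Inspect what was created, then forward local traffic to the Service
kubectl get deployments,pods,services
kubectl port-forward service/hello-web 8080:80

Even this small loop of deploying, inspecting, and routing traffic highlights how much of the PaaS “magic” becomes explicit in Kubernetes.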
From VMs (and Duct Tape) to K8s
If an organization’s developers are used to building and deploying applications to infrastructure via a series of scripts (often with manual intervention) that configure VMs, networking, and other hardware, then Kubernetes can be a big win. Kubernetes has clear abstractions, such as Ingress, Pods, and Services, and all configuration is driven from declarative configuration files. The integral control loop within Kubernetes (which can be extended via custom controllers that implement the “operator” pattern) also provides the continual “check and set” mechanism that prevents configuration drift.
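As a brief illustration of the declarative model, the sketch below applies a Deployment manifest and then watches the control loop reconcile the cluster back to the desired state. The resource names and container image are illustrative only.

# Apply a declarative manifest describing the desired state (3 replicas)
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
      - name: example-api
        image: nginx:1.25
EOF

# Delete the running Pods and watch the control loop recreate them
kubectl delete pod -l app=example-api --wait=false
kubectl get pods -l app=example-api --watch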
Teams with this background often need to be able to deploy an experimental, application-ready Kubernetes cluster that can be configured and customized to run alongside their existing infrastructure. A core goal here is being able to create clusters with varying configurations and to rapidly deploy and test services, in order to see whether existing applications (and the related developer workflows) can easily be shifted across.
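As one example of this rapid create-and-destroy workflow, the sketch below uses kind (Kubernetes in Docker) to build a multi-node cluster from a small declarative config file; the same “config in, cluster out” pattern applies to cloud tooling such as eksctl or Terraform. The cluster name and node layout are arbitrary choices for illustration.

# Declarative config for a local multi-node playground cluster
cat <<'EOF' > playground-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# Create the cluster, experiment freely, then throw it away
kind create cluster --name playground --config playground-cluster.yaml
kind delete cluster --name playground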
A K8s Playground for Everyone
Regardless of the team’s experience, one thing that is clearly beneficial during the learning process is a playground. Websites such as Katacoda, the Go playground, and the Open Policy Agent Rego playground have successfully pioneered this model of interactive learning in the cloud age.
However, a Kubernetes playground needs to fit with how an organization will actually deploy and operate Kubernetes. Running Kubernetes on AWS is a different experience from running it on GCP, particularly in relation to getting user traffic into the cluster and configuring security. A playground must also be easy for engineers to create and destroy via self-service mechanisms.
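For example, the self-service “create and destroy” step itself already looks different per cloud. The commands below are a rough sketch, with placeholder cluster names and regions, and default settings assumed.

# AWS: create and later tear down a throwaway EKS playground cluster
eksctl create cluster --name k8s-playground --region eu-west-1
eksctl delete cluster --name k8s-playground --region eu-west-1

# GCP: the equivalent operations on GKE
gcloud container clusters create k8s-playground --zone europe-west1-b
gcloud container clusters delete k8s-playground --zone europe-west1-b

The differences continue once the cluster is running: a Service of type LoadBalancer provisions an AWS load balancer in one case and a Google Cloud load balancer in the other, which is exactly the kind of environment-specific behavior a realistic playground should expose.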
Building and Maintaining a K8s Platform
Creating a Kubernetes playground is not as simple as pointing developers to the Minikube installation page, handing them a walkthrough script, and saying “have at it.” This unstructured approach can quickly lead to anarchy, with highly motivated engineers integrating all kinds of tooling into the (now snowflake) cluster, and engineers new to the space wondering where to begin.
The three most common approaches to building and maintaining a Kubernetes playground are: leveraging existing K8s playground products, creating a bespoke playground using Helm Charts, and using Kubernetes initializers or environment quickstart services.
Using an Existing K8s Playground
There are several popular Kubernetes playground products, such as Katacoda and Play with Kubernetes. These typically offer the least amount of friction to get started and provide the most structured approach to training. This is often a great starting point for large-scale or enterprise use cases, where the goal is to quickly build the development and delivery “mental model” of the Kubernetes ecosystem.
The drawbacks of this approach are that these playgrounds are often the most restrictive in terms of ad hoc experimentation (engineers don’t get to learn by performing “off script” actions), and that the underlying environments are often not very configurable or production-like. Typically this type of playground is seen as a good first step that is later augmented with a more flexible and comprehensive platform.
Building a Bespoke Playground using Helm Charts
Defining and building a bespoke playground using Helm Charts is often combined with bootstrapping a cluster and integrating with cloud services via tooling such as kubeadm and Terraform. This gives engineers much more scope for the range of experimentation that can be conducted. It also provides flexible cluster configuration and repeatable builds, which is perfect for those moments when you accidentally destroy a cluster!
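The “repeatable build” part of such a playground is often just a short, version-controlled script of Helm commands. The sketch below assumes a cluster has already been bootstrapped (e.g. via Terraform or kubeadm), and the chart choices and values file are examples rather than a prescribed stack.

# Install a baseline set of platform components onto a freshly built cluster
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install ingress ingress-nginx/ingress-nginx \
  --namespace ingress --create-namespace
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  --values playground-values.yaml   # example values file for playground defaults

Re-running the same script against a fresh cluster is what makes the rebuild repeatable when a playground is accidentally destroyed.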
The challenge with this type of approach is the learning curve associated with creating the playground. Often this requires a team of platform “pioneers” to begin the Kubernetes learning journey months before the rest of the organization. These platform engineers also typically build some kind of UI- or CLI-driven platform configuration facade to enable self-service by engineers, otherwise everyone has to learn Helm at the same time as Kubernetes.
Using Kubernetes Initializers or Environment Quickstarts
Using Kubernetes initializers or environment quickstart services, such as the K8s Initializer, OpenShift Quickstart Templates, or Rancher Labs’ Rio, can provide engineers with a production-like cluster quickly. Typically you answer a few questions via a UI or specify configuration properties in a simple file, crank the handle, and out pops a fully formed K8s cluster ready for experimentation. This approach is popular as a “bridging” playground for engineers who have acquired the basic mental model of Kubernetes and want to experiment with a more application-ready or production-like experience, without having to go all-in on something like Helm.
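The exact schema varies per tool, so the snippet below is a purely hypothetical example of the kind of “simple file” such services accept; it is not the actual format used by any of the products named above.

# Hypothetical quickstart configuration; each real tool defines its own schema
cat <<'EOF' > playground-quickstart.yaml
clusterName: team-playground
cloudProvider: aws
ingressController: nginx
certificates: letsencrypt-staging
observability:
  metrics: true
  tracing: false
EOF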
The limitations of this approach include a potentially bounded choice of tooling that can be installed as part of the initialization process, and the “day 2” support options or platform component upgrade paths can also be somewhat limited.
Where to Begin?
Drawing on the experience of working with thousands of engineers learning Kubernetes and related technologies, as well as on many super useful community discussions, I generally recommend that folks create a playground using a Kubernetes initializer or environment quickstart.
If you are using a “vanilla” upstream Kubernetes distribution, or a cloud-hosted offering, the initializer approach strikes a nice balance: you get the complete experience of learning to work with actual Kubernetes primitives (as opposed to virtualized in-browser playgrounds) without having to learn additional tooling, such as Helm, to bootstrap the platform.
If you’ve already selected a platform that adds additional abstractions on top of the Kubernetes API (e.g. OpenShift), then this approach is also a no-brainer, as you get access to a production-like experience from day 0.
Wrapping Up
Providing a Kubernetes playground is essential for the learning journey associated with this framework. Kubernetes is a fantastic foundation for modern platforms, but as is the case when learning anything complicated, you need a safe playground that rewards experimentation and structured learning while minimizing the potential for any negative consequences.