Fairwinds sponsored this post.
Earlier this year, Fairwinds released the Kubernetes Maturity Model to help people identify the progress they’ve made toward Kubernetes maturity. This model helps organizations at every level of Kubernetes adoption understand where they are and what’s next, from considering whether to adopt Kubernetes all the way through continuous optimization of mature environments.
Phases of the Kubernetes Maturity Model
The first phase of the Kubernetes Maturity Model focuses on the technical transformation part of Kubernetes adoption. In this stage, you’re starting your initial implementation and shifting workloads into Kubernetes. You may also be containerizing your applications if they aren’t already running in containers.
To successfully navigate this technical transformation, you’ll need to undertake 10 main steps. Keep in mind that what follows is a high-level overview; each step takes a significant amount of time in practice.
The 10 Steps
1. Take a Deep Dive into Your Tech Stack
Whether you’re deploying on-prem, in a data center or in the cloud, be sure to research all aspects of the stack. For example, are there any dependencies related to your configurations, security tools or applications that you need to consider when you move to Kubernetes?
This step is essential for determining your technology requirements and ensuring you don’t miss anything. It also helps you put together a project plan that you can use as a roadmap for migration.
2. Containerize Applications
You may have already containerized your applications; if so, skip to the next step. If not, it’s time to break down your application based on the 12-factor app methodology. This matters because your application must be designed to survive destruction (your container may be killed at any time), and you need the ability to stand your application and containers back up cleanly. As part of this step, extract your secrets and configuration from your build artifact.
Kubernetes containers are ephemeral, so extracting secrets and configuration from the build artifact helps you maintain your standards and security; you can then inject them at container runtime.
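As a sketch of what that separation looks like, configuration and secrets can live in Kubernetes objects of their own and be injected as environment variables at runtime. The object names, image and values below are illustrative, not prescriptive:

```yaml
# Hypothetical ConfigMap and Secret, injected into a pod at runtime
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets           # hypothetical name
type: Opaque
stringData:
  DB_PASSWORD: "change-me"    # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0.0  # hypothetical image
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets
```

The application reads `LOG_LEVEL` and `DB_PASSWORD` from its environment, so the same image can be promoted unchanged between environments.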
3. Build Cloud Infrastructure
If you’re not already operating in the cloud, it’s time to pick your cloud provider. Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure are all well-known and reputable options. Additionally, you may choose a managed Kubernetes service on one of those providers, such as Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (GKE) or Azure Kubernetes Service (AKS).
Choosing a managed Kubernetes service reduces the amount of work you need to do when building your Kubernetes infrastructure. Even so, you’ll still need to set up the underlying cloud configuration: virtual private cloud (VPC), security groups, authentication, authorization and so on.
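For example, if you pick Amazon EKS, a tool such as eksctl lets you declare the cluster and its underlying VPC in a single config file. A minimal sketch, where the cluster name, region and node sizing are assumptions for illustration rather than recommendations:

```yaml
# Hypothetical eksctl cluster definition (create with `eksctl create cluster -f`)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster            # hypothetical cluster name
  region: us-east-1           # illustrative region
vpc:
  cidr: 10.0.0.0/16           # VPC created alongside the cluster
managedNodeGroups:
  - name: workers
    instanceType: m5.large    # illustrative sizing
    desiredCapacity: 3
```

Because the file is declarative, it doubles as documentation of the choices you made in this step.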
4. Build Kubernetes Infrastructure
To avoid making choices that result in time-consuming cluster rebuilds or significant network and cost implications, you need to consider:
- How many clusters do you need? What regions do you need them in? How many availability zones (AZs) do you need?
- How many separate environments, clusters and namespaces are necessary?
- How will services communicate with or discover one another?
- How will security be handled at the VPC, cluster or pod level?
Make these decisions with repeatability in mind. Use infrastructure as code (IaC) to build your clusters so you can do it over and over again. Also be careful about your configuration options, and use the project plan you built from your deep dive in Step 1 to make sure you don’t miss any application requirements.
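To make the pod-level security question concrete, one common pattern is to deny all ingress traffic inside a namespace by default and then allow it explicitly per service. A minimal sketch, with an illustrative namespace name:

```yaml
# Hypothetical namespace with a default-deny ingress policy
apiVersion: v1
kind: Namespace
metadata:
  name: staging               # illustrative environment namespace
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: staging
spec:
  podSelector: {}             # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                 # no ingress rules listed, so all ingress is denied
```

Individual services then get their own NetworkPolicy objects opening only the traffic they need.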
5. Write YAML or Helm Charts
Now is when you define your Kubernetes objects and deploy them into your cluster. You can write Kubernetes YAML files, but many people prefer to use Helm charts to deploy applications into Kubernetes. You need to write YAML or Helm charts for your deployments, configmaps, secrets and any special application requirements.
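As a minimal sketch of what one of these files contains, here is a Deployment for a hypothetical application; the name, image and port are illustrative:

```yaml
# Hypothetical Deployment manifest for a containerized application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 2                 # run two pods for availability
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080                   # illustrative port
```

A Helm chart templates the same structure, so values like the image tag and replica count can vary per environment.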
6. Plumb in External Cloud Dependencies
Your application will likely have external dependencies including databases, object stores and so on. It’s not a good idea for these dependencies to live in Kubernetes. Instead, manage your stateful dependencies outside Kubernetes.
For example, stand up a database in Amazon Relational Database Service (Amazon RDS), then plumb it into Kubernetes. Your application can then run in a pod in Kubernetes and talk to that stateful dependency.
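One way to do this plumbing is an ExternalName Service, which gives the database a stable in-cluster DNS name that resolves (via CNAME) to the endpoint outside the cluster. The service name and RDS hostname below are made up for illustration:

```yaml
# Hypothetical Service aliasing an external RDS endpoint
apiVersion: v1
kind: Service
metadata:
  name: orders-db             # in-cluster DNS name pods will use
spec:
  type: ExternalName
  externalName: orders.abc123.us-east-1.rds.amazonaws.com  # hypothetical RDS endpoint
```

Pods connect to `orders-db` like any other service, so if the database moves you only update this one object.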
7. Define Git Workflow
One of the major benefits of Kubernetes is that it allows you to deploy code in a repeatable way without human intervention. In a common workflow, committing source code to a Git repository kicks off events; merging to certain branches moves those changes to a non-production cluster. Next, you test and QA your code and merge it to the main branch, which deploys it to staging or production. In this phase, you’re defining what your Git workflow looks like, so you know what happens in Kubernetes when a developer pushes code.
8. Build Your CI/CD Pipeline
After defining your Git workflow, it’s time to set up your continuous integration and continuous delivery (CI/CD) platform using automation tools such as Jenkins or CircleCI. This transforms your defined workflow into an actual build pipeline.
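As a sketch of how a defined Git workflow becomes pipeline config, here is a minimal CircleCI-style `config.yml`; the job commands, scripts and the main-branch-only production deploy are assumptions for illustration:

```yaml
# Hypothetical CI/CD pipeline: test every branch, deploy only from main
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: make test          # hypothetical build/test command
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: ./deploy.sh        # hypothetical deploy script (e.g. helm upgrade)
workflows:
  build-deploy:
    jobs:
      - build-and-test
      - deploy:
          requires:
            - build-and-test    # deploy only after tests pass
          filters:
            branches:
              only: main        # production deploys come only from main
```

The branch filter is where the Git workflow from Step 7 becomes enforceable: merges to `main` are the only path to a production deploy.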
9. Deploy and Test in a Non-Production Environment
Once you complete steps 1-8, you’ll deploy to non-production. Here you’ll want to test that the application runs, that it has sufficient resources and limits, that secrets are correctly configured, that the application is accessible and that it restarts if you kill your pod. This is your chance to kick the tires before moving to production. If you’re running a monolithic application, you can move through this stage quickly. If you’re deploying a microservices application architecture, complete steps 1-8 for each service and deploy to non-production. Once all services are in that environment, you can see how they work together. That way you can make sure that your application as a whole will work when you deploy to production.
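Several of those checks map directly onto fields in the pod spec. Here is a container fragment showing resource requests and limits plus liveness and readiness probes; the endpoints, port and numbers are illustrative:

```yaml
# Hypothetical container spec fragment: resources, limits and health checks
containers:
  - name: my-app
    image: registry.example.com/my-app:1.0.0  # hypothetical image
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
    livenessProbe:            # kubelet restarts the container if this fails
      httpGet:
        path: /healthz        # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10
    readinessProbe:           # traffic is only routed once this passes
      httpGet:
        path: /ready          # hypothetical readiness endpoint
        port: 8080
```

Killing a pod in non-production and watching the liveness probe bring it back is exactly the "restarts if you kill your pod" test described above.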
10. Promote to Production
Once you’ve thoroughly tested your application in non-production, you can deploy your application to production. Make sure that your production environment is built the same way as your staging environment or you’ll run into issues down the road. To send traffic to your application, simply change your load balancer or DNS. With DNS, you can easily roll back if required.
During this technical transformation phase, part of the process includes cleaning up technical debt, making decisions around tooling, investigating productivity gains and losses, and beginning to look at flexibility and controls. Hopefully, these 10 steps will help you begin your technical transformation, a critical step for any organization adopting Kubernetes.
Featured image via Pixabay.