In addition to improving traditional DevOps processes — along with the speed, efficiency and resiliency commonly recognized as benefits of DevOps — Kubernetes solves new problems that arise with container and microservices-based application architectures. Said another way, Kubernetes reinforces DevOps goals while also enabling new workflows that arise with microservices architectures.
Kubernetes is a powerful, next-generation, open-source platform for automating the deployment, scaling and management of application containers across clusters of hosts. It can run nearly any workload. Kubernetes provides an exceptional developer user experience (UX), and its rate of innovation is phenomenal. From the start, Kubernetes’ infrastructure promised to enable organizations to deploy applications rapidly at scale and roll out new features easily, while using only the resources needed. With Kubernetes, organizations can have their own Heroku running in their own Google Cloud, AWS or on-premises environment.
Think back to how little visibility development teams used to have into operations deployments. Developers and operations teams alike have always been nervous about deployments, because maintenance windows had a tendency to expand, causing downtime. Operations teams, in turn, have traditionally guarded their territory so no one would interfere with their ability to get the job done.
Then containerization and Kubernetes came along, and software engineers wanted to learn about it and use it. It’s revolutionary. It’s not a traditional operational paradigm. It’s software driven, and it lends itself well to tooling and automation. Kubernetes enables engineers to focus on mission-driven coding, not on providing desktop support. At the same time, it takes engineers into the world of operations, giving development and operations teams a clear window into each other’s worlds.
Here are seven features that make Kubernetes an ideal platform for DevOps engineers to set up and manage their containerized applications through Continuous Integration and Continuous Delivery (CI/CD) pipelines.
1. Powerful Building Blocks
Kubernetes uses pods as its fundamental unit of deployment. A pod represents a group of one or more containers that share storage and network resources. Although pods are often used to run only a single container, they have been used in some creative ways, including as a means to build a service mesh.
A common use of multiple containers in a single pod follows a sidecar pattern. With this pattern, a container would run beside your core application to provide some additional value. This is commonly used for proxying requests, or even handling authentication.
With these powerful building blocks, it becomes quite straightforward to map services that may have been running in a virtual machine before containerization, into multiple containers running in the same pod.
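As a minimal sketch, a two-container pod following the sidecar pattern described above might look like this (the image names, container names and ports are illustrative, not from any real project):

```yaml
# A hypothetical pod running an application container alongside an
# authentication-proxy sidecar. Both containers share the pod's network
# namespace, so the sidecar can reach the app on localhost.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: example/app:1.0          # illustrative image name
      ports:
        - containerPort: 8080
    - name: auth-proxy                # sidecar handling authentication
      image: example/auth-proxy:1.0   # illustrative image name
      ports:
        - containerPort: 9090
```

Because the two containers share the pod’s network and can share volumes, the proxy can intercept traffic for the application without either container knowing it is running next to the other.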
2. Simplified Service Discovery
In a monolithic application, different components each have their own purpose, but because everything runs in a single, self-contained process, communication between them is trivial. In a microservices architecture, microservices need to talk to each other over the network — your user service needs to talk to your post service and address service and so on. Figuring out how these services can communicate simply and consistently is no easy feat.
With Kubernetes, a DevOps engineer defines a service — for example, a user service. Anything running in that same Kubernetes namespace can send a request to that service, and Kubernetes figures out how to route the request for you, making microservices easier to manage.
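A sketch of such a service definition, assuming a hypothetical user service whose pods are labeled `app: user` and listen on port 8080:

```yaml
# A hypothetical "user" Service. Other workloads in the same namespace
# can reach it at http://user-service, and Kubernetes routes each
# request to one of the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user          # selects pods labeled app=user
  ports:
    - port: 80         # port exposed by the Service
      targetPort: 8080 # port the user pods listen on
```

The calling service only needs to know the name `user-service`; Kubernetes handles endpoint discovery and load balancing behind it.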
3. Centralized, Easily Readable Configuration
Kubernetes operates on a declarative model: You describe a desired state, and Kubernetes will try to achieve that state. Kubernetes uses easily readable YAML files to describe the state you want to achieve. With Kubernetes YAML configuration, you can define anything from an application load balancer to a group of pods to run your application. A deployment configuration might specify three replicas of one of your applications’ Docker containers and two different environment variables. This easy-to-read configuration is typically stored in a Git repository, so you can see exactly when and how the configuration changed. Before Kubernetes, it was hard to know what was actually happening with interconnected systems across servers.
In addition to configuring the application containers running in your cluster, or the endpoints that can be used to access them, Kubernetes can help with configuration management. Kubernetes has a concept called ConfigMap where you can define environment variables and configuration files for your application. Similarly, objects called secrets contain sensitive information and help define how your application will run. Secrets work much like ConfigMaps, but are more obscure and less visible to end users. Chapter 2 explores all of this in detail.
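As a sketch, a ConfigMap and a Secret side by side (names and values are illustrative). Note that Secret values are base64-encoded rather than encrypted by default, which is why they are described as obscured rather than truly hidden:

```yaml
# Hypothetical ConfigMap holding environment variables and a config file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  app.properties: |
    cache.enabled=true
---
# Hypothetical Secret; values are base64-encoded, not encrypted.
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=   # base64 for "password" (example only)
```

Both objects can then be referenced from a pod spec as environment variables or mounted files, keeping configuration out of the container image.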
4. Real-Time Source of Truth
Manual and scripted releases used to be extremely stressful. You had one chance to get it right. With the built-in deployment power of Kubernetes, anybody can deploy and check on delivery status using Kubernetes’ deployment history: kubectl rollout history.
The Kubernetes API provides a real-time source of truth about deployment status. Any developer with access to the cluster can quickly find out what’s happening with a delivery or, with audit logging enabled, see the commands that were issued. That record is kept in one place for security and historical purposes. You can easily review previous deployments, see the delta between them or roll back to any of the listed revisions.
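The commands involved are short; for a hypothetical deployment named `my-app`:

```shell
# List the recorded revisions of a deployment:
kubectl rollout history deployment/my-app

# Watch the status of an in-flight rollout:
kubectl rollout status deployment/my-app
```

Anyone with cluster access can run these, so delivery status is no longer locked inside one team’s tooling.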
5. Simple Health Check Capability
This is a huge deal in your application’s lifecycle, especially during the deployment phase. In the past, applications often had no automatic restart if they crashed; instead, someone got paged in the middle of the night and had to restart them. Kubernetes, on the other hand, has automatic health checks, and if an application fails to respond for any reason, including running out of memory or just locking up, Kubernetes automatically restarts it.
To clarify, Kubernetes checks that your application is running, but it doesn’t know how to check that it’s running correctly. However, Kubernetes makes it simple to set up health checks for your application. You can check the application’s health in two ways:
- Using a liveness probe that checks whether an application has gone from a healthy state to an unhealthy one. If it makes that transition, Kubernetes will restart the container for you.
- Using a readiness probe that checks whether an application is ready to accept traffic. During a rollout, Kubernetes won’t get rid of previously working containers until the new ones pass their readiness checks. Basically, a readiness probe is a last line of defense that prevents a broken container from seeing the light of day.
Both probes are useful tools, and Kubernetes makes them easy to configure.
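A sketch of both probes on a container spec — the endpoint paths, port and timings here are illustrative, and would be tuned to your application:

```yaml
# Hypothetical probe configuration for a container listening on 8080.
livenessProbe:
  httpGet:
    path: /healthz      # restart the container if this stops responding
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready        # only route traffic once this succeeds
    port: 8080
  periodSeconds: 5
```

Keeping the two endpoints separate lets an application signal “alive but not yet ready” during startup without being restarted.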
In addition, rollbacks are rare if you have a properly configured readiness probe. If all the health checks fail, a single one-line command will roll back that deployment for you and get you back to a stable state. It’s not commonly used, but it’s there if you need it.
6. Rolling Updates and Native Rollback
To build further off the idea of a real-time source of truth and health check capabilities, another key feature of Kubernetes is rolling updates with the aforementioned native rollback. Deployments can and should be frequent, without fear of hitting a point of no return. Before Kubernetes, a common deployment pattern had the server pull in the newest application code and restart the application with it. The process was risky because some changes weren’t backwards compatible: if anything failed along the way, the application was likely dead and the software unavailable. The rollback procedure was anything but straightforward.
These workflows were problematic until Kubernetes. Kubernetes solves this problem with a deployment rollback capability that eliminates large maintenance windows and anxiety about downtime. Since Kubernetes 1.2, the deployment object is a declarative manifest containing everything that’s being delivered, including the number of replicas being deployed and the version of the software image. These items are abstracted and contained within a deployment declaration. Such manifest-based deployments have spurred new CD workflows and are an evolving best practice with Kubernetes.
Before Kubernetes shuts down existing application containers, it will start spinning up new ones. Only when the new ones are up and running correctly does it get rid of the old, stable release. Let’s say Kubernetes doesn’t catch a failed deployment — the app is running, but it’s in some sort of error state that Kubernetes doesn’t detect. In this case, DevOps engineers can use a simple Kubernetes command to undo that deployment. Furthermore, you can configure it to store as few as two changes or as many revisions as you want, and you can go back to the last deployment or many deployments earlier, all with an automated, simple Kubernetes command. This entire concept was a game-changer. Other orchestration frameworks don’t come close to handling this process in as seamless and logical a way as Kubernetes.
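The undo command described above is a one-liner; for a hypothetical deployment named `my-app`:

```shell
# Roll back to the previous revision:
kubectl rollout undo deployment/my-app

# Or roll back to a specific earlier revision:
kubectl rollout undo deployment/my-app --to-revision=2
```

How many revisions are retained is controlled by the Deployment’s `spec.revisionHistoryLimit` field, which is how you configure it to keep as few as two changes or as many as you want.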
7. Simplified Monitoring
While on the surface it might seem that monitoring Kubernetes would be quite complex, the tooling in this space has matured considerably. Although Kubernetes and containers add some complexity to your infrastructure, they also ensure that all your applications run in consistent pods and deployments. This consistency enables monitoring tools to be simpler in many ways.
Prometheus is an example of an open source monitoring tool that has become very popular in the cloud-native ecosystem. This tool provides advanced monitoring and alerting capabilities, with excellent Kubernetes integrations.
When monitoring Kubernetes, there are a few key components to watch: Kubernetes Nodes (servers); Kubernetes system deployments, such as DNS or networking; and, of course, your application itself. There are many monitoring tools that will simplify monitoring each of these components.
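One widely used convention — supported by many Prometheus scrape configurations, though not built into Prometheus itself — is to annotate pods so they are discovered and scraped automatically. The port and path below are illustrative:

```yaml
# Hypothetical pod metadata opting into scraping, assuming the cluster's
# Prometheus is configured to honor these conventional annotations.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
```

Because every application lands in the cluster the same way, one such convention can cover all of them, rather than configuring each service’s monitoring by hand.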
The Cloud Native Computing Foundation, which manages Kubernetes, is a sponsor of The New Stack.