
5 Best Practices for Configuring Kubernetes Pods Running in Production

How to tune Kubernetes to get the most out of your production workloads.
Jan 10th, 2020 9:59am
Feature image via Pixabay.
Editor’s Note: Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.

A pod in Kubernetes represents the fundamental deployment unit. It may contain one or more containers packaged and deployed as a logical entity. A cloud native application running in Kubernetes may contain multiple pods mapped to each microservice. Pods are also the unit of scaling in Kubernetes.

Here are five best practices to follow before deploying pods in Kubernetes. While there are many other configurations you could apply, these are the most essential practices for bringing basic hygiene to cloud native applications.

1) Choose the Most Appropriate Kubernetes Controller

While it may be tempting to deploy and run a container image as a generic pod, you should select the right controller type based on the workload characteristics. Kubernetes has a primitive called a controller, and each controller type aligns with a key characteristic of the workload. Deployment, StatefulSet, and DaemonSet are the most commonly used controllers in Kubernetes.

When deploying stateless pods, always use the Deployment controller. This brings PaaS-like capabilities to pods through scaling, deployment history, and rollback features. When a Deployment is configured with a minimum replica count of two, Kubernetes ensures that at least two pods are always running, which brings fault tolerance. Even when deploying a pod with just one replica, it is highly recommended that you use a Deployment controller instead of a plain vanilla pod specification.
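
For example, a minimal Deployment manifest for a stateless web pod might look like the following sketch (the name, label, and image are illustrative placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                  # keep at least two pods running for basic fault tolerance
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17      # illustrative image; substitute your application container

Applying this manifest with kubectl apply -f gives you rolling updates, rollout history, and rollback through kubectl rollout, none of which are available with a bare pod.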

For workloads such as database clusters, a StatefulSet controller will create a highly available set of pods that have a predictable naming convention. Stateful workloads such as Cassandra, Kafka, ZooKeeper, and SQL Server that need to be highly available are deployed as StatefulSets in Kubernetes.
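
As a sketch, a Cassandra StatefulSet could be declared roughly as follows; it assumes a headless Service named cassandra already exists, and the image, sizes, and labels are illustrative:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra          # assumes a headless Service with this name exists
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.11
        volumeMounts:
        - name: data
          mountPath: /var/lib/cassandra
  volumeClaimTemplates:           # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

Each replica gets a stable, predictable name (cassandra-0, cassandra-1, and so on) and its own volume, which survives pod rescheduling.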

When you need to run a pod on every node of the cluster, you should use the DaemonSet controller. Since Kubernetes automatically schedules a DaemonSet on newly provisioned worker nodes, it becomes an ideal candidate to configure and prepare the node for the workload. For example, if you want to mount an existing NFS or Gluster file share on the node before deploying the workload, package and deploy the pod as a DaemonSet.
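
A minimal DaemonSet sketch for such a node-preparation pod might look like this (busybox and the hostPath are placeholders for your actual setup image and mount):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-prep
spec:
  selector:
    matchLabels:
      app: node-prep
  template:
    metadata:
      labels:
        app: node-prep
    spec:
      containers:
      - name: node-prep
        image: busybox:1.31        # placeholder; use your node-preparation image
        command: ["sh", "-c", "echo preparing node; while true; do sleep 3600; done"]
        volumeMounts:
        - name: host-mnt
          mountPath: /host-mnt
      volumes:
      - name: host-mnt
        hostPath:
          path: /mnt               # hostPath used for illustration; an NFS or Gluster volume works similarly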

Make sure you are choosing the most appropriate controller type before deploying pods.

2) Configure Health Checks for Pods

By default, pods have their restartPolicy set to Always, which means the kubelet running on the node will automatically restart a pod's containers when they exit with an error.

Health checks extend this capability of the kubelet through the concept of container probes. Three probes can be configured for each container in a pod: readiness, liveness, and startup.

You may have encountered a situation where a pod is in the Running state but the READY column shows 0/1. This indicates that the pod is not yet ready to accept requests. A readiness probe ensures that the prerequisites are met before the pod starts accepting traffic. For example, a pod serving a machine learning model needs to download the latest version of the model before serving inferences. The readiness probe can check for the presence of the model file and hold the pod back from the ready state until it appears. Similarly, a readiness probe in a CMS pod can ensure that the datastore is mounted and accessible.
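
For the machine learning example above, a readiness probe might simply check that the model file exists before the pod is marked ready; the image and file path below are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: model-server
spec:
  containers:
  - name: model-server
    image: model-server:1.0                          # hypothetical inference-serving image
    readinessProbe:
      exec:
        command: ["test", "-f", "/models/model.pkl"] # hypothetical path to the downloaded model
      initialDelaySeconds: 5
      periodSeconds: 10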

The liveness probe periodically checks the health of the container and reports it to the kubelet. When this health check fails, the kubelet kills the container and restarts it according to the pod's restart policy. For example, a MySQL pod may include a liveness probe that continuously checks the state of the database engine.
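
As a sketch, the MySQL pod below uses a simple TCP liveness probe on port 3306; a real deployment would typically pull credentials from a Secret rather than the placeholder value shown:

apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "change-me"            # placeholder for illustration only
    livenessProbe:
      tcpSocket:
        port: 3306                  # restart the container if MySQL stops accepting connections
      initialDelaySeconds: 30
      periodSeconds: 10
      failureThreshold: 3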

The startup probe, which is still in alpha as of Kubernetes 1.16, allows containers to wait for a longer period before handing the health check over to the liveness probe. This is helpful when porting legacy applications to Kubernetes that take an unusually long time to start.
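
A rough sketch of the idea, assuming a slow-starting legacy app with a /healthz endpoint on port 8080 (both hypothetical) and a cluster with the startup probe feature enabled:

apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: legacy-app
    image: legacy-app:1.0           # hypothetical slow-starting application
    startupProbe:
      httpGet:
        path: /healthz              # hypothetical health endpoint
        port: 8080
      failureThreshold: 30          # up to 30 x 10 = 300 seconds allowed for startup
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10             # takes over only after the startup probe succeeds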

All the above health checks can be configured with commands, HTTP probes, and TCP probes.

Refer to the Kubernetes documentation on the steps to configure health checks.

3) Make use of an Init Container to Prepare the Pod

There are scenarios where a container needs initialization before it is ready to serve. That initialization can be moved to a separate init container that does the groundwork before the pod moves to a ready state. An init container can be used to download files, create directories, change file permissions, and more.

An init container can even be used to ensure that pods are started in a specific sequence. For example, an init container can wait until the MySQL service becomes available before the WordPress container starts.
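
A minimal sketch of that pattern, assuming a Service named mysql exists in the same namespace; the init container simply waits for its DNS record to resolve before WordPress starts:

apiVersion: v1
kind: Pod
metadata:
  name: wordpress
spec:
  initContainers:
  - name: wait-for-mysql
    image: busybox:1.31
    command: ["sh", "-c", "until nslookup mysql; do echo waiting for mysql; sleep 2; done"]
  containers:
  - name: wordpress
    image: wordpress:5.3
    ports:
    - containerPort: 80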

A pod may contain multiple init containers, with each container performing a specific initialization task. Init containers run one at a time, in order, before the application containers start.

4) Apply Node/Pod Affinity and Anti-Affinity Rules

The Kubernetes scheduler does a good job of placing pods on appropriate nodes based on the resource requirements of the pod and the resource consumption within the cluster. However, there may be a need to control the way pods are scheduled on nodes. Kubernetes provides two mechanisms: node affinity/anti-affinity and pod affinity/anti-affinity.

Node affinity extends the already powerful nodeSelector rule to cover additional scenarios. Like the way annotations make labels and selectors more expressive and extensible, node affinity makes nodeSelector more expressive through additional rules. Node affinity ensures that pods are scheduled on nodes that meet specific criteria. For example, a stateful database pod can be forced onto a node that has an SSD attached. Similarly, node anti-affinity helps avoid scheduling pods on nodes that are likely to cause issues.
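
For the SSD example, a sketch might look like the following, assuming the SSD-backed nodes carry a disktype=ssd label (a label you would apply yourself):

apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype           # hypothetical label applied to SSD-backed nodes
            operator: In
            values:
            - ssd
  containers:
  - name: db
    image: postgres:12              # illustrative image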

While node affinity does the matchmaking between pods and nodes, there may be scenarios where you need to co-locate pods for performance or compliance reasons. Pod affinity helps place pods that need to share the same node. For example, an Nginx web server pod can be scheduled on the same node as a Redis cache pod, ensuring low latency between the web app and the cache. In other scenarios, you may want to avoid running two pods on the same node. When deploying highly available workloads, you may want to ensure that no two instances of the same pod run on the same node. Pod anti-affinity enforces rules that prevent this possibility.
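
The sketch below combines both ideas: it co-locates the web pod with any pod labeled app=redis and refuses to place two app=web pods on the same node (labels and images are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - redis                          # co-locate with pods labeled app=redis
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web                            # never place two app=web pods on the same node
        topologyKey: kubernetes.io/hostname
  containers:
  - name: nginx
    image: nginx:1.17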

Analyze your workload to assess if you need to utilize node and pod affinity strategies for the deployments.

5) Take Advantage of Auto Scalers

Hyperscale cloud platforms such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform have built-in auto-scaling engines that can scale a fleet of VMs in and out based on average resource consumption or external metrics.

Kubernetes has similar auto-scaling capabilities for deployments in the form of the horizontal pod autoscaler (HPA), the vertical pod autoscaler (VPA), and the Cluster Autoscaler.

The horizontal pod autoscaler automatically scales the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization. HPA is represented as an object within Kubernetes, which means it can be declared through a YAML file and controlled via the kubectl CLI. Similar to IaaS auto-scaling engines, HPA supports defining a CPU threshold, minimum and maximum pod counts, a cooldown period, and even custom metrics.
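
A sketch of an HPA targeting the Deployment from the earlier example, using the autoscaling/v1 API (names and thresholds are illustrative):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                           # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70    # add pods when average CPU crosses 70%

The same object can also be created imperatively with kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10.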

Vertical pod autoscaling removes the guesswork involved in defining the CPU and memory configuration of a pod. This autoscaler can recommend appropriate values for CPU and memory requests and limits, or it can automatically update them. Its update mode decides whether existing pods are evicted and recreated with the new values or continue to run with the old configuration. Querying the VPA object shows the optimal CPU and memory requests through lower and upper bounds.
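
A sketch of a VPA in recommendation-only mode; this assumes the VPA CRDs and controller are installed in the cluster, and the API version may differ between releases:

apiVersion: autoscaling.k8s.io/v1beta2    # requires the VPA add-on; the API version varies by release
kind: VerticalPodAutoscaler
metadata:
  name: web
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Off"                     # "Off" only records recommendations; "Auto" lets VPA evict and resize pods

Running kubectl describe vpa web then shows the lower bound, upper bound, and target recommendations mentioned above.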

While HPA and VPA scale the deployments and pods, Cluster Autoscaler will expand and shrink the size of the pool of worker nodes. It is a standalone tool to adjust the size of a Kubernetes cluster based on the current utilization. Cluster Autoscaler increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources or when adding a new node would increase the overall availability of cluster resources. Behind the scenes, Cluster Autoscaler negotiates with the underlying IaaS provider to add or remove nodes. Combining HPA with Cluster Autoscaler delivers maximum performance and availability of workloads.

In the upcoming tutorials, I will cover each of the best practices in detail with use cases and scenarios. Stay tuned.
