
How Does Kubernetes Work?

Aug 24th, 2020 9:00am
The following post is an excerpt from The New Stack’s new eBook, “The State of the Kubernetes Ecosystem.”

A contemporary application, packaged as a set of containers and deployed as microservices, needs an infrastructure robust enough to deal with the demands of clustering and the stress of dynamic orchestration. Such an infrastructure should provide primitives for scheduling, monitoring, upgrading and relocating containers across hosts. It must treat the underlying compute, storage and network primitives as a pool of resources. Each containerized workload should be capable of taking advantage of the resources exposed to it, including CPU cores, storage units and networks.

Kubernetes is an open source distributed system that abstracts the underlying physical infrastructure, making it easier to run containerized applications at scale. An application, managed through the entirety of its life cycle by Kubernetes, is composed of containers gathered together as a set and coordinated into a single unit. An efficient cluster manager layer lets Kubernetes effectively decouple this application from its supporting infrastructure, as depicted in the figure below. Once the Kubernetes infrastructure is fully configured, DevOps teams can focus on managing the deployed workloads instead of dealing with the underlying resource pool of CPU and memory, which Kubernetes handles.

Kubernetes Works Like an Operating System

Kubernetes is an example of a well-architected distributed system. It treats all the machines in a cluster as a single pool of resources. It takes up the role of a distributed operating system by effectively managing the scheduling, allocating the resources, monitoring the health of the infrastructure, and even maintaining the desired state of infrastructure and workloads. Kubernetes is an operating system capable of running modern applications across multiple clusters and infrastructures on cloud services and private data center environments.

Like any other mature distributed system, Kubernetes has two layers consisting of the head nodes and worker nodes. The head nodes typically run the control plane responsible for scheduling and managing the life cycle of workloads. The worker nodes act as the workhorses that run applications. The collection of head nodes and worker nodes becomes a cluster.

Caption: The big picture of a Kubernetes cluster. Source: Janakiram MSV.

The DevOps teams managing the cluster talk to the control plane’s API via the command-line interface (CLI) or third-party tools. The users access the applications running on the worker nodes. The applications are composed of one or more container images that are stored in an accessible image registry.
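To make this concrete, here is a minimal sketch of what such tools do under the hood, using the official Kubernetes Python client. It assumes a kubeconfig file (for example, ~/.kube/config) already points at a running cluster:

    # pip install kubernetes
    from kubernetes import client, config

    config.load_kube_config()    # read the API server address and credentials
    v1 = client.CoreV1Api()      # typed wrapper around the core API group

    # Equivalent in spirit to "kubectl get nodes".
    for node in v1.list_node().items:
        print("node:", node.metadata.name)

    # List the workloads running across all namespaces.
    for pod in v1.list_pod_for_all_namespaces().items:
        print("pod:", pod.metadata.namespace, pod.metadata.name)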

Caption: The role of Head Node in Kubernetes architecture. Source: Janakiram MSV.

The Kubernetes Control Plane

The control plane runs the Kubernetes components that provide the core functionalities: exposing the Kubernetes API, scheduling the deployments of workloads, managing the cluster, and directing communications across the entire system. As depicted in the second diagram, the head monitors the containers running in each node as well as the health of all the registered nodes. Container images, which act as the deployable artifacts, must be available to the Kubernetes cluster through a private or public image registry. The nodes that are responsible for scheduling and running the applications access the images from the registry via the container runtime.

Caption: Components of the head node. Source: Janakiram MSV.

The Kubernetes head node runs the following components that form the control plane:

etcd

Developed by CoreOS, which was later acquired by Red Hat, etcd is a persistent, lightweight, distributed, key-value data store that maintains the cluster’s configuration data. It represents the overall state of the cluster at any given point in time, acting as the single source of truth. Various other components and services watch for changes to the etcd store to maintain the desired state of an application. That state is defined by a declarative policy: in effect, a document that states the optimum environment for an application, so the orchestrator can work to attain that environment. This policy defines how the orchestrator addresses the various properties of an application, such as the number of instances, storage requirements and resource allocation.

The etcd database is accessible only through the API server. Any component of the cluster that needs to read from or write to etcd does so through the API server.
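That watch pattern can be sketched with the same Python client. Rather than polling etcd directly, a component asks the API server to stream change events; the namespace and resource type below are only an illustration:

    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Stream ADDED/MODIFIED/DELETED events for pods in the default namespace.
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace="default",
                          timeout_seconds=60):
        print(event["type"], event["object"].metadata.name)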

API Server

The API server exposes the Kubernetes API by means of JSON over HTTP, providing the representational state transfer (REST) interface for the orchestrator’s internal and external endpoints. The CLI, the web user interface (UI), or another tool may issue a request to the API server. The server processes and validates the request, and then updates the state of the API objects in etcd. This enables clients to configure workloads and containers across worker nodes.
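A simple way to observe that REST interface is to run kubectl proxy in a separate terminal, which exposes the API server on localhost (port 8001 by default) with authentication handled for you. The request below reads the same resource that "kubectl get pods" consumes:

    import requests

    # kubectl proxy serves the API on 127.0.0.1:8001 by default.
    BASE = "http://127.0.0.1:8001"

    resp = requests.get(f"{BASE}/api/v1/namespaces/default/pods")
    for item in resp.json()["items"]:
        print(item["metadata"]["name"], item["status"]["phase"])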

Scheduler

The scheduler selects the node on which each workload should run, based on its assessment of resource availability, and then tracks resource allocation to ensure no node is committed beyond its capacity. It maintains and tracks resource requirements, resource availability, and a variety of other user-provided constraints and policy directives; for example, quality of service (QoS), affinity/anti-affinity requirements and data locality. An operations team may define the resource model declaratively. The scheduler interprets these declarations as instructions for provisioning and allocating the right set of resources to each workload.
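In practice, those requirements and constraints live in each workload’s spec. The sketch below (the pod name, image and disktype label are illustrative) declares CPU and memory requests for the scheduler to reserve, limits for the runtime to enforce, and a node selector restricting which nodes qualify:

    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="scheduling-demo"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="web",
                image="nginx:1.25",
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "250m", "memory": "128Mi"},  # scheduler reserves this
                    limits={"cpu": "500m", "memory": "256Mi"},    # runtime enforces this
                ),
            )],
            node_selector={"disktype": "ssd"},  # only nodes carrying this label
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)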

Controller-Manager

The part of Kubernetes’ architecture which gives it its versatility is the controller-manager, which runs on the head node. The controller-manager’s responsibility is to ensure that the cluster maintains the desired state of its applications at all times through a set of well-defined controllers. A controller is a control loop that watches the shared state of the cluster through the API server and makes changes attempting to move the current state towards the desired state.

The controller maintains the stable state of nodes and pods by constantly monitoring the health of the cluster and the workloads deployed on it. For example, when a node becomes unhealthy, the pods running on that node may become inaccessible. In such a case, it’s the job of the controller to schedule the same number of new pods on a different node. This activity ensures that the cluster maintains the expected state at any given point in time.
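The control loop pattern itself can be sketched in a few lines: observe the current state through the API server, compare it with the desired state, and act to close any gap. The toy below keeps a fixed number of labeled pods alive; the app=demo label and pod template are assumptions, and real controllers run inside the controller-manager rather than as standalone scripts:

    import time
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    DESIRED_REPLICAS = 3

    def make_pod():
        return client.V1Pod(
            metadata=client.V1ObjectMeta(generate_name="demo-",
                                         labels={"app": "demo"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="nginx:1.25")]))

    while True:
        # Observe: what does the cluster currently look like?
        pods = v1.list_namespaced_pod(namespace="default",
                                      label_selector="app=demo").items
        live = [p for p in pods if p.metadata.deletion_timestamp is None]

        # Act: create replacements until observed state matches desired state.
        for _ in range(DESIRED_REPLICAS - len(live)):
            v1.create_namespaced_pod(namespace="default", body=make_pod())

        time.sleep(10)    # then re-observe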

Kubernetes comes with a set of built-in controllers that run inside the controller-manager. These controllers offer primitives that are aligned with a certain class of workloads, such as stateless, stateful, scheduled cron jobs and run-to-completion jobs. Developers and operators can take advantage of these primitives while packaging and deploying applications in Kubernetes.
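For instance, a stateless workload is handed to the built-in deployment controller by declaring a Deployment with a desired replica count, which the controller then maintains on the team’s behalf; the names and image below are illustrative:

    from kubernetes import client, config

    config.load_kube_config()

    labels = {"app": "hello"}
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="hello-deploy"),
        spec=client.V1DeploymentSpec(
            replicas=3,    # the desired state the controller maintains
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="web", image="nginx:1.25")]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                    body=deployment)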

The next articles in this series explore Kubernetes architecture in more detail, including the key components of the worker nodes and workloads; services and service discovery; and networking and storage.
