The following post is an excerpt from The New Stack’s new eBook, “The State of the Kubernetes Ecosystem.”
Two trends that have influenced modern infrastructure are containers and DevOps. The DevOps ecosystem evolved to deliver continuous integration, continuous testing, continuous deployment and continuous monitoring, which increased the velocity of software development. The adoption of containers combined with proven DevOps best practices resulted in rapid deployments at scale.
While containers help increase developer productivity, orchestration tools offer many benefits to organizations seeking to optimize their DevOps and operations investments. Some of the benefits of container orchestration include:
- Efficient resource management.
- Seamless scaling of services.
- High availability.
- Low operational overhead at scale.
- A declarative model in most orchestration tools, which reduces friction and enables more autonomous management.
- The operational control of Infrastructure as a Service (IaaS), with the manageability of Platform as a Service (PaaS).
- Consistent operating experience across on-premises and public cloud providers.
Operators choose IaaS for its control and automation; developers prefer PaaS for its flexibility, scale and productivity. Container orchestration tools bring the best of both worlds: automation and scale.
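The declarative model mentioned above can be illustrated with a minimal Kubernetes Deployment manifest. This is a sketch only; the names and image (`web`, `nginx:1.25`) are hypothetical:

```yaml
# A hypothetical declarative Deployment: you describe the desired state
# (three replicas of an nginx container) and Kubernetes continuously
# works to make the actual state match it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

If a container crashes or a node fails, the orchestrator reconciles the actual state back to the declared three replicas without operator intervention — this is the reduced friction and autonomous management the declarative model provides.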
Why Container Orchestration Is Needed at Scale
Containers solved the developer productivity problem, making the DevOps workflow seamless. Developers could create a container image, run it locally, and develop code inside that container before deploying it to local data centers or public cloud environments. Yet this seamlessness in the development workflow does not automatically translate into efficiency in production environments.
The production environment is often quite different from the local environment of a developer’s laptop. Whether you’re running traditional three-tier applications at scale or microservices-based applications, managing a large number of containers and the cluster of nodes supporting them is no easy task. Orchestration is the component required to achieve scale, because scale requires automation.
The distributed nature of cloud computing brought with it a paradigm shift in how we perceive virtual machine infrastructure. The notion of “cattle vs. pets” — treating a container more like a unit of livestock than a favorite animal — helped reshape people’s mindsets about the nature of containers and infrastructure.
Both containers and the infrastructure they run on are immutable — a paradigm in which containers or servers are never modified after they’re deployed. If something needs to be updated, fixed or modified in any way, new containers or servers built from a common image with the appropriate changes are provisioned to replace the old ones. This approach is comparable to managing cattle at a dairy farm.
On the other hand, traditional servers and even virtual machines are not treated as immutable — they are more like pets and therefore not disposable. So their maintenance cost is high, because they constantly need the attention of the operations team.
Immutable infrastructure is programmable, which allows for automation. Infrastructure as Code (IaC) is one of the key attributes of modern infrastructure, in which an application can programmatically provision, configure and utilize the infrastructure to run itself.
The combination of container orchestration, immutable infrastructure, and automation based on IaC delivers flexibility and scale.
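A sketch of what immutable, cattle-style updates look like in practice, assuming a Kubernetes Deployment and a hypothetical image registry and tag: rather than patching a running container, you change the image reference in the declarative spec and re-apply it.

```yaml
# Hypothetical fragment of a Deployment spec. To ship a fix, you never
# modify running containers; you bump the image tag and re-apply the spec:
spec:
  template:
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.4.2   # previously :1.4.1
```

The orchestrator then performs a rolling update, provisioning new containers from the updated image and retiring the old ones — replacement rather than repair, as with cattle rather than pets.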
Features of a Container Orchestration Platform
Putting these notions into practice, running containers at scale extended and refined the concepts of scaling and resource availability.
The baseline features of a typical container orchestration platform include:
- Resource management.
- Service discovery.
- Health checks.
- Updates and upgrades.
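Of the baseline features above, health checks are perhaps the most concrete to illustrate. In Kubernetes they take the form of liveness and readiness probes; the paths, ports and image below are hypothetical:

```yaml
# Hypothetical container spec fragment showing health checks:
# the liveness probe restarts a hung container, while the readiness
# probe withholds traffic until the container can serve it.
containers:
- name: api
  image: registry.example.com/api:2.0
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 15
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
```

With probes declared, the platform — not the operations team — detects and replaces unhealthy containers, which is what keeps operational overhead low at scale.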
The container orchestration market is currently dominated by Kubernetes. It has gained the acceptance of enterprises, platform vendors, cloud providers and infrastructure companies.
Container orchestration encourages the use of the microservices architecture pattern, in which an application is composed of smaller, atomic, independent services — each one designed for a single task. Each microservice is packaged as a container, and multiple microservices logically belonging to the same application are orchestrated by Kubernetes at runtime.
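To sketch how orchestrated microservices find each other, here is a hypothetical Kubernetes Service (the `orders` name is assumed for illustration): each microservice gets a stable DNS name and virtual IP, and the platform load-balances across whichever containers currently back it.

```yaml
# Hypothetical Service for an "orders" microservice. Other services in
# the cluster reach it at the stable DNS name "orders" regardless of
# which Pods are running behind it at any moment.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080
```

This built-in service discovery is what lets many small, independently deployed services compose into one logical application at runtime.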
The rise of Kubernetes has resulted in the creation of new market segments built around the container orchestration and management platform. From storage to networking to monitoring to security, there is a new breed of companies and startups building container-native products and services. Later in this series, we’ll highlight some of the building blocks of cloud native platforms and startups from this emerging ecosystem. Next up, learn about Kubernetes architecture.
Feature image via Pexels.