NGINX sponsored this post.
Applications have fundamentally changed with the shift toward cloud native approaches powered by Kubernetes and containers. Modern applications are commonly API-based, deployed frequently across multiple environments around the globe, and composed of small microservices. This paradigm shift has created several challenges, chief among them how to effectively manage and secure a complex portfolio of microservices maintained by distributed teams.
A recent report revealed that more than 380,000 Kubernetes APIs around the world were exposed to the public internet without proper security policies. This widespread vulnerability isn’t simply a security failing. It’s a symptom of the growing complexity that practitioners face. Distributed environments lack adequate visibility, creating gaps in governance, expanding the threat surface and increasing the likelihood of outages due to misconfigured clusters and services.
As more enterprises deploy cloud native applications, the need arises for a management plane to abstract and simplify this complexity. As a brief recap, the cloud native and container management realm operates on three different planes: data, control and management.
- Data plane — Houses and transports application and data traffic.
- Control plane — Configures rules for the data plane.
- Management plane — Sets guardrails for the data and control planes.
A meta-layer that floats “above” the control and data planes, the management plane operates at a higher altitude in the stack where it is possible and necessary to set global policies and configurations that apply across all applications, APIs and microservices. This layer can also govern and apply policy by application groups, types or geolocations. It’s key to distinguish between the control plane and the management plane and understand how they can overlap.
In this article, we define each plane and highlight key distinctions between the control and management planes through the lenses of altitude, business case and technical requirements.
A Data Plane Recap
The data plane is where the rubber hits the road. User experience, latency and all other key metrics that determine application performance depend on a responsive, reliable and highly scalable data plane. This is why your data plane is not a commodity and is crucial to building high-performance modern apps at scale. The data plane is fundamentally the layer for implementing what an application is supposed to do and how it’s supposed to behave. All policies, service-level agreements (SLAs) and scaling or behavior triggers — like retries, keepalives and horizontal scaling — are executed within the data plane.
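Behavior triggers like retries are ultimately executed by data-plane components such as a proxy or client library. As a minimal sketch of that idea (not any particular proxy's implementation), here is a retry policy with exponential backoff in Python; `send_request` and `flaky_upstream` are hypothetical stand-ins for a real upstream call:

```python
import time

def call_with_retries(send_request, max_attempts=3, base_delay=0.1):
    """Retry a failing upstream call with exponential backoff.

    `send_request` is a hypothetical callable standing in for the
    actual network call a data-plane proxy would make.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return send_request()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # retry budget exhausted; surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.1s, 0.2s, ...

# Example: a hypothetical upstream that fails twice, then succeeds.
failures = iter([ConnectionError(), ConnectionError(), "200 OK"])

def flaky_upstream():
    result = next(failures)
    if isinstance(result, Exception):
        raise result
    return result
```

In a real deployment this policy would live in the data plane's configuration, with the retry budget and backoff set as guardrails from the layers above.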
The data plane looks a bit different in Kubernetes than in older architectures. It consists of worker nodes running pods and containers. Each node has a kubelet agent, which receives configuration instructions from the API server (control plane), directs the container runtime, and reports node and pod state back so it can be persisted in the cluster's state store (a key-value database such as etcd). While somewhat different in construction and design than the data plane used in traditional three-tier web apps, the function of the Kubernetes data plane is roughly the same: to make sure apps perform well.
The Control Plane and the Management Plane
The control plane resides above the data plane as a separate entity. Control planes originated as policy engines for Layer 4 networking; in Kubernetes, the control plane also influences Layer 7 traffic. The data plane directly controls the flow of data through applications and the way applications behave at the pod level. The control plane formulates and distributes guidance to the data plane, overseeing orchestration and coordination of containers, nodes, pods and clusters. The control plane’s components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when an existing one fails or becomes unresponsive).
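That "detect and respond" behavior is a reconciliation loop: compare desired state with observed state and issue corrective actions. The toy sketch below illustrates the pattern in Python; `start_pod` and `stop_pod` are hypothetical callbacks standing in for the scheduler and kubelet machinery that actually act:

```python
def reconcile(desired_replicas, running_pods, start_pod, stop_pod):
    """One pass of a toy control loop: compare desired state with
    observed state and issue corrective actions to the data plane."""
    healthy = [p for p in running_pods if p["healthy"]]
    # Tear down failed pods, then scale toward the desired count.
    for pod in running_pods:
        if not pod["healthy"]:
            stop_pod(pod)
    for _ in range(max(0, desired_replicas - len(healthy))):
        start_pod()
    for pod in healthy[desired_replicas:]:
        stop_pod(pod)  # scale down if over-replicated

# Example: 3 replicas desired, one pod has failed.
actions = []
pods = [{"name": "pod-a", "healthy": True},
        {"name": "pod-b", "healthy": False},
        {"name": "pod-c", "healthy": True}]
reconcile(3, pods,
          start_pod=lambda: actions.append("start"),
          stop_pod=lambda p: actions.append(f"stop {p['name']}"))
```

A real control plane runs loops like this continuously, which is why the cluster converges back to the declared state after a failure.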
Control-plane components can run on any instance in the cluster. However, provisioning scripts typically co-locate all control-plane components on the same machine and segregate this machine by not running user containers there. In a nutshell, the control plane is like the traffic cop, enforcing the rules of the road for data whizzing around in the data plane.
Here is where I want to introduce the management plane, the plane that we see Platform Ops teams creating to enable more agile and developer-centric application development. Although its function is similar, the management plane rides above the control plane. This higher layer is designed to streamline and simplify configuration of the control plane for easier scaling, observability and resilience.
Why We Need a Management Plane
In the era of modern apps, it’s unrealistic to ask the teams building microservices to learn how to manage the data and control planes. The learning curve creates not only an additional burden but also a failure point. At the same time, in the shift-left era, organizations need to expose the power of the control plane to a broader array of stakeholders so they can be more effective in their work. The management plane is more of a SaaS-like interface that enables even semitechnical team members to make decisions on application policy, governance and behavior.
While a service mesh can cover some of the ground here, there are many parties that can benefit from a separate and robust management plane. These may include network operations teams and lines of business (marketing teams, security teams, compliance teams and so on). The management plane is also the place where Platform Ops teams can put transparent guardrails in place to ensure that users don’t hurt themselves or others. These guardrails allow teams and organizations to move faster, ship code more quickly and operate with considerably less risk.
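As a concrete illustration of such guardrails, the sketch below validates a team's service request against organization-wide policy before it ever reaches the control plane. The policy fields (`max_replicas`, `require_tls`, `allowed_regions`) are hypothetical, not any real management-plane API:

```python
# Hypothetical org-wide guardrails a Platform Ops team might set in
# the management plane; field names are illustrative only.
GUARDRAILS = {
    "max_replicas": 20,          # cap horizontal scaling
    "require_tls": True,         # all services must terminate TLS
    "allowed_regions": {"us-east", "eu-west"},
}

def validate(config, guardrails=GUARDRAILS):
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    if config.get("replicas", 1) > guardrails["max_replicas"]:
        violations.append("replicas exceeds the organization-wide cap")
    if guardrails["require_tls"] and not config.get("tls", False):
        violations.append("TLS is required for all services")
    if config.get("region") not in guardrails["allowed_regions"]:
        violations.append("region is not on the approved list")
    return violations

# A team's service request that trips the scaling guardrail.
request = {"replicas": 50, "tls": True, "region": "us-east"}
```

Because the check runs in the management plane, a marketing or compliance team gets a clear rejection message instead of a misconfigured cluster.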
The need for a management plane is becoming more acute as organizations continue to atomize services and functions down into more discrete elements, each of which requires control and governance. The core system used to run applications is growing more complex; this is precisely why Kubernetes is used to manage distributed, containerized applications of all different shapes and sizes. This complexity is not going to reverse any time soon.
More Complexity Requires Smart Abstractions to Future-Proof Adaptive Apps
As we move toward increasingly distributed applications running from edge to cloud, and as application deployment environments become even more diverse, we need to give application teams more choices to help them shift left. Shifting left expands beyond application teams to other teams that are less technical (marketing, compliance) or highly technical but overtaxed (network operations), giving them new capabilities to do their jobs better.
In this hybrid and fast-evolving reality, a management plane is needed to effectively connect, operate and secure a complex portfolio of microservices and applications. At NGINX, we are building a suite of tools to ensure observability, reliability, governance and security across all three planes. Organizations that thrive will have to play well at all three planes, and design for the big picture of deploying, managing, securing, and iterating on modern apps.
Feature image via Pixabay