Nirmata sponsored this post.
With every major paradigm shift in infrastructure comes a major paradigm shift in software. Think about it: mainframes led to batch and procedural programming. Client-server led to object-oriented programming and early attempts at distributed objects (CORBA, anyone?). Now, cloud computing is transforming how software is architected, designed, distributed and consumed. This makes sense, as software development, software delivery and software runtimes all need to leverage the underlying infrastructure.
Today, every business is a software business, and software has become business-critical — or at least as critical as other core infrastructure services. Businesses that deliver value faster to their customers will win, and software development and operations skills have become a fundamental business differentiator. Cloud native enterprises leverage best practices in cloud computing to accelerate time-to-value.
In this post, I describe how IT leaders capitalize on the potential of Kubernetes across their enterprise.
More specifically, I:
- define what it means to be cloud native;
- help you understand where Kubernetes fits;
- discuss the attributes of the environment that IT operations teams will require;
- dissect the supporting technologies;
- and describe the architecture required to make your cloud native projects successful across the enterprise.
Defining Cloud Native
In simple terms, cloud native systems are designed to leverage cloud infrastructure — for example, platforms that are on-demand, elastic and resilient. But this does not tell us how to build a cloud native system. Luckily, the Cloud Native Computing Foundation (CNCF) has worked on a more detailed definition:
“Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”
Let’s dissect the CNCF definition of cloud native to examine its key attributes, as well as the technologies used to build cloud native systems.
Cloud Native Attributes
- Scalable: cloud infrastructure services are available on demand, and applications built for clouds are designed to leverage this “on-tap” nature of underlying services. Cloud native systems must be able to scale up and scale down based on load, performance or other criteria;
- Resilient: as Werner Vogels famously declared, “Failures are a given and everything will eventually fail over time.” Cloud native systems are designed to tolerate failures. By decomposing larger applications into smaller components, the impact of failures can be limited;
- Manageable: what does it mean for software systems to be “manageable”? Systems that are configurable and can be modified or updated without loss of service can be considered easy to manage. It’s no coincidence that these same behaviors are codified in best practices such as 12-factor apps and microservices-style architectures;
- Observable: in control systems and reactive design patterns, an Observable is an object that emits events that allow an external entity (the Observer) to easily infer internal states and conditions. Similarly, cloud native systems are designed for observability and provide detailed events on system state changes and other conditions.
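To make the Observable attribute concrete, here is a minimal sketch of the observer pattern in Python: a hypothetical `Service` class (not from any real framework) emits events on every state change, so an external monitoring agent can infer the service’s internal state without polling it.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Service:
    """A toy 'observable' service that emits events on state changes."""
    name: str
    state: str = "starting"
    _observers: list = field(default_factory=list)

    def subscribe(self, observer: Callable[[str, str], None]) -> None:
        """Register an external observer (e.g. a monitoring agent)."""
        self._observers.append(observer)

    def set_state(self, new_state: str) -> None:
        self.state = new_state
        # Emit an event so observers can track internal state transitions.
        for observer in self._observers:
            observer(self.name, new_state)


# An external observer records every state-change event it receives.
events = []
svc = Service("checkout")
svc.subscribe(lambda name, state: events.append((name, state)))
svc.set_state("ready")
svc.set_state("degraded")
# events now holds [("checkout", "ready"), ("checkout", "degraded")]
```

In a real cloud native system the “events” would be metrics, logs or Kubernetes watch notifications rather than in-process callbacks, but the principle is the same: the system pushes state changes out, rather than forcing observers to guess.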
Cloud Native Technologies
Now that we understand what attributes a cloud native system must possess, let’s take a look at some of the key technologies and techniques used to build cloud native systems:
- Declarative APIs: a declarative API captures user intent — i.e., the desired state of the system — without concern for how the system achieves that state. In other words, a cloud native system’s interface must provide abstractions that allow users to specify what they want rather than how the system should behave. In contrast, an imperative interface requires users to specify the instructions the system must follow;
- Containers: containers have rapidly emerged as the best way to package, distribute, and operate software components. Cloud-native applications use containers — and Kubernetes as described below — as a fundamental building block and as the basic unit of building and managing application components.
- Microservices: a microservices style architecture decomposes a system into independent services where each service is elastic, resilient, composable, minimal and complete. Microservices share a number of attributes with cloud native systems, as microservices-style architectures are the first major software architectural paradigm to emerge in the age of cloud computing and DevOps.
- Service Meshes: microservices-style architectures eliminated the monolithic middleware and centralized intelligence of service brokers found in SOA systems by pushing intelligence back to individual services. However, several common functions for service-to-service communications, such as management, observability and security, now need to be handled by a service. A service mesh addresses this problem, and provides distributed infrastructure to manage inter-service communications.
- Immutable Infrastructure: the basic concept of immutable infrastructure is to “replace, not repair,” and cloud computing enables this paradigm. A great way to understand it is Randy Bias’ Pets vs. Cattle analogy. What’s important for cloud native systems is to decouple applications from infrastructure, and treating infrastructure as immutable is a great way to achieve this.
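The declarative model described above can be sketched in a few lines of Python. This is a hypothetical, simplified reconciler (not Kubernetes code): the user declares only the desired replica count per service, and a control loop computes the actions needed to converge the actual state toward it.

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Compute actions that converge actual state toward desired state.

    The caller declares *what* it wants (replicas per service);
    the reconciler decides *how* to get there.
    """
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if want > have:
            actions.append(f"start {want - have} replica(s) of {service}")
        elif want < have:
            actions.append(f"stop {have - want} replica(s) of {service}")
    return actions


desired = {"web": 3, "worker": 2}   # declared intent
actual = {"web": 1, "worker": 4}    # observed state
plan = reconcile(desired, actual)
# → ['start 2 replica(s) of web', 'stop 2 replica(s) of worker']
```

Kubernetes controllers follow this same pattern: they run reconciliation loops continuously, so the system self-heals whenever observed state drifts from declared intent.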
Where Does Kubernetes Fit?
So far, we have not mentioned Kubernetes. So, where does Kubernetes fit and what does it enable for cloud native systems?
Not only does Kubernetes enable all of the technologies described above, but it also acts as the “control plane” for cloud native systems. To understand what that means, we need to first discuss a key architectural pattern and its application in scalable system design.
Layered Architectures and the Three Planes
A key to building scalable systems is decomposing the system into parts, such that each part packages relevant behavior and data, and provides abstractions that can be reused by other parts of the system — or by external systems. A common expression of this simple, yet powerful, concept is the layered architecture pattern. With this pattern, each layer of the system performs a well-defined role utilizing layers below it and providing new abstractions to the layers above it.
Some of the largest, most complex and most reliable systems built are in the domain of telecommunications and networking. In a telecommunications network, the layered system architecture is used to define three layers or “planes.” Each plane encapsulates different protocols and behaviors leading to a scalable and resilient system:
- Data Plane: the data plane provides functions and protocols to carry end-user data or traffic. In a telephony system, this can be a call path; in networking, it’s the network packets and flows;
- Control Plane: the control plane provides functions and protocols that coordinate processes in the data plane. For example, in telephony this is signalling and call processing, and in networking it’s the routing and forwarding functions;
- Management Plane: the management plane provides administrative functions to configure and operate all functions, and devices, in the control and data planes. In telephony the acronym FCAPS (Fault, Configuration, Accounting, Performance, and Security) is used to describe the collection of management functions required for system operations.
Now that we understand the three planes, let’s apply these to cloud native systems. The following table summarizes each system plane, and shows their corresponding functions in a cloud native system:
| Plane | Telephony | Networking | Cloud Native Systems |
|---|---|---|---|
| Data plane | Call flows | Data packets | Container runtimes |
| Control plane | Call processing | Routing & forwarding | Container orchestrators |
| Management plane | FCAPS (Fault, Configuration, Accounting, Performance, Security) | FCAPS | Cloud native management |
The data plane in a cloud native system consists of the container runtimes, which leverage compute, network and storage from the underlying infrastructure. The control plane of a cloud native system is a container orchestration and management system, like Kubernetes, along with other application control functions such as a service mesh.
As in the other domains, the management plane brings everything together by integrating functions like fault management (alerting), configuration management (provisioning), accounting (billing and metering), performance management (metrics and monitoring) and security management.
While it’s certainly possible to manage switches and routers individually via CLIs and local interfaces, it’s not practical or cost-effective to manage any sizable deployment in that manner. As with any new technology, early adopters tend to roll their own management plane solutions. However, as the technology matures so do integrated tools and systems that help provide comprehensive management functions.
In the next section, we will look at the key attributes and functions of cloud native management.
Cloud Native Management
Cloud native management is the management plane for cloud native systems. It provides integrated solutions that make it easy to manage the entire cloud native stack.
Let’s discuss the key attributes and functions of cloud native management.
Key Attributes of Cloud Native Management
- Cloud native: perhaps it’s obvious, but a cloud native management solution needs to be built using cloud native principles. This means that cloud native management has to be scalable, resilient, manageable and observable, and built using technologies such as declarative APIs, containers and microservices-style architectures;
- Composable: in 2013, Jonathan Murray introduced the concept of the Composable Enterprise. Murray proposes that, for successful digital transformation, enterprise IT systems should be built from parts that can be easily replaced without impacting the whole. Similarly, cloud native management has to be composable. It should provide built-in functions where needed, but allow all major functions to be replaced and customized as needed;
- Infrastructure-Agnostic: to achieve resiliency, efficiencies, and scale, applications must be decoupled from infrastructure. Cloud native management needs to be infrastructure and cloud-agnostic. A single management plane should be able to manage clusters across public clouds, private clouds and data centers, and edge computing deployments.
Key Functions of Cloud Native Management
Now that we have defined what cloud native management is, let’s describe what it must do:
- Kubernetes Cluster Operations: Cloud native management solutions must be able to install and operate Kubernetes clusters, as well as provide common management functions for externally managed clusters (e.g. those delivered by cloud providers);
- Kubernetes Workload Operations: Cloud native management solutions must allow users to model, deploy, and manage Kubernetes workloads. The workloads may be cluster services used across applications, or end-user Kubernetes applications;
- Multi-Layer Monitoring: Cloud native management systems must collect, correlate, and aggregate metrics from each layer of the cloud native stack. This includes container hosts, clusters, and workloads. A federated metrics data collection pipeline is used to provide current data as well as support for downsampling and long term storage of historical data;
- Integrated Alarms and Notifications: Cloud native management solutions must support configurable alarms (alerts) that can be created from any metric, state or condition in the system. An important requirement is to be able to correlate the alarm to applications, environments, and end user SLAs, so that the appropriate severity can be assigned;
- Log Management: Cloud native management systems must enable log collection and aggregation, with provisions to send logging data to central logging repositories. Log data must be enriched at source, to allow for segmented access;
- Audit Trails: Cloud native management systems must record all changes made to the management, control, and data planes with identification of the entity or role performing the changes. An important feature is to be able to trace a sequence of changes triggered across system components, including system actions, so that any change can be correlated back to a user action;
- Federated Change Management: Cloud native management systems must allow managing changes across clusters. This includes application container image changes, as well as resource manifest configuration changes. Cloud Native Management systems should be non-opinionated about push or pull models for change management, but provide building blocks that allow different teams to decide what works best for them;
- Federated Identity and Access Management: Cloud native management must enable federated access control and authorization across clusters. This means that all cluster access must be tied back to a single identity management system, so that a single update can be used to terminate access for a role or user;
- Policy-based Governance: Cloud native management must provide policies to validate and transform (mutate) configurations. The policies should be enforceable on a granular basis, using familiar entities such as applications and runtime environments;
- Security and Control Plane Integrations: Cloud native management systems must integrate well with data and control plane security and control solutions. For containerized applications, this includes integrations with image scanning and provenance systems, compliance management systems, secrets management systems, as well as firewalls and network policy enforcement systems. In addition to security, Cloud Native Management systems must provide the ability to manage inter-service communications via service meshes;
- Operational Insights and Best Practices: Cloud native management systems must check for operational best practices and provide actionable recommendations. Ideally, these best practices are configurable and extensible, so that operators can customize them to their needs.
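The policy-based governance function above can be illustrated with a small sketch. This is hypothetical code, not the API of any real policy engine: a `validate` function reports violations in a resource manifest, and a `mutate` function fills in default labels, mirroring the validate/transform split described in the list.

```python
def validate(resource: dict, required_labels: set) -> list:
    """Return a list of policy violations for a resource manifest."""
    violations = []
    labels = resource.get("metadata", {}).get("labels", {})
    # Validation rule 1: required labels must be present.
    for label in sorted(required_labels - set(labels)):
        violations.append(f"missing required label: {label}")
    # Validation rule 2: disallow mutable ':latest' image tags.
    for container in resource.get("spec", {}).get("containers", []):
        if container.get("image", "").endswith(":latest"):
            violations.append(
                f"container '{container['name']}' uses ':latest' tag")
    return violations


def mutate(resource: dict, default_labels: dict) -> dict:
    """Fill in default labels without overwriting user-supplied values."""
    labels = resource.setdefault("metadata", {}).setdefault("labels", {})
    for key, value in default_labels.items():
        labels.setdefault(key, value)
    return resource


pod = {
    "metadata": {"labels": {"app": "web"}},
    "spec": {"containers": [{"name": "web", "image": "nginx:latest"}]},
}
print(validate(pod, {"app", "env"}))
# → ["missing required label: env", "container 'web' uses ':latest' tag"]
```

In practice these rules run server-side — for example as Kubernetes admission controllers — so every configuration change is checked or transformed before it reaches the cluster, rather than relying on each team to run the checks themselves.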
You can download a comprehensive evaluator’s guide and checklist on the Nirmata site.
Software ate the world, and the ability to build, deploy and operate software has become critical to the success of all businesses. As applications are migrated to the cloud, and as new cloud native applications are built, containers have become the standard packaging and runtime for cloud native applications. Kubernetes, and technologies such as service meshes, have emerged to provide control and management for microservices-style applications packaged and deployed in containers.
Mission-critical systems such as those in the domains of telephony and networking are built using the layered architecture pattern, where the system is composed of a data plane, a control plane and a management plane. By applying these principles to cloud native systems, we can map container runtimes to the data plane and Kubernetes and service meshes, to the control plane of the system. However, a critical component that is necessary to operationalize cloud native systems is the management plane.
In this post, we defined the key attributes and functions of cloud native management, the management plane for cloud native systems. Cloud native management must be built using cloud native principles, and must be composable and infrastructure-agnostic. It must also provide integrated, multi-layer management functions spanning configuration, fault, metering, alerting and security.
The good news is that solutions for cloud native management are rapidly maturing within the CNCF ecosystem. And as enterprises embrace cloud native management, the full potential of cloud native systems and technologies like containers and Kubernetes will become available for mainstream adoption. It’s an exciting time to build!
The Cloud Native Computing Foundation is a sponsor of The New Stack.