How to Map Application Layers to Cloud-Native Workloads

Cloud-native applications are composed of various logical layers, grouped according to functionality and deployment patterns. Each layer runs specific microservices designed to perform a fine-grained task. Some of these microservices are stateless, while others are stateful and durable. Certain parts of the application may run as batch processes. Code snippets may be deployed as functions that respond to events and alerts.
The depiction here attempts to identify the layers of a cloud-native application. Though grouped together for illustration, each layer is independent. Unlike traditional three-tier applications, which are stacked in a hierarchy, cloud-native applications operate in a flat structure in which each service exposes an API.

FIG 2.1: Today’s modern application architectures bridge with monolithic legacy systems.
The scalable layer runs stateless services that expose the API and the user experience. This layer can dynamically expand and shrink depending on usage at runtime. During a scale-out operation, in which more instances of a service are run, the underlying infrastructure may also scale out to match the CPU and memory requirements. An autoscale policy evaluates when to perform scale-in and scale-out operations.
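The autoscale decision described above can be sketched as a small function. This is a minimal illustration, not a real controller; the thresholds and replica bounds are hypothetical, but the proportional rule mirrors the one used by Kubernetes' Horizontal Pod Autoscaler (desired replicas scale with observed/target utilization).

```python
import math

def desired_replicas(current_replicas, cpu_percent, target_percent=60,
                     min_replicas=2, max_replicas=10):
    """Return the replica count an autoscale policy might request.

    Proportional rule: replicas grow with observed utilization relative
    to the target, clamped to the configured min/max bounds.
    """
    if cpu_percent <= 0:
        return min_replicas
    proposed = math.ceil(current_replicas * cpu_percent / target_percent)
    return max(min_replicas, min(max_replicas, proposed))
```

When utilization exceeds the target, the policy proposes a scale-out; when it falls well below, a scale-in, never dropping under the minimum replica count that keeps the service available.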
The durable layer has stateful services that are backed by polyglot persistence. It is polyglot because of the variety of databases that may be used for persistence. Stateful services rely on traditional relational databases, NoSQL databases, graph databases and object storage. Each service chooses the datastore best aligned with the structure of its data. These stateful services expose high-level APIs that are consumed by services in both the scalable and durable layers.
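The datastore-per-data-shape idea can be made concrete with a simple lookup. The store names and shape categories below are illustrative choices, not prescriptions from the text:

```python
# Illustrative mapping of data shape to a suitable datastore; the
# concrete products named here are examples, not recommendations.
DATASTORE_BY_SHAPE = {
    "relational": "PostgreSQL",   # transactional, schema-bound records
    "document": "MongoDB",        # semi-structured JSON documents
    "graph": "Neo4j",             # highly connected entities
    "blob": "object storage",     # images, videos, large binaries
}

def choose_datastore(data_shape):
    """Return an appropriate store for the given data shape."""
    try:
        return DATASTORE_BY_SHAPE[data_shape]
    except KeyError:
        raise ValueError(f"unknown data shape: {data_shape}")
```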
Apart from the stateless and stateful layers, there are scheduled jobs, batch jobs and parallel jobs, which are classified as the parallelizable layer. For example, a scheduled job may run an extract, transform, load (ETL) task once per day to extract metadata from the data stored in object storage and populate a collection in the NoSQL database. For services that rely on scientific computing, such as machine learning training, the calculations are run in parallel. These jobs interface with the GPUs exposed by the underlying infrastructure.
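The ETL pattern above can be sketched in a few lines. This is a self-contained mock: the in-memory `object_store` dictionary and `metadata_collection` list stand in for real object storage and a NoSQL collection, and the per-object extraction runs in parallel because each item is independent.

```python
from concurrent.futures import ThreadPoolExecutor

# Mock stand-ins for object storage and a NoSQL collection.
object_store = {
    "reports/q1.pdf": b"%PDF-1.7 ...",
    "images/logo.png": b"\x89PNG ...",
}
metadata_collection = []

def extract_metadata(item):
    """Extract simple metadata from one stored object."""
    key, blob = item
    return {"key": key,
            "size_bytes": len(blob),
            "extension": key.rsplit(".", 1)[-1]}

def run_daily_etl():
    # Extraction is independent per object, so it parallelizes cleanly.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for doc in pool.map(extract_metadata, object_store.items()):
            metadata_collection.append(doc)

run_daily_etl()
```

In production, the same function bodies would call the object store and database clients, and a scheduler (such as a Kubernetes CronJob) would trigger `run_daily_etl` once per day.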
To trigger actions resulting from events and alerts raised by any service in the platform, cloud-native applications may use a set of code snippets deployed in the event-driven layer. Unlike other services, the code running in this layer is not packaged as a container. Instead, functions written in languages such as Node.js and Python are deployed directly. This layer hosts stateless functions that are event-driven.
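A function in the event-driven layer typically receives an event payload and reacts to it. The sketch below uses a generic `(event, context)` handler signature similar to what FaaS platforms pass to functions; the event fields and action names are hypothetical.

```python
def handle_alert(event, context=None):
    """Stateless, event-driven function reacting to a platform alert.

    `event` is a dict-shaped payload; `context` is unused here but kept
    to match the common FaaS handler signature.
    """
    severity = event.get("severity", "info")
    service = event.get("service")
    if severity == "critical":
        # Hypothetical action: escalate to the on-call engineer.
        return {"action": "page-oncall", "service": service}
    return {"action": "log-only", "service": service}
```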
Cloud-native applications also interoperate with existing applications at the legacy layer. Legacy monolithic applications, such as enterprise resource planning, customer relationship management, supply chain management, human resources and internal line-of-business applications, are accessed by cloud-native services.
Enterprises will embrace microservices for building API layers and user interface (UI) frontends that interoperate with existing applications. In this scenario, microservices augment and extend the functionality of existing applications. For example, a microservice may talk to the relational database powering a line-of-business application while delivering an elastic frontend deployed as a microservice.
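That augment-and-extend pattern can be sketched as an API-layer handler reading from the legacy database without modifying it. Here `sqlite3` stands in for the production relational database, and the `customers` schema is invented for illustration.

```python
import sqlite3

def legacy_db():
    """Mock of the relational database behind a legacy CRM."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
    return conn

def get_customer(conn, customer_id):
    """Microservice API handler: read legacy data, add new fields,
    leave the legacy application untouched."""
    row = conn.execute(
        "SELECT id, name FROM customers WHERE id = ?",
        (customer_id,)).fetchone()
    if row is None:
        return None
    # Augment the legacy record with metadata the new frontend needs.
    return {"id": row[0], "name": row[1], "source": "legacy-crm"}
```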
Next: Mapping Workloads to Kubernetes Primitives
Each service of a cloud-native application exposes a well-defined API that is consumed by other services. For internal service-to-service communication, protocols such as gRPC or NATS are preferred for their efficient binary serialization and low overhead. REST is typically used to expose services that interact with the external world.
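The size advantage of binary protocols can be seen by encoding the same record both ways. The `struct` layout below stands in for a real protobuf/gRPC message definition, and the field names are invented for illustration:

```python
import json
import struct

record = {"user_id": 42, "balance_cents": 1999}

# Text-based REST payload: field names and punctuation travel on the wire.
json_bytes = json.dumps(record).encode("utf-8")

# Packed binary layout: two unsigned 32-bit integers, 8 bytes total
# (a stand-in for a protobuf-encoded gRPC message).
binary_bytes = struct.pack("<II", record["user_id"], record["balance_cents"])

assert len(binary_bytes) < len(json_bytes)
```

The gap widens with larger messages, which is one reason binary protocols are favored for chatty internal traffic while REST's self-describing payloads remain convenient at the external boundary.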
DevOps teams map these deployment and communication patterns to the primitives exposed by cloud-native platforms such as Kubernetes. They are expected to package, deploy and manage these services in a production environment. The next article in this series helps with aligning and mapping these workload patterns to Kubernetes primitives.