
What Does the Open Application Model (OAM) and Rudr Mean for Kubernetes Developers?

How Microsoft's OAM and Rudr can ease the job for the developer working with Kubernetes.
Oct 21st, 2019 8:38am
Feature image via Unsplash.

Last week, Microsoft and Alibaba jointly launched the Open Application Model (OAM), a specification to define modern applications irrespective of where they are deployed. Rudr is an implementation of OAM from Microsoft targeting Kubernetes as the runtime environment.

I spent the weekend understanding the problem OAM attempts to solve. I also refactored some of my favorite microservices-based apps for Rudr. This article and the following tutorial will help an average Kubernetes user to understand the motivation behind OAM.

Let’s admit it — Kubernetes is a complex platform with many moving parts. Mapping and deploying a simple two-tier web application involves the creation of Storage Classes, Persistent Volume Claims, Persistent Volumes, Secrets, ConfigMaps, Services, Deployments, and Ingress. Production deployments will also need robust logging, monitoring, security, availability, and scalability which will lead us to StatefulSets, Network Policies, RBAC, Admission Controls, Horizontal Pod Autoscaling and more.

For developers and administrators transitioning from traditional IT environments, Kubernetes is overwhelming and intimidating. Even DevOps professionals familiar with containerization find Kubernetes a hard nut to crack.

One of the core design principles of Kubernetes is the loose coupling of objects. For example, a Service can exist independent of Pods. A Persistent Volume can be created without any consumers. An Ingress can be provisioned without any backends to serve the requests. Based on a set of labels, annotations, and selectors, the dots are connected at runtime. A Service will forward the request to one or more Pods that match the criteria. The same is the case with an Ingress routing the traffic to one of the Services.

Each object in Kubernetes is autonomous and completely independent. While this design makes Kubernetes extremely scalable, it has the side-effect of a lack of application context. An application in Kubernetes is a collection of autonomous objects that work in tandem. When translated into deployable artifacts, a simple two-tier web application may have over a dozen YAML files that contain the definition of each object belonging to the same application. Managing and maintaining these artifacts under a single context is the biggest challenge in dealing with Kubernetes.

Helm attempted to solve this through the notion of a Chart. But even with Helm, you tend to lose the context after deployment. After all, a Helm Chart is just an aggregation of the multiple Kubernetes object definitions that are needed for the application to work.

One of the other challenges with Kubernetes is the blurring of lines between the developers and operators. Developers need to know quite a bit about the runtime environment to effectively take advantage of the platform. They need to understand how ConfigMaps become visible to the containers packaged within a Pod. They need to know which part of the initializing code should be packaged as an Init Container. Operators are responsible for ensuring the right naming convention to make the service discovery work. They need to know the right environment variables that need to be passed to the Pod. Operators should decide whether a container should be deployed as a ReplicationController, DaemonSet, or a StatefulSet based on the characteristics of an application. They need to choose between a ClusterIP and NodePort while exposing a Deployment.

As you can see, developers are expected to be familiar with runtime decisions and operators should know the design aspects of the software.

OAM aims to solve these challenges through the following:

  • Bring application context to microservices deployments
  • Clean separation of concerns between Dev and Ops
  • Runtime-agnostic application modeling

At a high level, OAM is a specification to define a microservice, or a set of microservices that belong to an application, as a Component. Each Component will have one or more Workloads that may act as a server, a consumer, or a run-to-completion job. Each Workload may have an associated Configuration and Traits. Configuration translates to the parameters passed to a Workload, while Traits influence the runtime behavior of the Component. A collection of related Components belongs to a single Application.
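As a concrete illustration, here is how a developer might declare a stateless web service as a Component. This is a minimal sketch based on Rudr's v1alpha1 schema; the component name, image, and port are illustrative:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: web-frontend                           # illustrative Component name
spec:
  workloadType: core.oam.dev/v1alpha1.Server   # a long-running workload with a network endpoint
  containers:
    - name: web
      image: example/web-frontend:1.0          # hypothetical container image
      ports:
        - name: http
          containerPort: 3000
          protocol: TCP
  parameters:
    - name: message                            # exposed to operators, who set it via Configuration
      type: string
      required: false
```

Notice that everything environment-specific is deliberately left out of the Component and surfaced only as parameters, keeping the developer's artifact runtime-neutral.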

OAM’s core premise is that the job of a developer ends with building the container image from the source code, and an operator’s job starts from there. The Ops team is responsible for the configuration and deployment of a set of container images as a single application.

Components in OAM are designed to enable developers to declare, in infrastructure-neutral format, the operational characteristics of a discrete unit of execution. Components define the CPU, GPU, memory, and disk requirements along with the target OS and architecture.

Each Workload within a Component can be one of the following types:

  • Server — a long-running, scalable workload with a network endpoint
  • Singleton Server — a server that runs as exactly one instance
  • Worker — a long-running, scalable workload with no network endpoint
  • Singleton Worker — a worker that runs as exactly one instance
  • Task — a parallelizable, run-to-completion workload
  • Singleton Task — a run-to-completion workload that runs exactly once

The Configuration typically deals with the parameters passed to the Workload. For example, the database connection string sent to an application server Workload is defined in the Configuration.
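In Rudr's v1alpha1 schema, for example, an operator would supply that connection string through parameterValues in an ApplicationConfiguration. The application, component, and parameter names below are illustrative:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: todo-app                        # illustrative application name
spec:
  components:
    - componentName: app-server         # hypothetical Component declared by the developer
      instanceName: todo-app-server
      parameterValues:
        - name: database-connection     # parameter exposed by the Component
          value: "mongodb://db:27017/todos"   # set by Ops at deployment time
```

The developer only declares that a `database-connection` parameter exists; its value is an operational concern that never leaks into the Component definition.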

Traits define the runtime behavior of a Workload, and thus of the application. Rudr, the reference implementation of OAM, supports the following Traits:

  • Manual Scaler — manually sets the replica count of a Component instance
  • Autoscaler — scales a Component instance automatically based on resource consumption
  • Ingress — routes inbound HTTP traffic to a Component instance
  • Volume Mounter — attaches persistent storage to a Component instance

If we observe the Workload and Trait descriptions carefully, they can be easily mapped to Kubernetes. A Server is essentially a Deployment, while the Singleton Server is a Deployment with one replica. Both of them are associated with a ClusterIP or NodePort Service. A Worker and Singleton Worker are Pods with no corresponding Service. A Task is a parallelizable Kubernetes Job, while the Singleton Task is a single run-to-completion Job.

Similarly, Traits map to Kubernetes Horizontal Pod Autoscaler, Ingress, Deployments, and Persistent Volume Claims.
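For instance, attaching Rudr's ingress trait to a Component instance causes a Kubernetes Ingress to be created for it. This is a sketch of the relevant fragment of an ApplicationConfiguration; the hostname, path, and port are illustrative:

```yaml
# Fragment of the components list in an ApplicationConfiguration
- componentName: web-frontend         # hypothetical Component
  instanceName: web-frontend-prod
  traits:
    - name: ingress                   # Rudr materializes this as a Kubernetes Ingress
      properties:
        hostname: example.com
        path: /
        servicePort: 3000
```

The operator attaches, tunes, or removes Traits without touching the Component definition, which is the separation of concerns OAM is after.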

So, with OAM and Rudr, developers commit code and build container images that translate to Workloads. Operators assemble the Workloads into Components and define Configuration and Traits for them.

Technically, OAM can target Virtual Machines (IaaS), platforms (PaaS), and container management platforms (CaaS) with a single spec. Each building block of OAM can be mapped to the respective environment. The YAML files containing OAM definitions can be deployed in any environment without any modification.

In the next post in this series, I will walk you through an end-to-end tutorial of Rudr where I show the workflow involved in deploying Components, Configuration, and Traits for a Node.js web application. Stay tuned.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar.
