
The Flywheel Effect of Kubernetes APIs

A look at new areas where Kubernetes-like declarative APIs and complementary developer tools like Dapr are gaining popularity and becoming mainstream.
Nov 22nd, 2022 10:00am by

Kubernetes is the de facto standard for keeping applications at their desired state, regardless of the languages they are written in or the business domains they serve. But Kubernetes adoption is far from over, and its flywheel effect is only accelerating and spreading into new frontiers.

Kubernetes is turning into the unifying declarative API and reconciliation mechanism not only for managing on-cluster applications, but also off-cluster remote resources and multicloud deployments. Inspired by Kubernetes and built on the same cloud native principles of polyglotism, heterogeneous applications, and multicloud, the Dapr project complements Kubernetes by improving developers’ productivity the same way Kubernetes improves operation teams’ productivity.

In this article, we will explore the new areas where Kubernetes-like declarative APIs and complementary developer tools such as Dapr are gaining popularity and becoming mainstream.

The Container Management API

Microservices deployments are what made Kubernetes what it is today. Kubernetes introduced a set of APIs and guarantees (some explicit, some implicit) that serve as common axioms for developers creating distributed applications at a high pace and for operations teams keeping those applications running at high scale.

These axioms serve as a premise for reasoning about how an application is deployed, run on any infrastructure, and kept at the desired state. What makes up this contract? It is a combination of application- and platform-related life cycle APIs and guarantees, such as:

  • Container-based application packaging with resource constraints
  • Health checks and application life-cycle policies
  • Easy application scaling for stateless, stateful, or job-based workloads
  • Declarative deployment strategies for fail-safe rollouts and rollbacks
  • Policy- and dependency-based automatic application placements, etc.

In essence, Kubernetes accelerated distributed application architecture adoption by providing a standard interface between the application and the underlying infrastructure, hiding away the shape of the infrastructure and making its unreliability inconsequential to the application.

Kubernetes APIs as infrastructure abstraction

While this enhanced the ability of operations teams to manage highly distributed applications at scale, it didn’t offer the same gains to the developers creating such applications. In fact, distributed architectures increased the accidental complexity developers have to deal with instead of letting them focus on implementing the application business logic. Kubernetes alone doesn’t offer a uniform declarative API and a portable implementation that can hide the complexity and the fallacies of distributed networking. That is an area left to developers to address and to other projects to fill.

What we describe in this section is not new, but is an important realization that will help us see where else the Kubernetes API is spreading and where it is lagging behind. Let’s discuss the next area of concern.

The Third-Party Service Management API

The Kubernetes API provides a standard interface for managing compute, networking, and storage on any public or private cloud. But cloud computing is more than raw infrastructure, and today large cloud providers and smaller specialized SaaS providers offer higher-level services such as databases, caches, key-value stores, file buckets, message queues, stream processors, and more.

These vendor-specific services are hosted on remote cloud networks and accessed through specialized APIs. This is an attractive offering because such services are provisioned quickly, scale easily, and update instantly. This model reduces the time spent on installation and configuration, and its rental multitenant nature offers lower cost compared to the traditional model. These advantages make third-party services a mandatory dependency for many applications running on Kubernetes.

One challenge of using third-party services is that their life-cycle management varies among vendors. Different cloud vendors have different control plane APIs for the provisioning and management of their services. Even if you are provisioning the same type of service running the exact same software (such as a PostgreSQL server version X), the APIs and semantics of Amazon Web Services (AWS), Google Cloud Platform (GCP), Azure, and others will vary.

Not only that, but the provisioning, governing, accessing, and ongoing management of these services will vary significantly from the way your applications are managed on Kubernetes, leading to more complexity and duplication in tools, practices, and effort.

What if these higher-level services from different underlying providers could be delivered through the same Kubernetes APIs and with the same level of consistency offered to containers running on Kubernetes itself? What if it could be done with the same Kubernetes declarative approach and control loop mechanism? This would allow the reuse of Kubernetes semantics, APIs, tools, and practices for managing external resources as if they were containers on Kubernetes. And that is exactly what is happening in front of our eyes with projects such as:

These projects use Kubernetes custom resource definitions (CRDs) as an emerging uniform standard for describing cloud services and the reconciliation pattern for the governance of these services. This allows operations to manage all external resources consistently with the applications running on Kubernetes and expose service endpoints and secrets using Kubernetes resources.
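As an illustration, a Crossplane-style claim can describe a managed PostgreSQL instance as an ordinary Kubernetes resource. The API group, kind, and parameter fields below are hypothetical; in practice a platform team defines them through CRDs and compositions installed in the cluster:

```yaml
# Hypothetical composite resource claim; the API group and parameter
# fields are placeholders defined by the platform team's compositions.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: orders-db
spec:
  parameters:
    version: "14"
    storageGB: 20
  compositionSelector:
    matchLabels:
      provider: aws              # swap to another provider without changing the claim
  writeConnectionSecretToRef:
    name: orders-db-conn         # endpoint and credentials exposed as a Kubernetes Secret
```

The reconciliation loop provisions the remote database and keeps it at the declared state, while the connection details surface as a regular Secret that on-cluster workloads can mount.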

Kubernetes APIs for third-party service orchestration

While this approach enables operations teams to benefit from the rich Kubernetes ecosystem of tools and practices, it doesn’t offer the same benefits to developers who have to use these services. Developers still have to use libraries and frameworks with varying quality and language semantics to interact with the myriad of third-party services. It is left to developers to patch and upgrade their applications when a third-party library changes.

It is left to tech leads to pursue consistency, reuse, and best practices when interacting with similar third-party services from different providers. The projects listed here will help operations teams to manage these external services, but not the developers who have to use the services. Not a great cloud native developer experience, is it?

The Multicluster Management API

Having multiple cloud deployments, whether public or private, is becoming more and more common across enterprises. Sometimes this is deliberate, driven by cloud services available only from a particular provider or by scale and isolation needs; sometimes it is accidental, the result of acquisitions or shadow IT. Regardless of the reasons, today multicloud complexity is the ugly reality many organizations have to live with.

Similar to multicloud, running multiple Kubernetes clusters is not out of the ordinary, either. A big driver for a multicluster Kubernetes architecture is the need for workload isolation. Teams from within the same organization can require space for experimenting with CRDs and operators.

Kubernetes’s namespace-based isolation mechanism is insufficient in these scenarios, especially with the popularization of CRDs and operators, which are cluster-scoped entities and require cluster-level access. All of these are reasons for provisioning new Kubernetes clusters, and before long the operations team finds itself juggling multiple of them.

There are a number of emerging projects that use the Kubernetes API for managing multiple other Kubernetes clusters and application workloads. A few better-known projects and managed services are:

These projects vary in the cluster types they can manage and in what you can do with them. But generally, they offer end-to-end visibility and control for managing multiple Kubernetes clusters. That typically includes deploying applications and ensuring security and compliance across data centers, at the edge, and in multicloud environments.

Kubernetes APIs as multicluster abstraction

This allows operations to manage multiple Kubernetes clusters from a single pane of glass, almost as if they were a single Kubernetes cluster. That means the same application can be deployed not only to multiple zones within the same Kubernetes cluster, but also to multiple Kubernetes clusters on different regions and even on different cloud providers. This can be driven by the global deployment needs of the application, for migration and modernization reasons, or time-critical disaster recovery procedures.
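As a sketch of this model, a Karmada-style propagation policy declares which member clusters a Deployment should land on, using the same declarative resource style as Kubernetes itself (the cluster names below are placeholders):

```yaml
# Hypothetical example; cluster names are placeholders for registered
# member clusters in different regions or clouds.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: myapp-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: myapp
  placement:
    clusterAffinity:
      clusterNames:
        - cluster-us-east
        - cluster-eu-west
```

The control plane then reconciles the Deployment onto each selected cluster, so a region failover becomes a change to the placement declaration rather than a redeployment procedure.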

For this to work, though, Kubernetes alone is not enough. The application itself also has to be written with multicloud in mind. If the application is implemented and packaged with the client libraries of one cloud provider, running it on another cloud provider can limit its portability.

If you have coupled your business logic with the low-level features of a cloud service, it might require a significant rewrite before you can run the application on another cloud. In essence, to create portable multicloud and multicluster applications and to reuse tools, practices, and patterns across clouds, you need to apply the same cloud native operational principle at development time, too. That is, use cloud native tools and automation that abstract away any language and third-party application dependencies and tools.

The Empty Space in Between

The above-mentioned Kubernetes-based API trends bring consistency and reuse of operational tools and practices, boosting the effectiveness of operations teams. But they do not address developers’ needs at the same level. For example:

  • Kubernetes helps operations teams keep a large number of containers running, but it doesn’t help developers implement reliable service interactions using their language and framework of choice.
  • Kubernetes helps operations provision and manage third-party resources uniformly, but it doesn’t help developers, who use a myriad of different libraries in different languages, consume those third-party services in a uniform fashion.
  • Kubernetes helps deploy containers in multiple clusters, but it doesn’t help the creation of portable multicloud applications that are independent of cloud services semantics and integration patterns.

There is an empty space between an application’s business logic and the other systems it interacts with, and it is left to developers to fill. It’s often filled with pre–cloud native technologies available only to a few programming languages or frameworks that get embedded within applications.

Combining such pre–cloud native technologies focused on a single language or single cloud with Kubernetes-like technologies built on the principles of language and cloud portability, declarative APIs, pluggability, and reuse leads to technology impedance mismatch and unnecessary development effort.

Similar to the way operations teams benefit from Kubernetes, we need declarative capabilities, reusable patterns, and portable implementations to help developers create distributed applications the cloud native way.

The Dapr project is influenced by the portability of containers and the declarative Kubernetes API. It offers portable, polyglot, API-driven distributed system primitives such as third-party services connectors, portable resiliency policies, and polyglot observability capabilities for application creators.

Dapr APIs help implement distributed applications; Kubernetes APIs help operate them

Dapr makes it easier for developers to build resilient, stateless, and stateful distributed applications by allowing them to focus on writing business logic and not solving distributed system challenges. It improves developers’ productivity the same way Kubernetes improves operation teams’ productivity.
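For example, a Dapr state store component declares the backing service declaratively (Redis here; the host and component name are placeholders), and the application talks to it through Dapr’s portable API rather than a provider-specific client library:

```yaml
# Hypothetical Dapr component; the Redis host and component name
# are placeholders for your environment.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: redis-master.default.svc.cluster.local:6379
```

The application then saves and reads state with plain HTTP calls to its local Dapr sidecar (for example, `POST http://localhost:3500/v1.0/state/statestore`), so swapping Redis for a cloud-managed store changes only the component definition, not the application code.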

Dapr is for creating applications what Kubernetes is for operating applications. It is inspired by Kubernetes, but with the mission to fill the empty space left by Kubernetes. If you’d like to learn more about Dapr and see how we operate Dapr reliably and securely on your Kubernetes cluster, book a free trial for the Diagrid Conductor service.

With software stacks getting deeper and the pace of change increasing, the era of big code is upon us. We need new paradigms, patterns, and tools to help operations and developer teams grasp this increasing complexity.

Kubernetes’s declarative API and control loop mechanism started a new wave of thinking about how we reason about and manage distributed systems wherever they are: on-cluster, off-cluster, or across multiple clusters. On this journey, the Kubernetes flywheel has outgrown containers. Kubernetes is no longer just a container orchestrator. Kubernetes is a global resource management API, whether those resources are local containers, remote clusters, or third-party services.
