
Kubernetes Is Not Psychic: Distributed Stateful Workloads 

Replacing interchangeable stateless replicas with data distributed across individual nodes in a cluster lets stateful workloads run risk-free.
Apr 26th, 2023 6:06am

If your business revolves around transactions of any sort, you may feel like the rest of the business world has fully relocated to the cloud and left you behind.

Relational databases are crucial to every form of modern commerce, from shopping to financial services to streaming entertainment — yet they remain artifacts of an earlier technological era. The power of the relational database is its utter simplicity: rows and tables define an architecture that has changed little since SQL emerged in the 1970s. But with that simplicity also comes great responsibility: keeping the data in those rows and tables consistent, isolated and durable.

Traditional SQL databases are solid workhorses, but they are also fundamentally stateful. Running stateful workloads in distributed applications, however, has proved a serious challenge, which is why transactional databases have been slow to join the cloud native party.

Not Pets. Not Cattle.

A relational database must guarantee data validity despite cloud provider outages, power failures and every other disaster imaginable. Its fundamental job is maintaining state throughout the entire life cycle of a workload.

Traditional SQL databases were designed to run on a single physical server or, at most, a limited number of servers. This “pets” scenario is ideal for maintaining state. However, cloud native applications are distributed by design across a “cattle herd” of virtual servers, an ephemeral environment of containers composed of networks of stateless nodes, pods and clusters that are spun up (and back down) in response to workload demand. These are allowed to expire when no longer needed or sometimes they simply fail; either way, they are quickly replaced. But stateful workloads are not cattle-friendly. Relational databases in particular must have durable and persistent storage to guarantee data consistency and availability.

Kubernetes Does a Lot of Things Well, but Persistent Storage Is Not One of Them. 

Kubernetes does not provide built-in support for ensuring that data survives when a pod or node fails or is restarted. While Kubernetes provides mechanisms for attaching storage volumes to containers, managing and maintaining persistent storage in a distributed environment is not easy.

This is because the platform itself is designed to manage containerized applications, not to take on primary storage duties. Kubernetes' own native storage options, such as local storage, hostPath volumes and emptyDir volumes, are ephemeral, so they are not suited to maintaining state. In the likely event of a node failure, this can lead to data loss or inconsistencies, compromising the integrity of the database.
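To make that concrete, here is a minimal sketch in Go using the Kubernetes API types: a pod whose only storage is an emptyDir volume. The pod name, container image and mount path are hypothetical; the point is that anything written to this volume disappears along with the pod.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose only storage is an emptyDir volume. The volume lives and
	// dies with the pod: if the pod is evicted or its node fails, the data
	// is gone, which is why this is unsuitable for a relational database.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "scratch-db"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "db",
				Image: "postgres:15", // hypothetical image choice
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "scratch",
					MountPath: "/var/lib/postgresql/data",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// emptyDir is ephemeral by design: fine for caches and
				// scratch space, not for durable state.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}

	fmt.Printf("pod %q mounts ephemeral volume %q\n",
		pod.Name, pod.Spec.Volumes[0].Name)
}
```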

Kubernetes’ ephemeral nature is not the only thing that makes running stateful workloads problematic. Scaling workloads up (and back down) in response to demand is also a problem. Scaling a stateless workload is straightforward: you just add more replicas. However, with stateful workloads, scaling requires more complex operations like adding or removing nodes from the cluster, rebalancing workloads and ensuring data consistency across nodes.
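For contrast, here is a hedged sketch of what "just add more replicas" looks like for a stateless workload, using client-go's Scale subresource against a hypothetical Deployment named "web" in the "default" namespace. There is no comparable one-liner for a stateful database, which would also need node rebalancing and consistency checks.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default home path (assumes running outside
	// the cluster; in-cluster config would work similarly).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// Read the current scale of a hypothetical stateless deployment...
	scale, err := client.AppsV1().Deployments("default").GetScale(ctx, "web", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ...and add two more interchangeable replicas. No data has to move.
	scale.Spec.Replicas += 2
	_, err = client.AppsV1().Deployments("default").UpdateScale(ctx, "web", scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("scaled web to", scale.Spec.Replicas, "replicas")
}
```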

Kubernetes Is Not Psychic

Kubernetes is essentially an engine for generating and orchestrating interchangeable replicas. That simply does not work for stateful workloads like database transactions, where every write produces a unique state that must be preserved.

Because of this, distributed stateful workloads must be tightly coordinated with other nodes running the same application to ensure the atomicity, consistency, isolation and durability (ACID) of data and transactions. Such guarantees are difficult to deliver in Kubernetes. Implementing them in a distributed application can necessitate manual workarounds and sharding, introducing management complexity, not to mention being a major party pooper when it comes time to change or update the application.

Beyond complexity, this will very likely also introduce latency. If a node or cluster fails, it takes time for Kubernetes to promote a failover node or cluster to the lead role and to keep the data feeding your applications accurate. This becomes particularly noticeable when it comes to services like backup and recovery.

Finally, Kubernetes is not psychic. It cannot detect whether an environment uses a single database instance, a leader/follower database cluster or a shared leader configuration. This means building manual scripts to instruct Kubernetes how to intervene between your database and the rest of the application, or else sourcing and integrating third-party tools to do it for you. That leaves a choice between more work for you up front plus ongoing maintenance, or increased expenditure and integration complexity (and, again, ongoing maintenance). Either way, you end up adding even more complexity. What's an application architect to do?

Distribute Your Data(Base)

The challenge, then, is how to achieve data consistency and availability for stateful distributed applications (and databases) in a Kubernetes environment where the longevity of nodes and pods cannot be guaranteed. And further, to achieve this without tying containers to specific data stores, a move that kills the whole concept of portability.

The answer is, don’t replicate your data — distribute it! Use a single logical database that, itself, is built on distributed architecture — aka a distributed SQL database.

A distributed SQL database built atop Kubernetes is custom-architected to handle stateful distributed workloads. It's the same familiar SQL, but now able to store data on individual nodes in a cluster. This means data can be held in different zones for availability. These nodes can receive and coordinate read and write requests among themselves without generating conflicts, thus ensuring ACID-compliant distributed transactions.

In a true distributed SQL database, all nodes are programmed to agree on the state of the data. Achieving this takes a consensus-based replication protocol at its core, such as the Raft consensus protocol. Such a protocol can deliver ACID-like guarantees because it ensures data is consistent regardless of which node your stateful application talks to.

Raft works because it ensures that a quorum of the replicas, a strict majority, agrees on any change before that change is committed. If, for example, your data is replicated across three nodes, then a quorum of two is required to guarantee the accuracy of your data. The protocol designates a leader node that coordinates all writes in its group and that is replaced should it fail.
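As a back-of-the-envelope illustration (not any particular database's implementation), the quorum rule boils down to "a strict majority of replicas must acknowledge a write before it commits":

```go
package main

import "fmt"

// quorum returns the number of acknowledgements a Raft leader needs before a
// write can be committed: a strict majority of the replica group.
func quorum(replicas int) int {
	return replicas/2 + 1
}

// committed reports whether a proposed write has been acknowledged by enough
// replicas to survive the failure of a minority of nodes.
func committed(acks, replicas int) bool {
	return acks >= quorum(replicas)
}

func main() {
	// With three replicas, two acknowledgements are enough; the third node
	// can lag or fail without blocking writes or losing committed data.
	fmt.Println(quorum(3))       // 2
	fmt.Println(committed(2, 3)) // true
	fmt.Println(committed(2, 5)) // false: five replicas need three acks
}
```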

The final key ingredient is an efficient key-value storage engine that's capable of working with the consensus protocol for data reads and writes. What does "efficient" look like in this scenario? It should come with features such as fast bulk data loading and ingestion, regular garbage collection to keep the size of data on disk in check, and the ability to take advantage of key features of the SQL standard, such as querying historical data.
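To illustrate that last point, here is a toy, in-memory sketch (hypothetical names, no persistence or garbage collection) of the multiversion idea that lets a storage engine answer "as of" historical reads:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// versioned is a toy MVCC key-value store: every write is kept with its
// timestamp, so reads can be served "as of" an earlier point in time, the
// kind of historical query a distributed SQL engine layers on top of its
// storage engine. Real engines persist to disk and garbage-collect old
// versions; this sketch does neither.
type versioned struct {
	data map[string][]entry
}

type entry struct {
	ts    time.Time
	value string
}

func (v *versioned) put(key, value string) {
	v.data[key] = append(v.data[key], entry{ts: time.Now(), value: value})
}

// getAsOf returns the newest value written at or before ts.
func (v *versioned) getAsOf(key string, ts time.Time) (string, bool) {
	versions := v.data[key]
	// Versions are appended in time order, so find the last one not after ts.
	i := sort.Search(len(versions), func(i int) bool { return versions[i].ts.After(ts) })
	if i == 0 {
		return "", false
	}
	return versions[i-1].value, true
}

func main() {
	store := &versioned{data: map[string][]entry{}}
	store.put("balance/alice", "100")
	checkpoint := time.Now()
	store.put("balance/alice", "40")

	// A current read sees the latest write; a historical read sees the
	// value as of the earlier checkpoint.
	now, _ := store.getAsOf("balance/alice", time.Now())
	then, _ := store.getAsOf("balance/alice", checkpoint)
	fmt.Println(now, then) // 40 100
}
```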

Conclusion

Kubernetes is a powerful platform for managing containerized workloads, but for a long time it was not the best choice for running stateful ones. However, by rethinking data placement and replacing an army of interchangeable stateless replicas with data distributed across individual nodes in a cluster, you can run stateful workloads risk-free.

In other words, thanks to distributed SQL, relational databases can at long last join the rest of the stack at the application party in the cloud. Pass the chips and dip!
