
Stateful Workloads on Kubernetes with Container Attached Storage

Kubernetes wasn't built to handle stateful applications. Container Attached Storage (CAS) tools like OpenEBS, however, can help overcome this challenge. Learn how to leave dev workarounds behind with CAS.
Aug 11th, 2021 3:00am
Featured photo by Steve Johnson on Unsplash.

Kubernetes can be downright magical in the way it orchestrates container-packaged microservices. But it’s always had a major design quirk: It wasn’t built to handle stateful workloads — databases and key/value stores, for instance, or any other app that saves client data from its activities to use in future ones. Containers were built to handle stateless apps, with a priority on keeping them flexible and portable.

This situation has presented numerous challenges for developers. Among them: bottlenecks. If teams within an organization customize their storage policies to match their workloads — yet share data storage — then application deployment can slow down, as stateful applications churn through the “input/output blender,” vying for priority.

And yet, stateless applications need to work in tandem with stateful ones, says Kiran Mova, co-founder and chief architect of MayaData. “As an industry, we like to talk about stateless and serverless,” Mova said. “But they don’t exist in isolation, they always work on top of some state that has to be stored somewhere.”

Furthermore, 55% of participants in the Cloud Native Computing Foundation’s 2020 survey said they run stateful applications in containers in production. The survey, which saw more than 1,300 responses, found that only 22% of survey participants don’t run stateful apps in containers.

Container Attached Storage (CAS) has emerged to help developers avoid the I/O blender entirely. It’s designed to allow Kubernetes (K8s) to handle stateful workloads. CAS consists of microservice-based storage controllers that are orchestrated by Kubernetes; they can run anywhere that K8s does—public cloud, on-prem, bare metal. (CAS can also run on a traditional Storage Area Network, or SAN.) The solution allows developers to set up elastic block storage for their stateful apps’ data.

“CAS provides the types of features expected in mature enterprises,” said Chris Evans, a consultant and industry analyst who founded Architecting IT. “This is providing resilient storage for both scale-up and scale-out applications.”

The Container Attached Storage Solution

Before the advent of Container Attached Storage, developers working with Kubernetes had to get creative with workarounds in order to handle stateful applications, according to Evans.

“Developers have needed to rely on scripts and other home-developed automation that can be used to track the location of data,” Evans told The New Stack. “These solutions aren’t scalable and [are] subject to errors — and ultimately, data loss. Some CAS-type functionality can be achieved using external storage arrays, but the biggest difficulty is mapping the application to the external storage.

“The only other alternative is to lock an application to a node, which defeats the purpose of scale-out resiliency.”

When building at scale, these workarounds can significantly hinder developer velocity. To meet the needs of developers working with Kubernetes at scale, the CAS field has grown to include tools from Portworx, Rancher, Robin, Rook, StorageOS and MayaData.

OpenEBS, an open source CAS tool introduced by MayaData, has been a Cloud Native Computing Foundation (CNCF) sandbox project for two years. It works with multiple storage engines: cStor, Mayastor, Jiva, or Kubernetes Local Persistent Volumes (Local PV). It’s been adopted for use by enterprises including Bloomberg, Comcast, Flipkart and Verizon.

The project began with engineers at MayaData taking a look at the input-output (I/O) controller software portion of stateful applications. “Can we benefit from containerization and Kubernetes? That was the question that we asked when we started opening this project,” Mova said.

Innovations in hardware also sparked the creation of new tools, not only in storage but also in networking, he noted, and from many corners besides MayaData: “It was time to rewrite the I/O controllers. Can we write them in a better way?”

The Advantages of Container Attached Storage

In a CAS, data is accessed via containers, rather than being stored off-platform. It allows developers to set their own block sizes, backup policies and replication patterns — without needing authorization from a central storage authority before deploying.
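
In practice, those per-workload policies are typically expressed through an ordinary Kubernetes StorageClass that the team owns. The following is a rough sketch, assuming OpenEBS’s cStor CSI provisioner; the class and pool names are placeholders, and exact parameter names vary by OpenEBS release:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-repl3          # hypothetical per-team class name
provisioner: cstor.csi.openebs.io    # OpenEBS cStor CSI driver
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-disk-pool  # placeholder pool name set up by the platform team
  replicaCount: "3"                  # each volume keeps three replicas

Another team could define a sibling class with a different replica count, a different pool or a different storage engine entirely, without touching anyone else’s configuration.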

The key advantages of CAS include:

  • It’s native to Kubernetes, built to work with it from the start.
  • Each workload and team can create its own system for handling data storage.
  • Each workload can use its own storage engine.
  • The tools are open source, so there’s no danger of getting locked in with a vendor that may not last.
  • Data is kept locally in the Kubernetes cluster and can be replicated to other hosts as needed.
  • Data storage is horizontally scalable.

In a CAS, the component parts of a traditional storage controller have been decomposed into parts that can run autonomously. Treating storage as microservices also means input/output (I/O) is distributed, so that the “I/O blender” bottleneck ceases to be a concern.

Another advantage of a CAS: enhanced observability. “One of the architectural points of CAS is that you know exactly where your data is,” Mova said. “If you use a distributed system, when you write some file or data or some user information, it gets into the cloud and it can get distributed anywhere. But with this setup, you know, exactly [that] it’s coming from this service and landing onto this database. And this database is writing it down to these particular nodes. And these nodes are writing to these disks.

“You get that kind of visibility — and all of this, you can get through kubectl commands. So administrators have higher flexibility into seeing what and how it works, which also helps them to operationalize and implement new policies very easily.”

Inside OpenEBS

To use OpenEBS, the platform site reliability engineers (SREs) set up Kubernetes nodes with the required storage. Next, either the SREs or the Kubernetes administrators will set up OpenEBS and create storage classes.

Developers can then create stateful workloads with Persistent Volume Claims (PVCs); OpenEBS creates the corresponding Persistent Volumes using its data engines, CSI and Kubernetes extensions.

The PVs, running on data engines like cStor, Mayastor or Jiva, in turn create Target Volumes, which replicate on other nodes. Local volumes can also be created.
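
From the developer’s side, that request is just an ordinary PersistentVolumeClaim pointed at an OpenEBS-backed storage class. A minimal sketch, reusing the hypothetical class from the earlier example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-db-data
spec:
  accessModes:
    - ReadWriteOnce                       # block volumes are typically single-writer
  storageClassName: openebs-cstor-repl3   # hypothetical OpenEBS class; any OpenEBS-backed class works the same way
  resources:
    requests:
      storage: 10Gi

Once the claim is bound, OpenEBS provisions the Persistent Volume behind it and, for replicated engines such as cStor, the target and replica pods that serve it.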

It all looks like this:

OpenEBS Architecture

[Diagram: control plane]

[Diagram: data plane]

In practical terms, Mova said, this means that a developer who is creating a stateful app and needs access to a database platform like MongoDB can use OpenEBS to run the container’s data storage just like any other application within the Kubernetes cluster.

“You can bring up your minikube or K3s, you can spin up a MongoDB on it, the persistent volume will be given by the CAS,” he said. “It can actually use your host storage that’s available to provide that volume. So it gives you that experience of using the standard constructs of Kubernetes to create a stateful workload with a volume. But it’s actually using the resources that are available on the developer’s machine to provide that.”
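
As a concrete illustration of that workflow, the sketch below runs a single-replica MongoDB on a dev cluster such as minikube or K3s, persisting data through OpenEBS’s Local PV hostpath engine. The class name openebs-hostpath is the one a default OpenEBS install typically creates; adjust it to match the cluster:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo      # a headless Service named "mongo" is assumed for stable pod DNS
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4
          volumeMounts:
            - name: data
              mountPath: /data/db          # MongoDB's data directory
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: openebs-hostpath # assumed Local PV hostpath class
        resources:
          requests:
            storage: 5Gi

Applying this with kubectl on a laptop exercises the same claim-and-provision flow a production cluster would use, with the host’s disk standing in for a storage pool.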

For a developer, this all means greater autonomy and speed — and greater flexibility, enabling them to use the tools that work best for them in each use case. “It decouples the platform teams a little bit from the application teams, it gives more agility to the application teams to run with the stack that they want,” Mova said. “This is the promise of the cloud native, of the microservices approach, right?”

Case Study: OpenEBS at Bloomberg

At Bloomberg, the financial-data and media giant, engineers in the data and analytics infrastructure group began taking a look at OpenEBS about two and a half years ago, as part of the group’s exploration of CAS tools, according to Steven Bower, its lead. The group provides a suite of compute (artificial intelligence, stream processing) and data services (RDBMSes, search, NoSQL) to the company’s system engineers, to help them build applications for customers.

Bower’s team, he told The New Stack, focuses on ease of use, reliability and flexibility. “We have lots of different use cases and lots of workloads with each use case, so having a single tool that enables us to model and implement solutions for those use cases is critical.”

OpenEBS was chosen by Bower’s team as its CAS solution for two reasons, he said. “First, the deployment model of having lots of distinct storage clusters per workload fits the model in which we implement our higher-level service offerings.”

Secondly, OpenEBS is open source, he said, “which fits our ideal interaction model for all the data and analytics infrastructure solutions we are bringing into Bloomberg.”

The data and analytics infrastructure group has been using OpenEBS in two use cases thus far. The first is a pilot system that allows users to spin up databases and other data services through an API on Kubernetes. “We have hundreds of active services running on top of this platform on a daily basis,” Bower said.

The second use case is a build system that requires NFS; the team has built on top of ephemeral OpenEBS volumes to simplify the system and reduce its dependence on legacy file servers.

Overall, Bower said, “the biggest advantage I’ve seen is that [OpenEBS] offers a consistent model for storage regardless of the storage engine or whether that storage is local or distributed. This allows us to solve lots of use cases without needing engineers to learn lots of different tools.”

Challenges Yet to Be Solved

OpenEBS carries the advantages of CAS tools, though some challenges remain:

  • Scale-out volumes are not supported; only volumes whose capacity can be served from a single node are. However, the creators of OpenEBS believe the need for large volumes will shrink as more workloads move into Kubernetes. Storage device capacities have grown in the past few years from 2TB up to 16TB or 32TB, which is more than a single PVC would require, Mova said. “OpenEBS is used as the building block to implement data platforms (like databases) that in turn can support providing PetaBytes of storage to their applications,” he said. In the future, “the components of the data platforms for ease of manageability and scale will only require lower capacity from individual block volumes.”
  • ReadWriteMany (RWX) use cases are supported via NFS layered on top of block storage volumes, as sketched after this list. Such use cases are often better served by object, key/value or API-based interfaces, which offer more control and efficiency.
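
For reference, an RWX request of that kind is still just a PersistentVolumeClaim; the sketch below assumes a hypothetical openebs-rwx class standing in for whatever NFS-over-block class a given cluster exposes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-assets
spec:
  accessModes:
    - ReadWriteMany               # many pods can mount the volume read-write
  storageClassName: openebs-rwx   # assumed NFS-over-block class; the real name depends on the install
  resources:
    requests:
      storage: 20Gi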

Some data engines may work better than others with OpenEBS.

A recent study by Architecting IT compared OpenEBS with products from StorageOS, Rook and Rancher’s Longhorn. It found that OpenEBS had much higher I/O latency than the other CAS tools. The report suggested the cStor storage engine was the culprit, since OpenEBS performed better in separate tests that used other storage engines.

Regarding the use of storage engines with OpenEBS, Mova said, “cStor is targeted at applications for small- and medium-sized clusters that are primarily looking for simplicity of operations rather than performance. OpenEBS Mayastor and Local PV engines are targeted at applications that need faster performance.”

In general, said Evans, who wrote the Architecting IT study, CAS tools “are still light on some traditional enterprise features,” notably monitoring, and data security and mobility.

He praised OpenEBS’ “pluggable” architecture, which allows it to work with multiple storage engines to suit different application workload profiles. However, he said, greater availability of MayaData’s open source storage engine, Mayastor (still in beta), will be “crucial” to OpenEBS’ success.
