Application Storage Is Complex. Can Automation Simplify It?

An application typically needs three layers to run: a network, for connectivity; storage, to hold the information; and compute, to process it. Among these three, compute has been the primary driver of change over the past two decades, with requirements swelling as applications moved from running on bare metal to virtualization to containerization.
This expanding compute layer has increased complexity for both the stack and the people responsible for managing it.
However, this paradigm shift in compute has in turn affected storage, especially storage for containerized workloads. As enterprises adapt to the way applications are delivered in a hybrid and multi-cloud world, storage encryption and app mobility in particular are giving DevOps teams headaches.
To solve this, one of the most established companies in the space — 38-year-old Dell Technologies — is developing new ways for its Container Storage Modules to make life easier for developers and engineers, including the release of two new modules focused on the challenges of encryption and app mobility. But before we explore the solution, we need to dig deeper into the problem.
How Storage Challenges Evolved
In the beginning, there was bare metal: Applications were monoliths and ran directly on the hardware. Transactional databases used storage, which was protected by products made by companies like Dell or EMC (or, after the former company acquired the latter in 2016, Dell EMC).
Then, virtualization — in its heyday at the start of the century — added new layers of complexity. Nivas Iyer, senior principal product manager for Kubernetes and cloud native data protection at Dell, pointed to how a company like VMware accelerated the use of virtual machines in that period, using them to consolidate servers and frontend applications.
The result is, “a minimal footprint of servers, causing data center footprints to shrink. And also compute, in the same regard, was taken one step up.”
With that progress came more complexity. Virtualization “introduced a new set of challenges, in that the storage systems now had to talk to the virtualization layer, and through that to the application,” Iyer told The New Stack.

How storage needs have grown more complex as the architectures that support applications evolved from traditional to virtualization to containerization. (Image courtesy of Dell Technologies.)
When the VMs themselves needed storage, VMware introduced the virtual machine disk file (VMDK), a file format for virtual machines, which is now an open file format. Microsoft introduced the virtual hard disk (VHD), to do much the same thing — though the two require converter software to be compatible, because of course they do.
As a result, “the volume of storage expanded, and also the secondary storage, or the data protection aspect,” Iyer said. “With this expansion came new interfaces, resulting from requirements for the storage system to talk to the virtualization layer. All of these new functionalities, when combined, created a true ecosystem around storage.”
What this means: an increased scope of responsibility and cognitive load on DevOps teams, particularly on developers and storage admins.
More Players, More Demands
As the computing layer grows more complicated and the accompanying storage needs more expansive, more people are needed to manage it all, and they need new skills, said Iyer.
“So now you have more players,” he said. “Earlier on, it was simple: storage admins, server admins, network admins.” But the dawn of VMs demanded the introduction of virtual admins, who understood virtualization, including the ecosystem of tools around it and how they integrate.
Similarly, the introduction of containers, Kubernetes (K8s) — and the sprawling landscape of products and services that have sprung up around K8s — demanded new skills and knowledge to navigate advanced APIs and interfaces.
As demand for these niche but deep skill sets rose, the isolation and disconnect between Devs and Ops became more noticeable than ever. To compensate for the lack of skills and processes, workarounds and a “shadow IT” culture quickly became the norm in the name of getting things done.
Enterprises realized the risk this posed to the business and pivoted to the common DevOps practices we see today. These practices not only streamlined how developers and operations teams collaborate, but also laid the foundation for admins to use automation to reduce manual effort, skill gaps, and the need for workarounds. This automation is particularly relevant for developers and storage admins.
Take, for example, a Kubernetes-run system with containerized applications, where storage is requested dynamically. Introducing advanced storage capabilities into this environment, such as replication and resiliency, requires communication with the Kubernetes container storage interface (CSI) API. To do that, K8s needs plug-ins, and potentially automation layered on top of them, to provide those capabilities and make life easier for the staff managing the storage.
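As a rough illustration of what “requested dynamically” means in practice, here is a minimal sketch using the official Python Kubernetes client. The StorageClass name is a placeholder for whichever CSI driver-backed class a storage admin has published; the claim and namespace names are likewise assumptions for illustration.

```python
# Minimal sketch: a containerized app asks Kubernetes for storage dynamically.
# "dell-csi-powerstore" is a hypothetical CSI driver-backed StorageClass name.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "dell-csi-powerstore",  # placeholder class name
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# Kubernetes hands this claim to the CSI driver named by the StorageClass,
# which provisions a volume on the backing array; the pod that mounts the
# claim never talks to the array directly.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

Everything beyond that basic request, such as replication, resiliency, and observability, is where those plug-ins and automation come in.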
This is the particular challenge that Dell has sought to address with its Container Storage Modules.
How Container Storage Modules Fit In
In 2019, Itzik Reich, now Dell’s vice president of technologists, infrastructure solutions group, shared plans at the company’s user conference to integrate Dell’s storage arrays with the Kubernetes CSI API. As detailed in a blog from Reich two years later, attendees approached him with concerns after that 2019 presentation: How would they keep track of what’s happening in the storage arrays?
In the wake of that event, Reich wrote, “I gathered a team of product managers and we started to think about upcoming customer needs. We didn’t have to use a crystal ball, but rather, as the largest storage company in the world, started to interview customers about their upcoming needs re: K8s.”
Those conversations bore fruit. In August 2021, Dell released the first of its Container Storage Modules (CSM). The first group of CSMs was designed to handle authorization, observability, replication, resiliency, and snapshots. Those CSMs are:
- Authorization: Gives Kubernetes administrators the ability to apply role-based access control (RBAC) and quota rules that instantly and automatically restrict a cluster tenant’s usage of storage resources.
- Observability: Provides a single pane management experience for Kubernetes/container administrators using Grafana and Prometheus dashboards.
- Replication: Makes array replication capabilities possible for Kubernetes users, extending data protection and disaster recovery planning to Kubernetes workloads.
- Resiliency: Protects against node failures by enabling K8s node failover. This module keeps track of persistent volume health, detects node failures (power failure), K8s control plane network failures, and array I/O network failures, and gracefully migrates the protected pods to correctly functioning hardware.
- Snapshots: Builds on top of CSI-based snapshots for operational recovery and data repurposing. In addition to point-in-time recovery, the snapshots are writable; they can be mounted for test/dev and analytics use cases, with no impact on production volumes. (A sketch of the underlying CSI snapshot request follows this list.)
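To show the primitive the Snapshots module builds on, here is a minimal sketch that requests a CSI-based VolumeSnapshot through the Kubernetes API. The VolumeSnapshotClass, PVC, and namespace names are assumptions for illustration, not names any Dell module requires.

```python
# Minimal sketch: request a CSI-based snapshot of an existing claim.
# "powerstore-snapclass" is a hypothetical VolumeSnapshotClass name.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "app-data-snap"},
    "spec": {
        "volumeSnapshotClassName": "powerstore-snapclass",  # placeholder class
        "source": {"persistentVolumeClaimName": "app-data"},
    },
}

# The CSI snapshotter turns this object into an array-level snapshot; a new
# PVC can later point its dataSource at the snapshot to mount a writable copy
# for test/dev or analytics without touching the production volume.
custom.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```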
The storage modules build on Dell’s decades of knowledge of data storage and give developers the ability to tap into automated storage infrastructure, which can boost their overall development productivity.
These modules can be deployed on all of Dell’s primary storage arrays, including Dell PowerFlex, Dell PowerStore, Dell PowerScale, Dell PowerMax, and Dell Unity XT.
New Encryption and App Mobility Tools
In late September, Dell added two more modules to its CSM portfolio: app mobility and encryption, both currently in tech preview. As is the case with the previously available CSMs, the new modules are open source.

The current lineup of Dell CSMs; app mobility and encryption are in tech preview, while volume placement is still in the development stage. (Image courtesy of Dell Technologies.)
The app mobility module solves a particularly pressing issue for Dell’s customers.
“We live in a multi-cloud world,” said Iyer. “We have applications that can run in multiple places, and sometimes I may need to move the applications for various reasons. Maybe I have a test dev environment, now I can graduate it to production. And if I need, for example, to move workloads from one cloud to another, or maybe from edge to data center and back to edge, CSM App Mobility helps with that.”
“The app mobility aspect is basically enabling me to take the entire application as a whole, and then move it to another place and start it there.”
The new module lets K8s administrators clone their stateful application workloads and application data to other clusters, either on-premises or in the cloud. It uses the Velero backup tool and its integration with the Restic data mover to copy both application metadata and data to object storage. It supports not only the Kubernetes container orchestrator but also Red Hat OpenShift, RHEL, and CentOS.
The new app mobility module can help back up and restore apps, clone them, or change persistent volume (PV) and persistent volume claim (PVC) storage classes.
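For a sense of what such a clone looks like under the hood, the sketch below creates a Velero Backup object through the Kubernetes API; restoring that backup into another cluster is what moves the workload. The backup name, application namespace, and storage location are assumptions for illustration, not resource names the module prescribes.

```python
# Minimal sketch: ask Velero to back up a stateful app and its volume data
# to object storage. Names here are placeholders for illustration.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

backup = {
    "apiVersion": "velero.io/v1",
    "kind": "Backup",
    "metadata": {"name": "web-shop-clone", "namespace": "velero"},
    "spec": {
        "includedNamespaces": ["web-shop"],  # the stateful app to clone
        "storageLocation": "default",        # object storage bucket target
        # Ships volume data via Restic; newer Velero releases call this flag
        # defaultVolumesToFsBackup.
        "defaultVolumesToRestic": True,
    },
}

# Velero writes the app's Kubernetes metadata and, via Restic, its volume
# contents to object storage; a Restore object in the target cluster brings
# the whole application back up there.
custom.create_namespaced_custom_object(
    group="velero.io",
    version="v1",
    namespace="velero",
    plural="backups",
    body=backup,
)
```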
“Data being fluid is very important, along with the application context,” Iyer said. “We’re just adding the application context on top of it, to allow the application to move with the data.”
Outside of app mobility, Dell’s storage products, Iyer said, “support low-level encryption right out of the box.” But the new CSM Encryption module, which encrypts data both at rest and in motion, goes a step further in keeping storage safe.
“Security is a holistic piece,” he said. “We have authentication modules, for example, making sure the right personnel or the right teams are basically being able to access it. We also have the encryption aspect, using tools like Vault by HashiCorp. We can store the encryption keys, encrypt the data, and basically ensure information is stored in the right format that we need.”
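As a rough sketch of the key-management pattern Iyer describes, the example below parks a volume encryption key in HashiCorp Vault with the hvac Python client and reads it back. The Vault address, secret path, and key material are placeholders; they are not how the CSM Encryption module actually lays out its keys.

```python
# Minimal sketch: store and retrieve an encryption key in HashiCorp Vault.
# Paths and key material are placeholders for illustration only.
import os

import hvac

vault = hvac.Client(
    url="https://vault.example.com:8200",  # placeholder Vault address
    token=os.environ["VAULT_TOKEN"],
)

# Park a per-volume data encryption key in Vault's KV v2 secrets engine.
vault.secrets.kv.v2.create_or_update_secret(
    path="csm/volumes/app-data",            # hypothetical secret path
    secret={"dek": "base64-encoded-key"},   # placeholder key material
)

# Whatever component performs the encryption fetches the key back before
# encrypting or decrypting the volume's data.
response = vault.secrets.kv.v2.read_secret_version(path="csm/volumes/app-data")
data_encryption_key = response["data"]["data"]["dek"]
```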
The encryption module supports the Kubernetes container orchestrator, as well as the RHEL, SLES, and Ubuntu operating systems.
Getting Started and Staying Updated
Several of Dell’s customers are moving to containerized microservices, Iyer said, and are in varying phases of adoption for the new CSMs. As a last piece of guidance, he advises organizations that want to experiment with the modules to start slowly, with a proof of concept.
Dell encourages back-and-forth conversations with customers to help continually improve CSM. “They give us feedback, and we do work in an agile fashion,” Iyer said.
“We have sprints every two weeks, and again, it’s open source. So it’s a very, very fast, innovative process, allowing us to be flexible when adopting, accepting, and incorporating feedback. At the end of the day, the goal is to make what Dell is creating better for the whole developer community.”