Choosing Between Container-Native and Container-Ready Storage
Containers have revolutionized the way developers create and deliver applications. The impact is huge; we’ve had to re-imagine how to store, protect, deliver and manage data. Software developers have a significant opportunity to meet the challenges posed by disaster recovery (DR), data protection, recovery point objective (RPO) granularity, data gravity and other data storage and management issues.
Kubernetes and other container environments see all resources as services. This means a storage-services layer is required, although perhaps it would be better to call this a data-service layer because this services layer must provide far more resource capability than simply storing data.
Into this discussion about the new container data-services layer come two main approaches: container-native and container-ready. Given the mixed messaging and differing opinions in the market, the differences between the two are somewhat unclear. However, even a high-level comparison reveals critical distinctions. The two approaches each offer different things, and it’s important to evaluate which solution makes the most sense for your enterprise’s needs.
CNS vs. CAS
The container-attached (CAS) or container-ready approach is attractive because it uses existing traditional storage — typically external arrays — attached to the Kubernetes environment using software shims. Thus it promises the ability to reuse existing investments in storage and may make sense as an initial bridge to the container environment. It can be effective for those experimenting with containerization, planning to do so at a smaller scale or as an adjunct to traditional monolithic IT environments.
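As a concrete illustration of the "software shim" approach, a container-attached setup typically surfaces an existing array LUN to Kubernetes through a vendor's CSI driver and a statically provisioned PersistentVolume. The driver name and volume handle below are hypothetical placeholders, not a specific product:

```yaml
# Statically provisioned PersistentVolume backed by an external array LUN,
# exposed through a (hypothetical) vendor CSI driver.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: array-lun-42
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.example-array.com   # hypothetical CSI driver for the array
    volumeHandle: lun-42            # identifier the array uses for this LUN
---
# A claim that binds a pod to the pre-provisioned volume above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: ""              # empty string: skip dynamic provisioning
  volumeName: array-lun-42
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```

Note that the LUN itself still has to be carved out and managed on the array, outside of Kubernetes — which is exactly the operational seam this approach carries over from traditional storage.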
What’s different about container-native storage (CNS) is that it is built for the Kubernetes environment. CNS is a software-defined storage solution that itself runs in containers on a Kubernetes cluster. Kubernetes is designed for container orchestration. Everything inside of Kubernetes is a resource, and every resource is managed and orchestrated by Kubernetes.
Kubernetes spins up more storage capacity, connectivity services and compute services as additional resources are required. It copies and distributes application instances. If anything breaks, Kubernetes restarts the affected workloads somewhere else. As the orchestration layer, Kubernetes generally keeps things running smoothly. CNS is built to be orchestrated, but container-ready or container-attached storage isn’t easily orchestrated.
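A container-native layer, by contrast, typically ships as a CSI provisioner running in pods on the cluster itself and exposes a StorageClass, so Kubernetes can create volumes on demand as claims appear. The provisioner and class names below are hypothetical sketch values:

```yaml
# StorageClass for a (hypothetical) container-native provisioner that
# itself runs in pods on the cluster; volumes are created on demand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cns-fast
provisioner: cns.example.io              # hypothetical CNS CSI driver name
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer  # provision where the pod is scheduled
---
# A claim against the class; Kubernetes orchestrates volume creation,
# binding and (if allowed) later expansion with no out-of-band array work.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  storageClassName: cns-fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

The difference from the container-attached sketch is that nothing here references a pre-existing device: the storage lifecycle is fully inside the Kubernetes control loop.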
Kubernetes offers agility, reduced operational complexity and lower cost, but container-attached storage adds friction that limits the full benefits of Kubernetes. Keeping with the Kubernetes seafaring theme, attaching traditional storage to Kubernetes environments is like throwing an anchor overboard. The rationale for adopting Kubernetes is the desire for unified, simple, self-orchestrated IT. The separate storage and data management required for container-attached approaches can’t scale, can’t adapt and can’t respond at the speed required for Kubernetes. It might work acceptably in a small cluster of two or three nodes, but as soon as you start to scale, you’ll realize the limitations.
A possibly more significant point is that traditional approaches separate primary and secondary data. Primary is live, present-time data. Secondary is protected data (backups or snapshots): copies of data as it existed at a single point in time in the past. This entire concept is upended in the Kubernetes world. The concept of secondary data in Kubernetes is reminiscent of Henry Ford’s “faster horse” analogy. In implementing containers, we are rethinking IT entirely from the ground up. It makes no sense to drag old methods of data protection, like backup and snapshots, into this new world.
A Radical Shift
The storage time-space continuum gets a rewrite with Kubernetes-native storage, which offers a unified data-services layer that provides high-speed, agile “primary” storage – live data as it exists in the present. It does this just as easily and fluidly as it provides “secondary” storage (instantly accessible clones of live data as it existed at any previous point in time). The entire concept of data being here or there, current or previous, evaporates. The container-native storage layer offers instant access to data anywhere and from any time. Kubernetes is free to orchestrate what is needed and when.
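In Kubernetes API terms, that "instantly accessible clone" can be expressed declaratively: a VolumeSnapshot captures a point-in-time state of a live claim, and a new claim can be hydrated from it via a dataSource reference. The claim and class names below are hypothetical examples, assuming a CSI driver that supports the snapshot API:

```yaml
# Point-in-time snapshot of a live claim (snapshot.storage.k8s.io API).
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-data-snap
spec:
  volumeSnapshotClassName: cns-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: db-data     # hypothetical live claim
---
# A new claim hydrated from the snapshot: "secondary" data served back
# as ordinary, immediately usable primary storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data-restore
spec:
  storageClassName: cns-fast               # hypothetical storage class
  dataSource:
    name: db-data-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

Because both objects are just cluster resources, Kubernetes can orchestrate the capture and the restore the same way it orchestrates everything else — no separate backup console required.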
It may seem difficult to hold this new paradigm in your head, but it may help to imagine how difficult it would be for a 19th-century farmer to grasp the idea of a combine harvester. But don’t let a hard paradigm shift leave you stuck in the past. Faced with disruptive technology, legacy vendors have long resorted to a common playbook: Give a nod to the innovation, pile on FUD (fear, uncertainty and doubt) to delay adoption, then promise to offer the innovation once it has been fully tested and proven.
That model once successfully stymied innovation while protecting the revenue streams of established vendors. Thankfully, today’s cloud-first, hack-everything DevOps approach thwarts that ruse, and container-native is now largely accepted as the way forward.
What’s Your Use Case?
Organizations have many storage options today, and they need more storage than ever. With containers added to the mix, the decisions can become harder to make. Which approach will best serve your use case? You need to understand the difference between container-attached and container-native storage to answer this question. For new applications, Kubernetes is ideal because it makes orchestrating changes across containers a breeze. Carefully consider your needs and your management capabilities, and choose wisely.
To learn more about cloud native storage and other cloud native technologies, consider coming to KubeCon+CloudNativeCon North America 2021 on Oct. 11-15.