NetApp sponsored this podcast, which is included in our forthcoming ebook “The State of State: New Approaches to Cloud Native Storage for Developers.”
Part of the transition to DevOps that comes with cloud native application development has been a shift in responsibility for storage, away from dedicated specialists and toward developers, who are increasingly responsible for provisioning storage for the applications they build.
“People do not want storage to be a complicated task,” said Chris Merz, principal technologist at NetApp. “It is a piece of infrastructure. It should be simple, it should be scalable, it should be self-healing. It should follow the same patterns as the systems that DevOps practitioners and cloud native architects are building every day.”
Before Kubernetes, building and operating container-based applications was onerous — it involved manually handling tasks like DNS management, load balancing, scaling and resource monitoring. Now the Kubernetes ecosystem handles all of that — but there needs to be a way to get the same level of automation for storage, Merz said.
An Open Source Storage Orchestrator for Containers
Trident, an open source project developed and maintained by NetApp, acts as a storage orchestrator, abstracting away some of the complexities (and decision-making) from developers looking to provision storage. Developers don’t have to worry about the details of how the storage works — Trident integrates Kubernetes with NetApp’s on-premises and cloud-based storage products.
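In practice, this orchestration surfaces to developers as ordinary Kubernetes objects. As a rough sketch (the provisioner name and `backendType` parameter follow Trident's documented conventions, but the class name, namespace and sizes here are hypothetical), a cluster administrator defines a StorageClass backed by Trident, and a developer simply requests storage against it with a PersistentVolumeClaim:

```yaml
# StorageClass (admin-defined): delegates provisioning to Trident's CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-nas              # hypothetical class name
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"     # assumes an ONTAP NAS backend has been configured
---
# PersistentVolumeClaim (developer-defined): requests storage with no backend details.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # hypothetical size
  storageClassName: ontap-nas  # must match the StorageClass above
```

Trident watches for the claim and provisions a matching volume on the backing NetApp system; the developer never interacts with the storage array directly.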
“You can kind of think of Trident as a storage concierge for containers and Kubernetes,” Merz said. You don’t have to worry about anything; the concierge just takes care of it.
Using a cloud native storage orchestrator that’s integrated into Kubernetes solves some of the most vexing problems of the application lifecycle. It not only makes persistence provisioning dramatically easier and faster, but also makes it easier to set up the persistence layer correctly for monitoring and observability as the application runs in production. This is important for any application, but even more so for stateful applications.
“Stateful applications tend to be systems of record or something more core to your application framework,” Merz said. Given that effective, enterprise-scale monitoring involves thousands of metrics, you need something that will automate the telemetry of your persistence layer every time a container with storage is provisioned.
Whatever the application architecture, Merz said, the challenges remain essentially the same: scale and control. For an enterprise-scale company, scalability is essential. Control, in the form of security, observability and data management, is also critical. It’s entirely possible to get the same scalability and control in a cloud native application that you have with enterprise-class storage. It just requires using different tools — and making sure the developers who are now in charge of provisioning storage have the knowledge and tools to make the right choices and set up the storage correctly.
In this Edition:
2:42: Merz’s take on the DevOps transformative journey
4:30: What have been the lessons you think about in this new world of containers and immutable infrastructure?
14:27: What are some of the points of view you have about teams, workflows, and CI/CD and how they fit together?
17:09: With developers increasingly having that operations role, does the workflow change that much for them at all, or is the change minimal?
19:15: How critical is that in thinking about designing stateful applications to run on containers and Kubernetes?
23:36: With observability as the alpha and omega of DevOps, and as you start to see the outer reaches of what it offers, what challenges and tradeoffs do you see coming for your own infrastructure and deployments?
Feature image via Pixabay.