Portworx sponsored The New Stack’s coverage of KubeCon+CloudNativeCon North America 2019.
Last week at KubeCon+CloudNativeCon North America 2019, the co-hosted Cloud Native Storage Day provided an update on the current state of the cloud native storage ecosystem. I had the honor of delivering the keynote, which covered the evolution of state in Kubernetes, the role of the Container Storage Interface (CSI), and the road ahead.
Here is an abstract of what I presented at the Cloud Native Storage Day.
The State Is Always an Afterthought
For platform architects, persistence and state have always been an afterthought. Amazon EC2 became available in beta in 2006 with no support for durability and persistence. It took a good two years for Amazon to launch Elastic Block Store (EBS) as the storage layer for EC2. The same was the case with Google App Engine, the industry’s first platform-as-a-service offering. Even Windows Azure, the original incarnation of Microsoft’s public cloud, lacked support for persistence. Azure Disks were launched four years after the availability of the initial web role and worker role of Azure.
The cloud native universe is no different. Docker containers based on the union filesystem started as ephemeral units of deployment. Support for volumes came a few years later, after containers started to gain traction. The same was the case with Apache Mesos and Kubernetes.
But production systems need two critical capabilities among others: scalability and reliability. This forced the ecosystem to make statefulness as important as statelessness. In traditional hyperscale cloud environments, auto-scaling of VMs handles scalability. Amazon EC2’s launch configurations and auto-scaling groups handle the scalability and availability of stateless workloads. Google Compute Engine and Azure VMs implement similar features through instance groups and VM scale sets respectively. Durability is often delegated to object storage, block storage, file systems, key/value stores, and managed databases.
Kubernetes tackles the scalability and availability of applications through controllers such as deployments and stateful sets. Implementing readiness probes, liveness probes, and health checks ensures that workloads are available, while features such as horizontal pod autoscaling ensure the scalability of apps. Durability in Kubernetes is managed through persistent volumes and persistent volume claims, which act as the foundation of state.
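The persistent volume claim mentioned above can be sketched as a short manifest; the claim name and requested size here are illustrative, not from the talk:

```yaml
# Hypothetical claim requesting 10Gi of ReadWriteOnce storage from
# the cluster's default storage class; names are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce       # mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi
```

A pod then mounts the claim by name through a `persistentVolumeClaim` volume, leaving the choice of backing storage to the cluster.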
The takeaway from this discussion is that state is always an afterthought for platform architects, while it’s the most critical requirement for customers.
The Evolution of State in Kubernetes
Early adopters of Kubernetes had to rely on storage primitives such as emptyDir and hostPath to add durability to applications. But running traditional databases in Kubernetes demanded advanced capabilities that went beyond these primitives.
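To illustrate how limited those primitives are, here is a minimal pod using emptyDir; the pod name and image are hypothetical. The volume shares the pod’s lifecycle, so the data disappears as soon as the pod is deleted:

```yaml
# Illustrative pod using the emptyDir primitive: the volume lives
# only as long as the pod itself.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod       # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /cache   # ephemeral scratch space
  volumes:
    - name: scratch
      emptyDir: {}
```

hostPath is only marginally better: the data survives pod restarts but is pinned to a single node, which is exactly what makes it unsuitable for databases.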
Customers turned to distributed file systems such as NFS, GlusterFS, and Ceph to add a persistence layer that cut across the nodes. But managing these file systems was not integrated with Kubernetes tools and workflows. Storage administrators had to manually install and configure them on every node before deploying workloads.
With the rise of managed Kubernetes, cloud providers exposed block storage through storage classes and dynamic provisioning. Customers could attach EBS volumes, GCE persistent disks, and Azure disks to Kubernetes worker nodes running in AWS, GCP, and Azure.
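Dynamic provisioning works by declaring a storage class that names a provisioner; any claim that references the class triggers the creation of a volume on demand. A minimal sketch for the in-tree AWS EBS provisioner (the class name is illustrative):

```yaml
# Hypothetical storage class backed by the in-tree AWS EBS
# provisioner; a PVC naming this class gets a gp2 volume on demand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-standard      # illustrative name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2               # EBS volume type
reclaimPolicy: Delete     # delete the volume when the claim goes away
```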
In the last two years, the cloud native ecosystem created a new category for container-native storage. Vendors such as Portworx, Red Hat, Robin, Diamanti, StorageOS, and Kasten started to offer niche storage solutions that abstracted the underlying storage exposed through block storage, SAN, NAS, and direct-attached storage.
Today, enterprise customers have a wide range of choices in selecting the right storage solution for cloud native stateful workloads.
CSI as the Turning Point
Initially, upstream Kubernetes had integrated volume plugins for a wide range of storage backends, covering everything from basic primitives to vendor-specific plugins. While this simplified the life of cluster administrators, it came with many side effects.
The upstream Kubernetes distribution became bloated with over a dozen storage plugins. Any minor update or change to a plugin meant rebuilding and compiling the entire codebase. This slowed down the potential growth of the storage ecosystem.
The Container Storage Interface (CSI) was designed to address these challenges. First, CSI provides an orchestrator-agnostic approach to dealing with storage: Docker, Mesos, Kubernetes, and Cloud Foundry can use the same CSI driver to manage the lifecycle of storage volumes. Second, it decouples the implementation of storage drivers from the orchestration engine, which means that every vendor can create and manage their drivers independently without having to rebuild the entire source tree. Third, CSI promotes portability by enabling customers to easily switch from one storage implementation to another with minimal configuration changes.
CSI for Kubernetes became generally available in January 2019 with Kubernetes v1.13 release.
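With CSI, the switch between storage backends described above shows up in a manifest as little more than a different `provisioner` value naming the CSI driver. A sketch using the AWS EBS CSI driver (the class name is illustrative):

```yaml
# Sketch of a storage class backed by a CSI driver; the provisioner
# field names the driver rather than an in-tree plugin.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-block         # illustrative name
provisioner: ebs.csi.aws.com   # AWS EBS CSI driver
volumeBindingMode: WaitForFirstConsumer   # bind when a pod is scheduled
```

Moving to a different vendor’s CSI driver means changing the provisioner and its parameters; the claims and pods that consume the class stay the same.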
Traditional storage vendors such as NetApp, HPE, VMware, and Pure Storage; cloud providers such as Azure, AWS, GCP, IBM, and Oracle; and pure-play container storage companies like Portworx and Robin are switching to CSI.
CSI is a major milestone in the journey of running stateful workloads on Kubernetes in production. It delivers confidence to customers in porting traditional applications to Kubernetes while maintaining portability.
The Road Ahead
The Storage Special Interest Group (SIG) within the Kubernetes community is working towards bringing additional capabilities like volume snapshots, volume binding modes, volume cloning, and local volumes. These features make it easy to perform storage operations in a cloud native way, using existing toolchains and workflows.
Commercial vendors within the ecosystem are pushing the envelope by delivering advanced functionality such as workload migration, policy-driven security, application-aware backup and restore, business continuity, and disaster recovery.
The container-attached storage market is growing rapidly. In the coming years, it is poised to become one of the largest market segments within the cloud native ecosystem.
The Cloud Native Computing Foundation and KubeCon+CloudNativeCon North America 2019, Red Hat, Diamanti, Cloud Foundry, VMware, NetApp and Oracle are sponsors of The New Stack.