How Portworx Solves the Problem of Running Stateful Workloads in Containers
When containers were first introduced, they were not ready to run stateful workloads such as databases and content management systems. Docker eventually added features such as volumes, which attempted to solve the problem by exposing the host's filesystem to the container.
The challenges of running stateful workloads multiply when you start using a container orchestrator such as Kubernetes. While it is extremely easy to scale stateless containers in Kubernetes, managing the uptime of a database is complex. Kubernetes attempted to ease this through StatefulSets — a purpose-built controller that brings the ease of a ReplicaSet to stateful workloads. The Pods of a StatefulSet follow a well-defined, predictable naming convention for service discovery, which makes it easy to run a master/slave setup as a Kubernetes StatefulSet. This opened the door to porting legacy database workloads to containers: existing topologies such as master/slave and active/passive map easily to a Kubernetes StatefulSet.
Kubernetes StatefulSets primarily focus on service discovery and the health of the Pods. For example, if a MySQL node running in a cluster dies, the StatefulSet controller brings back the failed Pod with exactly the same name. The master rediscovers the node and resumes replicating data to it.
What Kubernetes StatefulSets don't address is the availability of the underlying storage. While the StatefulSet controller tackles the compute scheduling problem, it leaves storage orchestration to the underlying engine.
Initially, customers used GlusterFS or NFS as a distributed storage layer spanning all the nodes of a Kubernetes cluster. Since the storage is available on every node, a Pod can access its data irrespective of where it gets scheduled. Though this topology makes data available across the cluster, it doesn't meet the requirements of a production database workload with high throughput and IOPS demands.
The other challenge is that storage operations and administration tasks are performed outside of Kubernetes. Legacy storage admins don't understand Kubernetes primitives, while cloud native developers and DevOps teams find it hard to deal with legacy storage.
Portworx as a Container Native Storage Platform
Portworx is designed from the ground up to be a container-native, orchestration-aware storage fabric. It does what Amazon EBS does for Amazon EC2 instances, but for containers. It aggregates underlying storage and exposes it as a software-defined, programmable block device. Similar to EBS, container developers and operators don't need to know how the physical storage is managed. They use the familiar workflow of defining a PersistentVolumeClaim (PVC) and consuming the resulting PersistentVolumes (PVs) within their Pods.
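This workflow can be sketched with a minimal PVC and a Pod that consumes it. The StorageClass name `portworx-sc` and the other resource names here are illustrative assumptions, not names Portworx creates for you:

```yaml
# A claim against an assumed Portworx-backed StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: portworx-sc   # hypothetical StorageClass name
  resources:
    requests:
      storage: 10Gi
---
# A Pod that mounts the dynamically provisioned volume.
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-data
```

Note that nothing in the Pod spec refers to the physical disks; the claim is the only storage interface the developer sees.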
Deployed as a DaemonSet, Portworx installs itself into each node of the cluster. It discovers the available storage to create a container-native block storage device.
Portworx can aggregate existing SAN or cloud-specific block storage and expose it as container-native storage. For example, a three-node Google Kubernetes Engine cluster can expose an aggregate pool of storage built from the persistent disks attached to each node: if each node has a 100GB SSD attached, Portworx presents a 300GB pool of raw storage that can be consumed through PVCs and PVs.
One of the key benefits of using Portworx is high availability of data. When a StorageClass is created for a workload, you can specify a replication factor that defines the redundancy of the dataset. Every block of data written to the volume is automatically replicated to other nodes of the cluster, maintaining consistent replicas of the data across the cluster. When a Pod is deleted and recreated by a Kubernetes controller, no matter which node it lands on, it always has access to its data. This architecture effectively decouples data from compute, letting stateful Pods enjoy the same flexibility as stateless Pods.
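As a sketch, a replication factor of three can be declared as a StorageClass parameter. This example assumes the in-tree `kubernetes.io/portworx-volume` provisioner; newer Portworx deployments may use a CSI driver instead:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-repl-3
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"   # keep three synchronous replicas of every volume
```

Any PVC that references this class gets a volume whose blocks are mirrored across three nodes, so the data survives the loss of a node.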
Last year, Portworx introduced a custom scheduler for Kubernetes in the form of STORK — STorage Orchestration Runtime for Kubernetes. STORK assists Kubernetes in placing a Pod on a node where the data of its associated PVC is already present. It frees DevOps teams from maintaining complex pairs of annotations and labels to achieve node affinity for Pods. With STORK, stateful workloads always find their way to the right node, even during a catastrophic failure.
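Opting a workload into STORK is a one-line change to the Pod spec: set `schedulerName` to `stork` (this name assumes a default STORK installation):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      schedulerName: stork   # let STORK place the Pod near its data
      containers:
        - name: mysql
          image: mysql:5.7
```

Everything else in the Deployment stays the same; only the scheduler making the placement decision changes.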
Portworx effectively bridges the gap between traditional storage administration and cloud native DevOps. It provides native primitives for classic storage operations such as expanding volumes, taking backups, restoring volumes, migrating data, and even defining QoS policies for volumes.
The next time you find yourself running a stateful workload in production, don't rely on legacy distributed filesystems like NFS or GlusterFS. Evaluate Portworx for its consistent, container-native, orchestration-integrated approach to storage.
In one of the upcoming articles, I will demonstrate how to use Portworx to achieve data portability across different Kubernetes clusters.