Defining a Different Kubernetes User Interface for the Next Decade
If the 2010s were the decade of rapid mainstream Kubernetes adoption, the 2020s need to be the decade where Kubernetes management matures to offer a more satisfying, productive user experience. This is why it is so important to examine the Kubernetes User Experience (UX).
There’s no doubt that Kubernetes continues to be “hard to do” in many ways. Monitoring is difficult, magnified by quickly evolving environments and new pressures on infrastructure and on the teams that manage and optimize it. Automation has provided relief but has also introduced new challenges in meeting the availability requirements of applications that are becoming more resource-intensive.
What’s more, in 2020, we have kicked off a new decade with global uncertainty and drastically different table stakes when it comes to security, business continuity, and the ability of the tech team to deliver business results.
All this means the industry needs to dedicate itself, immediately, to reimagining the standard Kubernetes playbook. That process needs to prioritize, first and foremost, the real Kubernetes managers and developers who are shaping the future of container technology today.
The industry is starting to make strides in this direction. The creation of the Kubernetes Code of Conduct and Code of Conduct Committee has helped formalize standards and norms for this burgeoning community and increasingly influential technology. But this is just a start. It will take further alignment between individuals, companies, and the practitioner community to standardize Kubernetes practices for the coming years.
According to 451 Research, the market for application container technologies in 2022 is expected to grow to $4.3 billion. Redefining the Kubernetes user experience won’t just be a best practice — it will represent a big, billion-dollar business.
So, What Does a “Better” Kubernetes UX Look Like?
- It must deliver a consistent experience and set of tools whether the management is happening in an on-premises or cloud environment.
- It must focus on complex applications that are composed of one or more components (like a NoSQL datastore, a messaging service, and a logging service), in addition to the single-purpose as-a-service database clusters or messaging services that constitute the bulk of present-day Kubernetes deployments.
- It must enable users to take advantage of critical enterprise-class data management capabilities for running business-critical K8s applications and DevSecOps pipelines, addressing data protection, disaster recovery, app portability, migration, audit, retention, and governance use cases.
In the following sections, I will share how I think we as a Kubernetes community can come together to accomplish the above — but I want to also state in no uncertain terms: the challenge of better UX is not something any one person or company will be able to solve. We must continue to discuss and share amongst this community and challenge each other to elevate this technology for the years to come.
Deliver a Consistent Data Experience in Any Environment
Most enterprises deploy and run Kubernetes in a diverse set of environments, including on-premises and major public clouds. Some companies pick a distribution of Kubernetes to run in all environments, while others run a mix of K8s distros, both fully managed and self-managed, based on business requirements, available K8s skill sets, and the proximity of application teams to cloud provider regions.
Organizations that standardize on a K8s distribution usually do so for two reasons:
- To let their developers and admins abstract away the environment on which their K8s clusters are hosted. They then need to train staff to manage, maintain, operate, and administer only one K8s platform, everywhere.
- To allow seamless migration and movement of stateful workloads across multiple environments where their clusters are hosted to enable use-cases that include cloud bursting, disaster recovery, and migration.
The two reasons above are not always mutually exclusive. However, the data layer underneath these commercial K8s distributions does not offer a consistent interface and in general lacks critical capabilities, making it hard for users to achieve the promise of multi- and hybrid-cloud portability for stateful workloads. Instead, users simply get locked into the environment where their data resides, invalidating one of the key architectural tenets of standardizing on a Kubernetes distribution across multiple environments.
The Kubernetes community has taken some initial steps to address these issues — as is evident in the efforts of SIG Storage — but much more needs to be done.
Focus on Applications, not Components
Many K8s applications are purpose-built for a single use case, such as a Kafka cluster that provides messaging services or a scale-out database-as-a-service cluster that provides data stores. A data protection and DR strategy for such apps is usually implemented using the built-in capabilities these applications provide, and it is very specific to the app, with minimal interaction and control from K8s. While this may work well for purpose-built, single-use apps, the lack of control from K8s makes it hard to manage the data life cycle of apps once they become part of a more complex application, which is often the case with enterprise applications.
Such complex enterprise apps may include a database or a NoSQL data store, a messaging service, a logging functionality, and homegrown business logic that glues everything together. For such apps, a component-by-component data protection and Disaster Recovery (DR) strategy that is largely outside the purview of Kubernetes does not work. The app and its data need to be protected holistically with data protection and DR mechanisms that work for all components that comprise the application. At the same time, the entire process needs to be controlled and overseen by Kubernetes.
The broader K8s community needs to converge on and standardize a normalized set of “actions” which have app-specific semantics and triggers for situations such as when a freeze/unfreeze (prior/after doing a snapshot) is requested. An app-specific response for such normalized actions can then be implemented by ISVs as well as enterprise application developers. This will enable tighter control and coordination with K8s so that the data life cycle of K8s applications can be managed by K8s in a unified and consistent manner.
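To make this concrete, here is a minimal Python sketch of what such normalized actions might look like: a freeze/unfreeze (quiesce/unquiesce) hook interface that ISVs or app developers implement per component, and an orchestrator that invokes every hook around a snapshot so the application is protected as a whole. All names here (`DataLifecycleHook`, `PostgresHook`, `snapshot_app`) are illustrative assumptions, not an existing Kubernetes API.

```python
from abc import ABC, abstractmethod


class DataLifecycleHook(ABC):
    """App-specific response to a normalized action, implemented by an ISV
    or enterprise application developer for one component of the app."""

    @abstractmethod
    def quiesce(self) -> None:
        """Freeze writes so a consistent snapshot can be taken."""

    @abstractmethod
    def unquiesce(self) -> None:
        """Resume writes after the snapshot completes."""


class PostgresHook(DataLifecycleHook):
    """Illustrative component hook; a real one would talk to the database."""

    def __init__(self) -> None:
        self.events: list[str] = []

    def quiesce(self) -> None:
        self.events.append("postgres: writes paused")

    def unquiesce(self) -> None:
        self.events.append("postgres: writes resumed")


def snapshot_app(hooks: list[DataLifecycleHook]) -> list[str]:
    """Orchestrator side: quiesce every component of the application,
    snapshot, then unquiesce in reverse order, so the app is protected
    holistically rather than component by component."""
    log = ["quiesced: all components"]
    for hook in hooks:
        hook.quiesce()
    log.append("snapshot: taken for all components")
    for hook in reversed(hooks):
        hook.unquiesce()
    log.append("unquiesced: all components")
    return log
```

Because the action names are normalized while the responses are app-specific, Kubernetes (or an operator acting on its behalf) can drive the same sequence against a database, a message queue, and custom business logic without knowing their internals.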
The availability of such standardized actions and triggers will lead to a more consistent realization of Kubernetes data management for cloud native applications.
Data Protection, Recovery, and Governance
The Kubernetes community and ecosystem are still scratching the surface when it comes to delivering the enterprise-class data management and services crucial for running business-critical applications and DevSecOps pipelines.
Enterprise companies expect data protection and disaster recovery for cloud native applications. The inception of the Data Protection WG in the CNCF, whose charter includes solving some of these problems, is a great development, but data governance for cloud native applications is another area where more focus is needed.
Kubernetes offers developers enormous flexibility to run workloads and create data anywhere. That flexibility can lead to interesting challenges for organizations that must comply with regulatory and data residency requirements. Controls on where and how cloud native data is created, and where it can be transferred, will need further work.
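One possible shape for such a control is an allow-list policy checked before data is created or moved. The sketch below models this in Python; the policy table, classification labels, and `admit_volume` function are hypothetical, and a real deployment might enforce something like this in a validating admission webhook rather than application code.

```python
# Hypothetical data-residency guardrail: map each data classification to the
# regions where that data may be created or transferred.
RESIDENCY_POLICY: dict[str, set[str]] = {
    "pii": {"eu-west-1", "eu-central-1"},   # personal data must stay in the EU
    "general": {"eu-west-1", "us-east-1"},  # non-sensitive data can roam wider
}


def admit_volume(classification: str, region: str) -> bool:
    """Allow a volume (and the data it will hold) in a region only if that
    region is on the allow-list for the volume's data classification.
    Unknown classifications are rejected by default."""
    return region in RESIDENCY_POLICY.get(classification, set())
```

Rejecting unknown classifications by default keeps the failure mode conservative: data whose residency requirements have not been declared cannot silently land in a non-compliant region.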
As a community, we are headed in the right direction. However, it is important that we come together and continue to focus on improvements that will make K8s easier to work with and scalable for enterprises, regardless of size.