How to Find the Less Painful Path for Kubernetes Infrastructure
Dell Technologies sponsored this podcast.
Kubernetes’ arrangement of container clusters and pods is one of the more elegant computing structures this writer has observed. Its simplicity as a container orchestrator, in many ways, raises the question of why such a straightforward system was so hard to invent in the first place. Regardless, beyond the resource savings Kubernetes offers, the hype about its versatility and scaling capabilities is also well-deserved.
But then your organization decides to make the cloud native shift to Kubernetes, and suddenly the DevOps team sees a very steep learning curve ahead as it faces the often immense challenges of managing Kubernetes infrastructure.
Boskey Savla, technical product line marketing manager, modern apps for VMware, believes DevOps teams, for example, begin to think about the daunting prospect of ensuring a particular stack is safe and secure on Kubernetes. “These are the things a lot of times customers start thinking about [when adopting] cloud native architectures and they tend to think about this as an afterthought,” Savla said. “And they go all-in on Kubernetes. But then they realize, ‘okay, we need to take care of all this. How do I even scale a cluster?'”
In this episode of The New Stack Makers podcast, Savla and Chip Zoller, senior principal engineer for Dell Technologies, discuss the infrastructure challenges associated with cloud native and Kubernetes, and how the right tool choices can make the shift that much less painful.
While developers will notice the difference when the infrastructure supporting their DevOps workflows moves to cloud native, their work as software engineers — whether they are creating applications in Python, Go or any other programming language — will largely remain the same. Once they are past the learning curve, their work mainly goes on as it did before.
“Most of the time, developers don’t even understand where the underlying infrastructure is and how they manage it. They’re completely transparent to that process,” Savla said. “And what Kubernetes does is [that] it really talks to the backend infrastructure, and automates a lot of these tasks that the developer defines to make that magic happen. And so the infrastructure at that point in time has to be fluid and compatible, and Kubernetes itself needs to understand how to work with a specific infrastructure to make a lot of these things possible.”
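The automation Savla describes rests on Kubernetes’ declarative model: a developer (or a CI pipeline) states a desired end state, and Kubernetes reconciles the backend infrastructure to match it. A minimal sketch of what that declaration looks like — the application name, image and replica count here are purely illustrative, not from the podcast:

```yaml
# Hypothetical Deployment manifest: the developer declares *what* should run;
# Kubernetes decides *where*, and keeps the cluster's actual state matching this spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # illustrative application name
spec:
  replicas: 3                   # Kubernetes maintains three pods across available nodes
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # illustrative image reference
        ports:
        - containerPort: 8080
```

Nothing in this file names a machine, a disk or a network segment; that is the “magic” Savla refers to — the scheduler and controllers translate the declaration into infrastructure-level actions.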
But for the operations folks, cloud native infrastructure is a very different animal from the developer’s experience, especially when making the shift from a traditional data center. Complexities compound because organizations are tasked with managing not one, but often numerous Kubernetes clusters.
“And oftentimes what begins as a simple project turns very complicated because, now suddenly, just the upstream project is not going to help you… to ensure that a developer accessing the [Kubernetes] API is even authorized to do so,” Savla said. Each cluster, for example, has its own access-control policies, whose rules must be defined, she explained. For security, permission to open a port or create a service must be explicitly granted, representing but one of numerous infrastructure management tasks to be taken into consideration.
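The per-cluster authorization Savla describes is typically expressed with Kubernetes role-based access control (RBAC). A hedged sketch of the kind of policy an operations team has to define for every cluster — the `dev` namespace and the `app-developers` group name are assumptions for illustration, not anything from the podcast:

```yaml
# Hypothetical RBAC policy: lets members of an assumed "app-developers" group
# create services in the "dev" namespace, and nowhere else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: service-creator
rules:
- apiGroups: [""]                  # "" is the core API group (Services live here)
  resources: ["services"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: service-creator-binding
subjects:
- kind: Group
  name: app-developers             # assumed group, mapped by the cluster's auth provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: service-creator
  apiGroup: rbac.authorization.k8s.io
```

A policy like this has to be defined, kept consistent and audited per cluster — exactly the kind of work that multiplies as the number of clusters grows, and that the upstream project alone does not manage for you.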
For Day 2, production-ready Kubernetes infrastructure, tools and platforms exist to help make the shift. Dell EMC’s VxRail, for example, was designed to facilitate VMware cloud adoption. VxRail removes much of the complexity that is “inherent in running enterprise infrastructure,” Zoller said. After the Day 1 setup process, when VxRail serves as a “wizard or is even encapsulated in infrastructure code,” during Day 2 “it makes updates and upgrades a trivial task in most cases, and so Kubernetes has been layered on top of that,” he said. VxRail can then take advantage of many of those capabilities through VMware “when it comes to things like storage and networking.”
With VMware vSphere, organizations can choose among various VxRail topologies for Kubernetes deployments. “It comes down to the customer use case and simply how they want to deploy their VxRail, knowing that Kubernetes when deployed on top of that can just work with it,” Zoller said.
VMware is a sponsor of The New Stack.