The Must-Haves for Making the Shift to Cloud Native
Twistlock sponsored this post.
The IT industry’s collective shift to cloud native remains, amid the hype, in its early stages. But among those organizations that have made the jump, the need to find the right tools to untangle the enormous complexity involved becomes immediately, and painfully, obvious. Service meshes, which have emerged to help make sense of it all, are thus increasingly seen by DevOps teams as essential, rather than just another nice-to-have on the long list of options on offer today for Kubernetes and microservices.
Another key enabler for cloud native, often described as GitOps, is empowering developers to easily, quickly and, ideally, seamlessly make updates to code running on Kubernetes and microservices far to the left in the production pipeline.
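The mechanic underlying GitOps is a reconciliation loop: the desired state of the system lives in Git, and an operator continuously converges the running state toward it. The sketch below is purely illustrative; the function, service names and state representation are made up for this example and do not correspond to any particular tool’s API.

```python
# Minimal GitOps-style reconciliation sketch (illustrative only).
# In a real setup, desired_state would be read from manifests in a
# Git repository and actual_state from querying the cluster; here
# both are stubbed as plain dicts of service -> version.

def reconcile(desired_state, actual_state):
    """Return the operations needed to converge actual toward desired."""
    ops = []
    for name, version in desired_state.items():
        if name not in actual_state:
            ops.append(("create", name, version))
        elif actual_state[name] != version:
            ops.append(("update", name, version))
    for name in actual_state:
        if name not in desired_state:
            ops.append(("delete", name))
    return ops

desired = {"payments": "v2", "checkout": "v1"}
actual = {"payments": "v1", "legacy-cart": "v3"}
print(reconcile(desired, actual))
```

Because every change flows through Git, developers ship by merging a pull request, and operations review the desired state rather than gatekeeping each deploy.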
Monitoring and observability throughout the process are, of course, essential.
These and other themes were discussed during a podcast Alex Williams, founder and editor-in-chief of The New Stack, recently hosted at KubeCon + CloudNativeCon in Seattle with Alexis Richardson, CEO of Weaveworks, and Andrew Clay Shafer, senior director of technology at Pivotal.
The historical roots of service meshes, like the early beginnings of DevOps as a concept, can be traced back well over five years, with Netflix serving as a now-famous early adopter. The advent of service meshes began as part of “patterns that emerged in all of these campaigns… or for a better term cloud native, where they’re doing some kind of client-side load balancing,” Shafer said.
“And then similar to how you see the emergence of container scheduling, this is just taking and codifying that pattern in an API or specification that gives you a common way to configure this thing,” Shafer said.
The “dominating proxy,” at least for now, seems to be Envoy, while it “doesn’t necessarily specify how it should be,” Shafer said. “And then there’s also more and more of this sort of functionality that’s being put into this… while some of it is service mesh, some of it is like the sidecar all the things.”
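The client-side load balancing pattern Shafer describes, which Netflix-era libraries embedded in each application and which service meshes later moved into a sidecar proxy configured through a common API, can be sketched in a few lines. This is an illustrative round-robin picker with made-up endpoint addresses, not the API of Envoy or any real library:

```python
import itertools

class RoundRobinBalancer:
    """Rotate through known endpoints on each request -- the
    client-side load balancing pattern that service meshes later
    codified behind a sidecar proxy and a common configuration API."""

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        """Return the next endpoint to send a request to."""
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
print([lb.pick() for _ in range(4)])  # wraps back to the first endpoint
```

A mesh takes this same decision out of the application: the app talks to its local sidecar, and the sidecar applies the balancing policy pushed down from a central control plane.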
This shift to the left in the production pipeline also means the roles in Dev and Ops can be different. All changes to code are done through the interaction between the Dev and Ops teams, Richardson said.
“If you want to go fast, you need to empower the developers who make the changes to the apps to do that more often with their own ways of checking the tests passed and [making sure] everything is correct and good,” Richardson said. “So, that means that you need to take some of the operations folks out of the flow of delivery a little bit.”
In that way, operations best practices “are being baked into the platform,” Shafer said. “It’s not like you abdicate responsibility for those kinds of considerations — it’s just that you’re letting the software do the right thing for you.”
Another key factor in “the desire to go faster” is “separating off the human beings from getting in each other’s way,” Richardson said. “I think another very important factor is that these bigger companies tend not to throw anything away. It’s that they can turn into enterprises themselves, so you need to have a service layer where you can roll out different versions of services that run alongside each other. And there are so many reasons you might want to do this.”
In this Edition:
1:39: The roots of service mesh technologies
5:59: Would it be accurate to say we’re seeing more application-centric approaches?
12:02: How are you seeing your customers starting to use service mesh technologies?
17:58: How do you think about the security then, in a context like that where it’s hopping around so much?
20:01: What do you mean by “GitOps”?
27:06: What are the things you’re thinking about going forward and trying to clarify about service meshes?
The Cloud Native Computing Foundation, KubeCon + CloudNativeCon, Pivotal and Twistlock are sponsors of The New Stack.