Aspen Mesh sponsored this podcast as part of a series of interviews discussing how service meshes help DevOps. The other parts of the series cover how Istio is built to boost engineering efficiency.
The adoption of a service mesh is increasingly seen as an essential building block for any organization that has opted to make the shift to a Kubernetes platform. Because a service mesh offers observability, connectivity and security checks for microservices management, the underlying capabilities, and ongoing development, of Istio are critical to its operation and, eventually, its standardization.
In the second of The New Stack Makers' three-part podcast series featuring Aspen Mesh, correspondent B. Cameron Gain opens the discussion about what a service mesh really does and how it is a technology pattern for use with Kubernetes. Joining the conversation were Zack Butcher, founding engineer at Tetrate, and Andrew Jenkins, co-founder and CTO of Aspen Mesh. We also cover how a service mesh, and especially Istio, helps teams get more out of containers and Kubernetes across the whole application life cycle.
A service mesh helps organizations migrate to cloud native environments by bridging the management gap between on-premises data center deployments and containerized cloud environments. Once implemented, a service mesh should, if functioning properly, reduce much of the enormous complexity of this process. In fact, for many DevOps team members, the switch to a cloud native environment and Kubernetes cannot be done without a service mesh.
In a typical environment split between on-premises servers and multicloud deployments, a service mesh provides the “common substrate,” by enabling “communication of those components that need to communicate across these different environments,” Butcher said.
“That’s where the identity and security aspects of investment [involve] enforcement of an organization’s regulatory controls in place,” he continued. “All of my environments that are consistent and [those] that I can prove to an auditor are consistent are enforced across all of these environments.”
“The centralized control and consistency that service mesh gives you is incredibly useful for helping bring sanity to the kind of craziness that is this split infrastructure world, this kind of multicloud, on-premises world,” said Butcher.
Ultimately, organizations are latching on to service meshes as an answer for “not just a deployment problem,” but as a way to “integrate all the pieces together” during a cloud native journey, explained Jenkins.
“There is an end-state goal that you want to have, by unlocking developer efficiency by having developers be able to move fast on smaller components that are all stitched up into an integrated experience for users — but you have to get there from here from wherever you are,” Jenkins said. “And so we find that organizations use service mesh a lot to help out with that evolutionary path. That involves taking where we are now, moving some pieces kind of into more of the cloud native model and developing new cloud native components, but without leaving behind everything that you’ve already done.”
At the same time, organizations are benefiting from how service mesh technology, and Istio in particular, has matured. With the recent Istio 1.6.3 and 1.6.4 releases, for example, one of the more notable qualities is that the software is “really boring — and that’s good,” Jenkins said.
It is now easier, for example, to “circle back and flesh out requirements, making sure that we adopt organizational requirements, policies and things like that,” Jenkins explained. “So, that’s just a great example of kind of the maturity side on this to the other thing that’s been kind of developing over a couple releases and is getting more and more mature.”
The other main feature in development is WebAssembly support, a way to extend Istio, and especially the sidecar Envoy proxy, in a “more portable and rapidly evolving way, rather than having to build some very low-level components in the system,” Jenkins said. “I think that’s going to be great because it will allow developers to extend kind of the capabilities to service mesh — but without all of that having to happen in this crowded core, where stability is an extremely important concern and that can be a natural drag on innovation. So this capability opens up the web assembly front that allows us to do both: stability and an open door for innovation.”
However, there are still some cases where a service mesh is not needed — despite the hype. In other words, service mesh is not the end-all solution for all DevOps. “I don’t think it’s honest to say, ‘hey, everybody absolutely must use this new thing,’” Jenkins said. “There are actually problems where you don’t need Kubernetes and you may not need containers at all or if you look at serverless, for example.”
As organizations ponder what to adopt to facilitate their software development and deployment goals, there are hundreds of open source tools and solutions from which to choose today. “There’s always this continuum of what pieces you need,” Jenkins said. “And it’s definitely [not] the case that all problems are solved by a service mesh and require a service mesh.”