What’s Next for Service Meshes

KubeCon + CloudNativeCon sponsored this post.
Service meshes have emerged as a must-have for microservices and Kubernetes management. Given the enormous complexity of these environments, a service mesh provides the management and observability required for cluster visibility and tracing, among other things.
Among several open source options, Linkerd has emerged as a leading service mesh. But as William Morgan, CEO of Buoyant, the company that manages the Linkerd project, explains, service meshes are just as much about solving a people problem: coordinating and managing the work and collaborations of different developers.
“I was an engineer in a previous life, and a lot of the service mesh conversations that we have tend to be very engineering conversations around, well, the feature set and when you’re going to support x and y. And in reality, what we found is that Linkerd is very good at solving what is actually a human problem, which is, especially as a company or an organization grows, you have lots of people trying to do things at the same time,” Morgan said. “What Linkerd allows you to do is make the lives of developers easier, and also platform owners, by moving a lot of the functionality that they would otherwise be responsible for down to the platform layer, where they don’t have to worry about it.”
The role of Linkerd, and of service meshes in general, as computing becomes more cloud native-centric was discussed during this latest edition of The New Stack Makers podcast with Morgan, hosted by Alex Williams, founder and editor-in-chief of The New Stack, along with Tom Petrocelli, research fellow at Amalgam Insights. How service meshes are expected to evolve in the near future was also discussed during this podcast, recorded at KubeCon + CloudNativeCon in Barcelona.
Morgan described how an engineering manager leading teams can find managing distributed workforces as challenging as managing teams onsite.
“I don’t think it really matters where teams are located geographically — it’s more about how you are operating with them and are trying to accomplish some kind of shared purpose, such as shipping some new code,” Morgan said. “But we’re also all working on independent parts and we want to move as rapidly as possible, and we also have these kinds of coordination points where we do have to accomplish something.”
On a more technical level, the structure and interface that service meshes offer are also particularly well suited to microservices and Kubernetes platforms.
“One of the things that has happened because of this move towards microservices, and in particular clusters, is that centralized resources, like centralized networking resources, are just not designed for that environment. They’re not designed for these very highly distributed environments,” Petrocelli said. “So, we’ve started to move more and more into the actual, say, Kubernetes pod, or into the microservice itself. And, as you know, that meant for a long time that you had to build that into your code: you had to write your microservice with networking code in it, and that’s not what software developers do — they want to write business logic.
“So, what we’ve been able to do with service meshes is to create all those services without having to burden developers with the process, but at the same time, we’re using a more appropriate way of doing that kind of networking than, say, what centralized appliances do.”
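By way of illustration, here is a minimal sketch of what moving that responsibility to the platform layer can look like with Linkerd; the deployment name, image and port below are placeholders rather than anything discussed on the podcast. The only mesh-specific piece is the standard linkerd.io/inject annotation on the pod template, which asks Linkerd to add its sidecar proxy, so concerns such as mutual TLS, retries and request metrics are handled by the platform instead of application code.

```yaml
# Hypothetical Deployment manifest: names and image are placeholders.
# The application container contains no service-mesh or networking logic;
# the linkerd.io/inject annotation opts the pod into the mesh so Linkerd's
# injected sidecar proxy handles that layer.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
      annotations:
        linkerd.io/inject: enabled   # request sidecar proxy injection
    spec:
      containers:
        - name: app
          image: example.registry/app:latest   # placeholder image
          ports:
            - containerPort: 8080
```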
In this Edition:
3:01: Distributed workforces.
9:45: Priorities for distributed systems.
12:41: How governance is critical to the success of the organization.
16:58: The evolution of the conversation surrounding service mesh.
22:47: Keeping track of the service mesh landscape.
29:41: The democratization issue of security.