Last week, at the Cloud Native Computing Foundation’s KubeCon+CloudNativeCon 2017, much of the buzz was around service meshes, with everyone curious as to why they might need one now that they’ve installed the Kubernetes open source container orchestration engine. The topic drew a standing-room-only audience to our pancake podcast panel discussion, which we captured for your listening pleasure on this latest episode of The New Stack Analysts podcast.
People packed the room to hear more about the technology from our panel, moderated by TNS founder Alex Williams:
- Borys Pierov, National Center for Biotechnology Information, DevOps tech lead
- William Morgan, Buoyant, CEO
- Kris Nova, Heptio, advocacy boss
- Joab Jackson, The New Stack, news editor
On that very morning, Buoyant launched Conduit, its next-generation service mesh developed specifically for Kubernetes. It instantly joined the list of contenders in this nascent service mesh market, alongside Lyft’s Envoy, Istio and Buoyant’s own Linkerd.
In short, a service mesh is a set of networking software for service-to-service communications. It follows the principles of service-oriented architecture (SOA), but instead of a centralized enterprise service bus (ESB), it utilizes a set of lightweight network proxies, attached to each service as “sidecars.” This frees developers from thinking about the supporting infrastructure, letting them write each service in their language of choice. And for the admin, a service mesh brings a lot of built-in capabilities, such as rate limiting, load balancing, service discovery and circuit breaking.
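To make one of those built-in capabilities concrete, here is a minimal circuit-breaker sketch in Python. The class and parameter names are hypothetical, not taken from any particular mesh; the point is only to show the pattern a sidecar proxy applies on the application’s behalf:

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: after repeated failures, stop calling
    the failing service until a cooldown period has passed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures  # consecutive failures before the circuit opens
        self.reset_after = reset_after    # seconds to wait before trying again
        self.failures = 0
        self.opened_at = None             # timestamp when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast without touching the unhealthy service.
                raise RuntimeError("circuit open: request short-circuited")
            # Cooldown elapsed: close the circuit and allow a trial request.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

In a real service mesh this logic lives in the sidecar proxy, so the application code never has to implement it at all.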
The panel agreed that as microservices-based applications are increasingly handled by Kubernetes, a service mesh of some sort will be required. Pierov recounted how his organization, NCBI, had been using service discovery tools for a while, but had difficulty scaling them as the number of services started to increase dramatically.
Kubernetes offers some flexibility in Layer 4 load balancing, for instance, but it can only load-balance entire connections, not individual requests. And while Kubernetes may require a service mesh, a good service mesh is not limited to running only with Kubernetes, Morgan pointed out. Indeed, a service mesh can also handle application-level communication with virtual machines, serverless deployments, even bare-metal deployments. An organization steeped in VMs could use a service mesh to build an SOA and then, over time, skip containers entirely and proceed directly to serverless.
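The distinction between connection-level and request-level balancing can be sketched in a few lines of Python. This is a hypothetical illustration, not code from any mesh: the proxy consults the balancer for every request, whereas a Layer 4 balancer would pick a backend only once, when the connection is opened:

```python
import itertools


class RequestBalancer:
    """Round-robin across backends on every request, not once per connection."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        # A connection-level (L4) balancer makes this choice once, when the
        # TCP connection is opened; a service-mesh proxy can make it for each
        # individual request, spreading load even over a single long-lived
        # connection.
        return next(self._cycle)
```

With long-lived connections (gRPC, HTTP/2), per-connection balancing can pin all of a client’s traffic to one backend; per-request balancing avoids that hot spot.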
The advent of the service mesh owes a lot to the pioneering extensible architecture of Kubernetes, Nova pointed out.
“Ultimately [a service mesh is] a software encapsulation of an advanced problem we’re solving. And really this is what Kubernetes is good at: allowing users to encapsulate custom business logic, or in this case, custom network logic, into software and run it in a very flexible way,” Nova said. “So when I hear people talk about service mesh, back of my mind I’m always like, ‘Yay, Kubernetes is doing what we designed it to do from day one.’”
In This Edition:
4:16: What is a service mesh?
12:21: The evolution of service mesh in the context of Kubernetes.
14:14: What information a service mesh can help capture.
20:24: How service mesh can help organizations still on legacy environments.
25:39: Conduit’s use of Rust for data plane security.
36:12: Conduit and distributed tracing.