This week and next, The New Stack will be running a series of posts on the value that a service mesh brings to Kubernetes deployments. Here is the first installment. Check back often for more updates.
As we explore the tools and additional infrastructure layers that complement Kubernetes, it’s important to remember: none of this implies that Kubernetes is lacking. Kubernetes dramatically simplifies running containerized applications, but there are many things it was simply never intended to do. Service meshes are one such complementary piece of infrastructure, handling things that Kubernetes cannot do and was never meant to.
“The Kubernetes team at Google and the Istio team at Google were neighbors and were discussing these things,” explained Varun Talwar, CEO of Tetrate and one of the original creators of the Istio service mesh.
“There is a set of things a service mesh can do, but it’s not because Kubernetes sucks,” explained William Morgan, CEO of Buoyant, which offers commercial support for the open-source service mesh Linkerd. “It’s because Kubernetes is really good but it has a well-defined scope.”
A service mesh is generally made up of sidecar proxies attached to every pod in an application, managing communication between the services — thus, the name. Service meshes handle functions that sit outside Kubernetes’ role: specifically security, observability and routing. There are other ways to accomplish these goals, usually in application code or in shared libraries. Both techniques work in smaller organizations, but for organizations that need centralized control, a service mesh is a way to ensure internal governance policies are enforced by a central platform team. “The real question is how complex is your system?” said Idit Levine, CEO of Solo.io. If it’s small and simple, a service mesh is unnecessary. If it’s large and complex, it’s crucial.
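As a sketch, the sidecar pattern amounts to an extra proxy container sitting next to the application container in each pod’s spec. The names and images below are illustrative only; in practice, meshes inject this container automatically rather than asking developers to write it by hand:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: checkout            # illustrative application pod
spec:
  containers:
  - name: app               # the application container
    image: example/checkout:1.0
    ports:
    - containerPort: 8080
  - name: mesh-proxy        # the injected sidecar proxy
    image: example/proxy:latest   # e.g. Envoy or linkerd2-proxy
    # The proxy transparently intercepts the pod's inbound and
    # outbound traffic, so every service-to-service call passes
    # through the mesh, where it can be encrypted, measured and routed.
```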
In fact, Matt Klein, creator of the Envoy data plane, encourages organizations to use a service mesh only if truly needed. “Simple is always better,” he said. “However, if an organization has decided to go ahead with a microservice architecture, there’s a number of challenges around observability and networking that service mesh can solve.”
Here’s how a service mesh paired with Kubernetes helps organizations get better security, observability and control over routing and load balancing.
Security isn’t in Kubernetes’ scope, and the default configurations can leave organizations vulnerable. Individual application developers can handle things like encryption and access control in code and by manually managing configurations, but this relatively ad hoc approach leaves organizations open to errors.
With a service mesh, it’s possible to ensure that encryption and granular access control rules are put into place organization-wide, in a way that can be centrally controlled.
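For example, in Istio, a single mesh-wide policy can require mutual TLS for all service-to-service traffic. This is a sketch of that policy; applying it in Istio’s root namespace makes it apply to every workload in the mesh:

```yaml
# Istio example: require mutual TLS for all workloads in the mesh.
# Placing the policy in the root namespace (istio-system) makes it mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject any plaintext service-to-service traffic
```

Because the sidecar proxies terminate the TLS connections, application code never has to manage certificates or encryption itself.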
“Where we fall short with just Kubernetes is the actual networking aspect of making sure all these things can actually communicate with each other at a rapid pace with very intricate policies and security procedures,” explained Jonathan Holmes, chief technology officer at Decipher Technology Studios, the company behind greymatter.io, a security-focused service mesh. Companies need security baked in, but in a way that works with distributed, ephemeral architectures, ideally without slowing down development or deployment.
Service meshes make it possible to control and encrypt east-west traffic, or traffic between services inside a cluster. An API gateway, a technology service meshes are often compared to, controls only north-south traffic (traffic entering and leaving the cluster). The ability to control both east-west and north-south traffic creates a better security posture than controlling one or the other.
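Granular east-west access control can be expressed declaratively. Here is a hedged sketch using Istio’s AuthorizationPolicy resource; the namespace and service names are illustrative:

```yaml
# Istio example: only the "orders" service account may call "payments".
# All other east-west traffic to "payments" is denied by this ALLOW rule.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-orders
  namespace: prod              # illustrative namespace
spec:
  selector:
    matchLabels:
      app: payments            # applies to the payments workload
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/prod/sa/orders"]
```

The central platform team can manage policies like this across the fleet without touching application code.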
“About 60% of our customers would say their primary reason for using a service mesh is security,” Levine said. “The other 40% would say observability.”
“By default, Kubernetes would give you the health of your pod and the CPU memory utilization of your pods and nodes, which is good for infrastructure people to know,” explained Talwar. “Yes, Kubernetes was installed successfully and is not consuming too many resources. But it doesn’t tell the customer who deployed their app, how is that doing? And that is the primary thing they are looking for.”
A service mesh is a way to get better information about what is happening at the application level. “When a request is coming to your cluster, you don’t really know what happened,” Levine said. “Every outage is a murder mystery series. People do not know where to start to look for where the problem is.”
Kubernetes on its own provides visibility into layers three and four, but a service mesh brings visibility to the application layer, giving organizations insight not just into the health of Kubernetes but into the health of each service and of the overall application.
Security and observability are probably the most common reasons organizations implement a service mesh, but it can also help control load balancing and routing. Kubernetes’ built-in Service load balancing operates at layers three and four, but without a service mesh, responsibility for layer-seven routing and load balancing falls on the application developer.
As with security and observability, routing logic can be packed into application code, but a service mesh takes it out of the service and makes central administration and organizational governance policies possible.
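Layer-seven routing rules then live in configuration rather than code. As an illustration, this Istio VirtualService sketch splits traffic between two versions of a service, a common canary-release pattern (service and subset names are hypothetical):

```yaml
# Istio example: layer-seven traffic split, sending 10% of requests
# to a canary version of the "reviews" service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                 # illustrative service name
  http:
  - route:
    - destination:
        host: reviews
        subset: v1          # subsets are defined in a DestinationRule
      weight: 90
    - destination:
        host: reviews
        subset: v2          # the canary version
      weight: 10
```

Shifting the weights rolls the canary forward or back without redeploying either version of the service.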
A service mesh makes it possible to make fleet-level changes to security, observability or routing rules. Anything controlled by the sidecar proxy can be changed across thousands of services by a central team; making the same changes at that scale would be essentially impossible if routing, security and visibility were handled in application code, and Kubernetes alone offers no way to make them fleet-wide.
The other thing to remember is that Kubernetes essentially makes a service mesh possible. Kubernetes makes it very easy to deploy the sidecar proxies that service meshes depend on. “We could have done this in the olden days of Chef and Puppet,” explained Morgan. “We could have said, ‘I’m going to deploy 10,000 proxies.’ But it would have been crazy. With Kubernetes, it’s very easy to just say, hey, stick this sidecar next to every pod.”
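In practice, “stick this sidecar next to every pod” is often a one-line change. With Istio, for instance, labeling a namespace opts every pod created there into automatic sidecar injection (the namespace name here is illustrative):

```yaml
# Istio example: this label tells Istio's injection webhook to add
# the sidecar proxy to every pod created in the namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: prod                     # illustrative namespace
  labels:
    istio-injection: enabled
```

Kubernetes’ admission webhooks do the rest, which is exactly the kind of automation that would have been impractical in the Chef-and-Puppet era Morgan describes.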