
Linkerd Kubernetes Service Fabric Builds in Security

Oct 6th, 2021 10:13am

We all know network security is vital to our Kubernetes deployments, right? Of course, right. A service mesh improves network security by adding a dedicated infrastructure layer that handles service-to-service communication between microservices and balances inter-service traffic. It also helps you intelligently control the flow of traffic and API calls between services. Now, in version 2.11, open-source Linkerd has baked in a new authorization policy feature to give you fine-grained control over which services can communicate with each other.

Before diving into that, let’s talk a bit about Linkerd. It was the first of the Kubernetes service meshes and the first service mesh to join the Cloud Native Computing Foundation. William Morgan, one of Linkerd’s creators, would like to remind you that — despite what you’ve heard to the contrary — service meshes are “architecturally pretty straightforward. It’s nothing more than a bunch of userspace proxies” and some management APIs. That’s it. That’s all.

The proxies make up the service mesh’s data plane, while the management processes act as its control plane. The proxies themselves are just Layer 7-aware TCP proxies, such as Envoy, HAProxy, and NGINX. Linkerd uses its own Linkerd-specific, Rust-based micro-proxy, Linkerd-proxy.

In 2.11, Linkerd’s developers added a new authentication and security feature to the mesh called “policy.” This feature gives you precise control over which services can communicate with each other. Simple, right?

Here’s how it works. These policies are built on top of the secure service identities provided by Linkerd’s automatic, built-in mutual TLS (mTLS). mTLS automatically authenticates and encrypts all traffic between meshed services. The authorization policies are expressed in a composable, Kubernetes-native way that requires a minimum of configuration but can express many behaviors.

This new policy controller is written in Rust. It uses kube-rs to communicate with the Kubernetes API and exposes a gRPC API implemented with Tonic. Before this, Linkerd had used Go for its control plane components. Oliver Gould, CTO of Buoyant, the company behind Linkerd, explained this was “because the Kubernetes ecosystem (and its API clients, etc) were so heavily tilted to Go. Thanks to u/clux‘s excellent work on kube-rs, it’s now feasible to implement controllers in Rust. This is a big step forward for the Linkerd project and we plan to use Rust more heavily throughout the project moving forward.”

Why is this a step forward? Gould explained, “Rust’s type system makes it much harder to write buggy code at the cost of being slower to compile. It takes some getting used to, but once you’ve gotten used to it, it’s hard to go back to Go. Also, our Rust controller uses only 20-30% of the memory footprint of our Go controllers.”

On top of this new policy controller, Linkerd 2.11 introduces default authorization policies that can be applied at the cluster, namespace, or pod level simply by setting a Kubernetes annotation.

These include:

  • all-authenticated (only allow requests from mTLS-validated services);
  • all-unauthenticated (allow all requests);
  • deny (deny all requests);
  • … and more.
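
For a sense of what this looks like in practice, here is a minimal sketch of setting a namespace-wide default via annotation. The namespace name is made up, and you should confirm the annotation key and supported values against the documentation for your Linkerd release:

```yaml
# Sketch: give every pod in this (hypothetical) namespace a default inbound
# policy of "only accept requests from mTLS-validated, meshed clients."
apiVersion: v1
kind: Namespace
metadata:
  name: emojivoto                     # example namespace
  annotations:
    config.linkerd.io/default-inbound-policy: all-authenticated
```

The same annotation can also be applied to individual workloads, and the docs describe setting a cluster-wide default at install time.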

To make this more useful, Linkerd 2.11 also adds two new CustomResourceDefinitions (CRDs): Server and ServerAuthorization. Together, you can use these to set fine-grained policies across arbitrary sets of pods. For example, a Server can select the admin port on every pod in a namespace, and a ServerAuthorization can allow health check connections from the kubelet, or mTLS connections for metrics collection.
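
As an illustration of that admin-port example, here is a hedged sketch of what the pair of resources might look like. The namespace, port name, and CIDR are assumptions, and the policy.linkerd.io API versions may differ between releases:

```yaml
# Sketch: a Server that matches the named "admin-http" port on every pod in
# the namespace, plus a ServerAuthorization that lets unauthenticated health
# checks (e.g. from the kubelet) reach it.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto                # example namespace
  name: admin
spec:
  podSelector:
    matchLabels: {}                   # all pods in the namespace
  port: admin-http                    # assumes pods name their admin port this
  proxyProtocol: HTTP/1
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: admin-health-checks
spec:
  server:
    name: admin
  client:
    unauthenticated: true
    networks:
      - cidr: 10.0.0.0/8              # assumed node/kubelet source range
```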

Put it all together, and you can specify a wide range of network security policies for your cluster, from the foolish “all traffic is allowed” to “port 8080 on service Foo can only receive mTLS traffic from services using the Bar service account,” and many more. For further details on what you can do, see Linkerd’s authorization policy documentation.
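
To make that “Foo and Bar” case concrete, a sketch of the policy might look like the following, with all names (foo, bar, the namespace) purely hypothetical:

```yaml
# Sketch: only workloads running under the "bar" service account may reach
# port 8080 on pods labeled app=foo, and only over mTLS.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: default
  name: foo-8080
spec:
  podSelector:
    matchLabels:
      app: foo
  port: 8080
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: default
  name: foo-8080-only-bar
spec:
  server:
    name: foo-8080
  client:
    meshTLS:
      serviceAccounts:
        - name: bar
          namespace: default
```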

Besides this, there are many other new useful features and improvements.

For example, to make its networking more robust: until recently, for performance reasons, Linkerd only allowed retries for body-less requests such as HTTP GETs. Now, Linkerd can also retry failed requests with bodies, including gRPC requests, up to a maximum body size of 64KB.
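
Retries themselves are still configured the same way, through a ServiceProfile route marked as retryable; what 2.11 changes is that such a route can now cover requests that carry a body. A sketch, with the service and gRPC method names borrowed from Linkerd’s emojivoto demo rather than anything you would have running:

```yaml
# Sketch: mark one gRPC route on a service as retryable. With 2.11, the
# request body (up to 64KB) can be buffered so the call is retried on failure.
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: voting-svc.emojivoto.svc.cluster.local
  namespace: emojivoto
spec:
  routes:
    - name: VoteDoughnut
      condition:
        method: POST
        pathRegex: /emojivoto\.v1\.VotingService/VoteDoughnut
      isRetryable: true
```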

To avoid race conditions, Linkerd 2.11 now ensures, by default, that the linkerd2-proxy container is ready before any other containers in the pod are initialized. This is a workaround for Kubernetes’s sad lack of control over container startup ordering. Hopefully, you’ll no longer run into situations where application containers fail to connect because the proxy wasn’t ready yet.
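
If that behavior causes trouble for a particular workload (say, a container that genuinely must start before the proxy), it can be switched off per workload with an annotation; treat the exact key as an assumption to verify against your release. A sketch, with the Deployment and image purely hypothetical:

```yaml
# Sketch: opt one workload out of the wait-for-proxy startup ordering.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app                          # hypothetical workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
      annotations:
        linkerd.io/inject: enabled
        config.linkerd.io/proxy-await: "disabled"   # assumed opt-out knob
    spec:
      containers:
        - name: app
          image: ghcr.io/example/legacy-app:1.0     # hypothetical image
```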

Finally, as always, the Linkerd 2.11 team wants to keep Linkerd the lightest, fastest possible service mesh for Kubernetes. To ensure that’s still the case:

  • The control plane is down to just three deployments.
  • Linkerd’s data plane “micro-proxy” is even smaller and faster thanks to the highly active Rust networking ecosystem.
  • Kubernetes’ Service Mesh Interface (SMI) features have largely been removed from the core control plane and moved to an extension.
  • Linkerd images now use minimal “distroless” base images.

If the only thing the Linkerd crew had added was the new authorization policy, that would be reason enough for me to seriously consider using Linkerd. Add in the other improvements, and I think that if you find yourself needing a service mesh for your cloud native deployments, Linkerd should be your first choice.
