
Linkerd 2.0: The Service Mesh for Service Owners, Platform Architects, SREs

A close look at Linkerd 2.0, a revamped service mesh.
May 31st, 2019 3:00am by Janakiram MSV

The rise of microservices has brought new challenges, along with interesting solutions to address them. The service mesh is one of the technologies that has emerged as a core foundation of microservices infrastructure. After Kubernetes, which handles container orchestration, the service mesh has become the most critical technology for managing microservices deployed in production environments.

The current service mesh ecosystem is dominated by three open source projects: Istio, Linkerd, and Consul Connect.

Thanks to its backing from Google, Red Hat, and IBM, Istio enjoys the most attention from developers. Linkerd is transitioning from its JVM-based platform to a Kubernetes-optimized implementation. Consul Connect is designed as a generic service-to-service connection authorization and discovery mechanism.

Let’s take a closer look at Linkerd.

Linkerd 1.0: A Service Mesh Technology That Was Ahead of Its Time

Buoyant, the company behind Linkerd, released Linkerd 1.0 in 2017. It was clearly ahead of its time, attempting to solve most of the problems that microservices developers and operators face today.

Linkerd 1.0 was built on top of Netty and Finagle, a production-tested RPC framework used by high-traffic companies like Twitter, Pinterest, Tumblr, PagerDuty, and others. Its strong JVM roots made it a bit heavy to use with containers and microservices. But as a network proxy, Linkerd 1.0 was rock solid, delivering capabilities such as service discovery, circuit breaking, distributed tracing, and transparent proxying. It even had a Prometheus and Grafana plugin, linkerd-viz, to visualize key metrics.

Linkerd 1.0 could be deployed in a per-host or per-service model. In the per-host model, all the services running on a host/node route their traffic through a single Linkerd instance. The per-service model is based on the sidecar pattern, which is now very common among service mesh deployments. This flexibility allowed Linkerd 1.0 to work in a variety of environments, including bare metal, Amazon EC2, Docker, Kubernetes, and Mesosphere.

Unlike current service mesh platforms, Linkerd 1.0 had no separation between the control plane and the data plane. Each host or service running an instance of Linkerd had all the components encapsulated in one deployment unit.

Linkerd was the first service mesh project to become a part of the Cloud Native Computing Foundation.

Enter Linkerd 2.0: The Ultra Lightweight, Modern Service Mesh

In September 2018, Buoyant announced the availability of Linkerd 2.0, a service mesh written from the ground up for contemporary microservices. With a completely rewritten codebase, it carried none of its predecessor's legacy.

Linkerd 2.0 closely resembles Istio in its design and architecture. Like Istio, it has a cleanly separated control plane and data plane. The data plane consists of sidecar proxies that live alongside each service, while the control plane runs in its own context and manages the fleet of proxies that make up the data plane.
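To make the data-plane role concrete, here is a deliberately simplified sketch in Go of what a sidecar proxy does: accept inbound connections, forward them to the local application, and record basic telemetry. This is not Linkerd's actual proxy (which is written in Rust and also handles protocol detection, load balancing, and mTLS); the ports and structure below are illustrative assumptions only.

```go
// toy_sidecar.go: a minimal TCP forwarder that mimics the *idea* of a
// data-plane sidecar. It is NOT Linkerd's proxy; the ports are arbitrary
// assumptions chosen for illustration.
package main

import (
	"io"
	"log"
	"net"
	"time"
)

const (
	inboundAddr = ":4143"          // port the toy sidecar listens on (assumed)
	appAddr     = "127.0.0.1:8080" // local application port (assumed)
)

func main() {
	ln, err := net.Listen("tcp", inboundAddr)
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	log.Printf("toy sidecar forwarding %s -> %s", inboundAddr, appAddr)

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		go forward(conn)
	}
}

// forward pipes bytes between the caller and the local app and logs the
// connection duration -- a stand-in for the telemetry a real proxy emits.
func forward(client net.Conn) {
	defer client.Close()
	start := time.Now()

	app, err := net.Dial("tcp", appAddr)
	if err != nil {
		log.Printf("dial app: %v", err)
		return
	}
	defer app.Close()

	done := make(chan struct{}, 2)
	go func() { io.Copy(app, client); done <- struct{}{} }()
	go func() { io.Copy(client, app); done <- struct{}{} }()
	<-done // one direction finished; the deferred Closes unblock the other

	log.Printf("connection from %s handled in %s", client.RemoteAddr(), time.Since(start))
}
```

In a real mesh, traffic reaches the proxy transparently through iptables rules set up when the pod is injected, so the application remains unaware that the proxy exists.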

The creators of Linkerd 2.0 chose Rust and Go as its implementation languages. The proxy is written in Rust, while the control plane is written in Go. This combination delivers the required performance with a tiny footprint.

Unlike its predecessor, the current version of Linkerd doesn't spread itself too thin by attempting to support multiple environments. Instead, it supports only Kubernetes and is highly optimized for it. Support for other environments may come in the future.

Why Choose Linkerd 2.0 Over Other Service Mesh Technologies?

Personally, I am a big fan of Istio. It was the first service mesh I encountered, and I have implemented it in several projects. I also like the fact that it is becoming the foundation for many hybrid deployment scenarios, including VMs/containers, on-prem/cloud, and IaaS/CaaS. Some of the most exciting projects in the Kubernetes ecosystem, like Knative, are built on Istio.

Istio is a collection of independent technologies that work together to deliver the service mesh functionality. For example, Envoy exists as a standalone proxy that can be used outside of Istio's context. Pilot, one of the core components of the Istio control plane, is responsible for converting Istio's policy definitions into Envoy configuration. Similarly, Mixer, the endpoint that the Envoy proxies talk to, is implemented as a separate component. This mix and match of components makes Istio modular, but it also increases complexity and the effort needed to manage it.

Linkerd 2.0 is inspired by Istio’s design. But in comparison, Linkerd is lightweight, easy to install, and scales much faster.

Linkerd 2.0 doesn't have to deal with the heterogeneity that leads to complexity in Istio. Instead of Envoy, Linkerd 2.0 implements its own proxy that is tightly integrated with the control plane. The control plane is minimalistic, focusing on core aspects such as observability, security, and policy, and it embeds Prometheus and Grafana for collecting and visualizing key metrics.
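Because Prometheus ships with the control plane, proxy metrics can be pulled with a plain query against Prometheus's standard /api/v1/query HTTP endpoint. The sketch below is a minimal example of doing that from Go; the in-cluster service address and the request_total metric name are assumptions to verify against your own installation (for example, by port-forwarding the Prometheus pod and inspecting its targets).

```go
// query_metrics.go: read a proxy metric out of the control plane's embedded
// Prometheus. The service address and metric name are assumptions; check
// them against your own Linkerd installation.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

func main() {
	// Assumed in-cluster address of the Prometheus bundled with Linkerd.
	// When running outside the cluster, port-forward it first.
	prometheus := "http://linkerd-prometheus.linkerd.svc.cluster.local:9090"

	// Request rate seen by the data-plane proxies, grouped by namespace.
	// `request_total` is assumed to be one of the counters the proxy exports.
	query := `sum(rate(request_total[1m])) by (namespace)`

	resp, err := http.Get(prometheus + "/api/v1/query?query=" + url.QueryEscape(query))
	if err != nil {
		log.Fatalf("query failed: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read response: %v", err)
	}
	fmt.Println(string(body)) // raw JSON from Prometheus's standard query API
}
```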

Currently, Linkerd 2.3 includes telemetry, retries, timeouts, proxy auto-injection, and mTLS on by default with zero configuration. The next release will add traffic shifting for implementing blue/green deployments and canary releases, support for routing policies, and mesh expansion. Features such as circuit breaking and distributed tracing are also in the pipeline.
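A practical consequence for service owners is that application code can stay plain. The hypothetical Go service below carries no TLS, retry, or metrics code of its own; once its deployment is injected into the mesh, the sidecar proxies add mTLS and telemetry around it, and retries and timeouts can be applied through Linkerd's configuration rather than in the application. All names and ports here are made up for illustration.

```go
// plain_service.go: an ordinary HTTP service with no TLS, retry, or metrics
// logic. In a meshed deployment, those concerns are handled by the injected
// sidecar proxies, not by this code. The backend hostname is hypothetical.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		// Call a downstream service over plain HTTP; between meshed pods,
		// the proxies on both sides upgrade the connection to mTLS.
		resp, err := http.Get("http://backend.default.svc.cluster.local:8080/data")
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Fprintf(w, "backend said: %s\n", body)
	})

	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```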

The minimalistic approach of Linkerd 2.0 makes it an ideal service mesh for service owners, platform architects, and SREs. Features such as the live view of requests, the built-in topology graph of services, and service profiles are my favorites.

For your next microservices project, give Linkerd 2.0 a chance. You may be impressed by the simplicity, performance, and scale it delivers.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar on getting started with Google Coral Dev Kit and USB Accelerator.

The Cloud Native Computing Foundation is a sponsor of The New Stack.

Feature image by Hans Benn from Pixabay.
