App developers must continually look for ways to improve applications as more people and companies use them. One recent development in this area is the service mesh, and one of the most innovative service mesh products on the market today is Linkerd.
Buoyant is the company behind Linkerd, and it has made improvements that render the service mesh even more appealing to businesses. For example, in September 2018, the company performed a total overhaul of Linkerd to make it faster and more geared toward developers and microservice owners than the previous version. It competes with a number of other service mesh packages in this emerging technology area, including the popular Istio. (Istio is often used in conjunction with Envoy, which provides the data plane capabilities to go with Istio’s control plane management.)
“Linkerd’s strengths are that it is lighter, faster, and simpler than any other service mesh for Kubernetes,” asserted William Morgan, CEO of Buoyant and one of Linkerd’s creators. “We’re laser-focused on giving you all the power of the service mesh model without the complexity. Because of this, Linkerd’s momentum has never been higher than it is today.”
“Our users see the complexity brought in by something like Istio, compare it to the amount of complexity they are willing to add to their system (which is basically ‘as close to zero as possible’), and end up very happy with Linkerd,” he says.
Developers can deploy Linkerd for a single service or app if desired. Salesforce and PayPal are among the brands that got on board with Linkerd and Buoyant in the early stages.
The World of Service Mesh
Before going into how a service mesh works, it’s necessary to discuss microservices. Microservices represent a newer approach to development in which an app is built as a collection of small services.
An app developer builds a microservice to serve a unique and well-defined purpose. The microservice also runs in its own process. As such, it’s possible to deploy, scale or upgrade any microservice without disrupting all the other ones associated with it. They are typically automated, allowing live updates that don’t negatively affect the end users.
In short, a microservice provides a specialized service in a compact unit and delivers it at the enterprise level.
“A service mesh is a layer of infrastructure that operates on the communication between microservices,” explains Morgan. “By intelligently monitoring and manipulating this traffic, a service mesh gives platform owners critical features for understanding, controlling, and securing their microservices in production, without needing to touch application code. As organizations move towards cloud native architectures, a service mesh like Linkerd is vital for ensuring they can scale this approach.”
The service mesh facilitates the incoming and outgoing communications between the microservices. It’s a dedicated infrastructure layer that provides an application-wide point of visibility into the runtime, as well as control over it.
The service mesh handles communications between the microservices, which in turn provide a complete service to end users.
It’s Easy to Install
Developers who choose Linkerd can look forward to a quick and hassle-free deployment process. It works out of the box with most applications and does not impose complex API or configuration requirements. Getting Linkerd’s control plane up and running is similarly straightforward: it installs in seconds into a single namespace. Then, people can add microservices to the service mesh as needed.
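As a rough sketch of that process, assuming the Linkerd 2.x CLI is installed and `kubectl` is pointed at a Kubernetes cluster, bringing up the control plane looks something like this:

```shell
# Verify the cluster meets Linkerd's requirements before installing
linkerd check --pre

# Render the control-plane manifests and apply them; the control
# plane lands in its own dedicated namespace
linkerd install | kubectl apply -f -

# Confirm the control plane came up healthy
linkerd check
```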
There’s no need to change the code for the microservices. Also, data proxies are ultralight and super-fast. That means they give the visibility app developers want without causing slowdowns.
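To illustrate the no-code-change point, meshing an existing service is a matter of injecting the proxy sidecar into its manifest and re-applying it. In this sketch, the deployment name `my-app` and namespace `apps` are placeholders:

```shell
# Fetch the existing deployment, add the Linkerd proxy sidecar to its
# pod spec, and re-apply it; the application code is untouched
kubectl get deploy my-app -n apps -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```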
Access to Crucial Metrics
App developers know how important it is to monitor what happens in the back end of an app. Failing to stay abreast of those things could mean the app goes down without warning. One of the handy things about Linkerd is that the software provides actionable insights about each app within the service mesh. It gives people access to what Google calls golden metrics, including success rate, request volume and latency.
Additionally, Linkerd provides a suite of deep runtime diagnostic tools. They include live traffic samples and automatic service dependency maps.
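For a sense of what that looks like in practice, the Linkerd 2.x CLI exposes both the golden metrics and the live traffic samples mentioned above (in later releases these commands moved under the `viz` extension, e.g. `linkerd viz stat`). The deployment name and namespace here are placeholders:

```shell
# Success rate, request volume and latency per deployment
linkerd stat deployments -n apps

# Stream a live sample of requests flowing to one deployment
linkerd tap deploy/my-app -n apps
```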
It Provides Additional Control
Linkerd came about when the team behind it gained experience operating large production systems at companies including Twitter and Microsoft. The individuals realized the most unusual and complex behavior in those environments came from the communications between the services rather than the services themselves.
Linkerd solves that problem by adding an abstraction layer on top of the apps that takes care of all the error-prone parts of cross-service communication. As a result of that enhanced control, the application’s code becomes more scalable and resilient. Since Linkerd runs as a standalone proxy, it does not require specific languages or libraries. Instead, users can choose whichever language is most suitable for their applications.
Because Linkerd separates communication mechanics from an application’s code, people can monitor or alter them without impacting the app itself, simply by using the software associated with the service mesh. Not surprisingly, a lot is going on under the hood with Linkerd.
The proxy does double duty: it applies routing rules and load balances across destination instances while also communicating with existing service discovery methods. At the same time, it reports metrics on all of that traffic.
Current Use Cases
How specifically do enterprises use Linkerd to meet their needs?
Recently, representatives from Attest, Nordstrom and Kairos participated in a live discussion about how they use Linkerd as part of an overarching application health strategy. It allows them to monitor Kubernetes applications and pick up on strange behavior well in advance of something catastrophic happening.
Also, Planet Labs depends on Linkerd for cluster-wide visibility into its global network of satellites. It monitors latency and service failures, then sends pager notifications that triage events according to urgency. Linkerd makes it easier to respond when service slowdowns or other issues happen, and it helps the company distribute resources by determining whether issues directly affect customers.
The examples here show how Linkerd service mesh gives developers a level of visibility that would otherwise be difficult or impossible to achieve. Thanks to that information, they can respond quickly when problems happen, or even spot characteristics and intervene before problems occur.
A Recent Substantial Investment
Although Buoyant has already made significant improvements to Linkerd, it now has even more resources to do so in the coming months. That’s because GV (formerly Google Ventures) joined existing investors Benchmark and A Capital, and the three collectively contributed $10 million to further Linkerd’s development.
Those working on Linkerd at Buoyant say their active and vocal user community helps them focus on what matters most to users and deliver it. This new investment will undoubtedly allow more of those user-friendly enhancements to happen.
“It’s still the early days for the service mesh and there’s a ton of hype,” Morgan admits, “so it can be difficult to understand the landscape.” However, it seems that Linkerd will be one of the defining platforms that bring service mesh to the public eye.
Performance and Perception
Concerning Linkerd, “it has some differences from other service mesh products or projects,” says Tom Petrocelli, a research fellow at Amalgam Insights. “Some are good, some not so much, and some are neutral.”
On the positive side of things, Petrocelli notes that Linkerd includes a control plane and data plane.
“This ensures that any changes in either are coordinated with each other,” he explains. “This is an advantage compared to Istio/Envoy which are two distinct and separate projects that don’t have to coordinate. Linkerd is a complete package but so is NGINX.”
“Technically,” Petrocelli points out, “the Linkerd 1.x branch had some problems that inhibited its uptake in the market. It was tied to the Java VM, had a big footprint for a sidecar proxy (100-150 MB per proxy) and high latency compared to other service mesh proxies.”
“The 2.x branch corrects these issues,” he says. “Unfortunately, the Linkerd reputation was determined by the 1.x branch and is often compared unfavorably to Envoy. Much of the negative view of Linkerd compared to rivals is a result of the version 1.x proxy. It’s not fair but that’s what it is.”
Petrocelli also pointed out that Linkerd is a Cloud Native Computing Foundation project. “This should be an advantage over Istio, which lacks true independent governance, and over proprietary or open core technology. Linkerd, however, hasn’t seen much advantage from this association. Many more vendors have taken up the Istio/Envoy projects. Envoy is also a CNCF project, which is a bit weird.”
“Altogether, Linkerd 2.x is technically on par with its rival projects and products,” Petrocelli sums up. “It suffers from lack of vendor support, especially compared to Istio/Envoy, with only Buoyant providing distributions. The reasons are more historical than technical but that’s the state of the market.”
Exciting Things Ahead for Buoyant
This overview presents some of the highlights that make Linkerd and Buoyant worth paying attention to in the months and years ahead.
Keeping apps up and running is crucial for helping companies achieve satisfaction among their users. Due to the visibility and control it offers, Linkerd can help companies that run microservices.
The Cloud Native Computing Foundation is a sponsor of The New Stack.