Buoyant Advances Conduit Service Mesh with Prometheus Integration
Linkerd, an incubation project of the Cloud Native Computing Foundation (CNCF), provides a separate communication layer for cloud-native applications. It handles aspects such as service discovery, load balancing, failure handling, instrumentation, and routing for services, totally decoupling this communication layer from the application code.
At this week’s event in Copenhagen, there will be around seven talks by production users of Linkerd, including Edward Wilde, platform architect at payment processor Form3; Israel Sotomayor, infrastructure engineer at eCommerce API vendor Moltin; and Oliver Beattie, head of engineering at online bank Monzo. Representatives of online music platform SoundCloud, software vendor BigCommerce, social media monitoring company Brandwatch and others will be presenting lightning talks during the Linkerd Deep Dive session.
“On the Linkerd side, the snowball’s just rolling down the hill and picking up speed,” said William Morgan, Buoyant CEO. Companies such as Salesforce, Expedia, Planet Labs and WePay have contributed code to the latest release.
“A few brave souls” also are using Conduit in production, he said, but aren’t ready to talk about it yet.
“Conduit is no longer in pre-pre-alpha. We’re now officially in alpha. It’s no longer just an experiment,” he said.
“Linkerd is really powerful, but it takes a while to really get your hands dirty with all the configuration options. We wanted to take this idea to the extreme: What if you had to do nothing? What if you just had to install it?” Morgan said.
The new release has a telemetry pipeline built from the ground up on top of Prometheus. It provides top-line service dashboards without requiring any configuration.
“Our focus has been not only to make it easy for you to understand your services running on Kubernetes, but to dig in and debug them when things go wrong. We have a bunch of tools in the Conduit ecosystem that allow you to inspect the top-line metrics on a per-service basis, what the request paths look like between services and into the requests themselves, without you having to do any kind of modification of your code or any substantial modification of how you’re running things on top of Kubernetes. So it doesn’t matter what protocol the application is speaking, what language it’s written in. If it’s running on Kubernetes, it can just give you all this power.”
One of the tools lets you slice and dice traffic: not just what’s happening with a given service, but how that service is being called by other services and, conversely, how it calls other services. The interesting figure is not the overall success rate, but the success rate per caller or callee. In microservices, that tends to be a hidden indicator of potentially problematic behavior, he said.
A 75 percent success rate tells you something has gone wrong, but not where: it could be a problem in the service itself or in any of the seven things it is calling. By breaking latencies, request volumes and success rates out by caller or callee, you can start tracing the call chain without resorting to something like distributed tracing, which typically involves code modification.
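The per-caller breakdown described above can be sketched in a few lines of Python. The record shape and the service names (`web`, `batch`, `orders`) are hypothetical stand-ins for the per-request data Conduit’s proxies report, not its actual data model:

```python
from collections import defaultdict

# Hypothetical request records: (caller, callee, success) tuples.
requests = [
    ("web", "orders", True), ("web", "orders", True),
    ("web", "orders", True), ("web", "orders", True),
    ("batch", "orders", False), ("batch", "orders", False),
    ("batch", "orders", True), ("web", "orders", True),
]

def success_rates_by_caller(records, callee):
    """Break one service's success rate out per calling service."""
    totals, successes = defaultdict(int), defaultdict(int)
    for caller, svc, ok in records:
        if svc != callee:
            continue
        totals[caller] += 1
        if ok:
            successes[caller] += 1
    return {c: successes[c] / totals[c] for c in totals}

# Overall, 6 of 8 requests to "orders" succeeded — 0.75. That alone
# doesn't say where the failures come from.
overall = sum(ok for _, svc, ok in requests if svc == "orders") / len(requests)
print(f"overall: {overall:.2f}")

# Broken out per caller, the problem localizes to the "batch" path.
for caller, rate in success_rates_by_caller(requests, "orders").items():
    print(f"from {caller}: {rate:.2f}")
```

Run against the sample data, the overall rate is 0.75 while `web` sits at 1.00 and `batch` at 0.33 — the failing path is immediately visible without any tracing instrumentation.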
The other tool allows you to inspect requests as they happen: “Show me all requests live from service A to service B.” “Show me ones that are just returning errors.” “Show me ones that have a latency in this range.” You can use these building blocks to build the debugging and inspection toolkit you need.
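The filter-composition idea behind those queries can be sketched as a generator over a request stream. The field names (`src`, `dst`, `status`, `latency_ms`) and the `tap` function are illustrative assumptions, not Conduit’s actual interface:

```python
# Hypothetical tapped requests from a live stream.
taps = [
    {"src": "web", "dst": "orders", "status": 200, "latency_ms": 12},
    {"src": "web", "dst": "orders", "status": 500, "latency_ms": 3},
    {"src": "batch", "dst": "orders", "status": 200, "latency_ms": 480},
    {"src": "web", "dst": "payments", "status": 503, "latency_ms": 7},
]

def tap(stream, src=None, dst=None, errors_only=False, min_ms=None, max_ms=None):
    """Yield only the requests matching every given predicate."""
    for r in stream:
        if src is not None and r["src"] != src:
            continue
        if dst is not None and r["dst"] != dst:
            continue
        if errors_only and r["status"] < 500:
            continue
        if min_ms is not None and r["latency_ms"] < min_ms:
            continue
        if max_ms is not None and r["latency_ms"] > max_ms:
            continue
        yield r

# "Show me requests from web to orders that are returning errors."
print(list(tap(taps, src="web", dst="orders", errors_only=True)))

# "Show me requests with latency in the 100-500 ms range."
print(list(tap(taps, min_ms=100, max_ms=500)))
```

Each keyword argument is one predicate; combining them composes the kind of ad-hoc debugging queries the quote describes.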
Version 0.4.1 also includes preconfigured Grafana dashboards for every Kubernetes deployment and improved, sub-millisecond p99 latency.
Buoyant is offering commercial support for both projects.
“The goal is not so much around technology, but how to make someone successful with microservices, adopting the cloud-native world,” Morgan said. “The technology itself is meaningless without the context around it, of people who understand it and know how to operate it and are building this ecosystem of usability around it.”
Feature image via Pixabay.