How the Service Mesh Redefines Cloud Native Computing
The ongoing move to cloud native computing has brought obvious changes and shifts in how DevOps teams manage application deployments. Key among the challenges are observability and monitoring, logging, routing and, of course, security. As a way to keep all of this under control, service meshes are increasingly seen not just as useful extra layers, but as a necessity for production pipelines, deployments and operations built on microservices and Kubernetes.
This was the main theme of this week’s The New Stack Analysts podcast, hosted by Alex Williams, founder and editor-in-chief of The New Stack, and co-hosted by Sriram Subramanian, founder and principal at CloudDon. They were joined by Instana‘s Mirko Novakovic, CEO and co-founder, and Michele Mancioppi, senior technical product manager. Instana offers microservices-focused application performance management software and services.
The ability to manage and monitor Kubernetes was, of course, a core part of the conversation. Back in 2015, when the need emerged for APM solutions to monitor microservices and containerized applications, “things like Kubernetes were not that popular yet,” Novakovic said.
“What happened is people moved, I would say, up the stack and they saw that an orchestration tool like Kubernetes is really important to them. And today, 70% of our customers are using Kubernetes out of 250 customers,” Novakovic said. “So, most of them are moving to an orchestration layer where you already have things like Envoy or Nginx as a proxy and I think what we also see right now is the next layer with other service meshes.”
Organizations are also moving “to a layer higher and are extracting the services they need like traffic routing to a layer above the orchestration layer, which is the service mesh such as Istio,” Novakovic said.
Service meshes are also increasingly used for multiple tasks, such as supporting deployment strategies with A/B testing or routing only part of the traffic between services, Novakovic said. “That’s why we implemented the tracing through these proxies because we got the demand of our customers that they need the visibility end to end,” Novakovic said. “Not only into the microservice but also understanding what these complex infrastructures like service meshes and Kubernetes orchestration layers are doing so if there’s an issue, they can see it inside of one complete trace.”
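The partial traffic routing Novakovic describes is typically expressed as weighted routes in the mesh’s configuration. As a minimal, hypothetical sketch (the service name `reviews` and the `v1`/`v2` subsets are illustrative, not from the podcast), an Istio VirtualService that sends 10% of traffic to a new version for A/B-style testing might look like:

```yaml
# Illustrative Istio VirtualService: split traffic 90/10 between
# two subsets of a hypothetical "reviews" service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews          # in-mesh service host
  http:
  - route:
    - destination:
        host: reviews
        subset: v1   # stable version receives most traffic
      weight: 90
    - destination:
        host: reviews
        subset: v2   # candidate version receives a small share
      weight: 10
```

Because the mesh’s sidecar proxies (Envoy, in Istio’s case) enforce these weights, the same proxies are also where end-to-end trace data can be captured, which is the visibility Novakovic refers to.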
Support for a wide range of proxies, of course, is critical. Mancioppi described how Instana supports the “established proxies,” such as Apache’s proxy, which are largely understood and adopted. “Envoy and Nginx are the ones we are starting with,” Mancioppi said, calling Envoy “the poster child in this Renaissance of reverse proxies.” While these two are “by far not the only ones” Instana supports, a range of other alternatives are covered as well, such as Traefik. “More are also likely to come in the future,” Mancioppi said.
Given the potential for tracing to increase computing loads, keeping monitoring overhead low is, of course, critical. “What we are seeing today is that you need a technology that [consumes] very low additional resources and also [adds] basically zero latency for observing and tracing,” Novakovic said.
In this Edition:
3:24: So, Instana just announced monitoring and tracing capabilities for NGINX and Envoy application proxies — how does that relate to these mushrooming complexities?
5:55: Exploring latency, data planes, and how Instana is handling that
10:48: Are those the two proxies you are servicing now? Or are you also including HAProxy in some manner?
18:46: What is it that you’re finding here at this intersection of the known and the unknown?
22:41: So as an application developer, what are the hooks I can have when I’m trying to enable tracing?
27:40: What are some of the patterns you’re starting to see in this work?
Feature image via Pixabay.