Service mesh is a fairly recent technology that emerged as pressure mounted in the industry from the increasing use of Kubernetes and microservices. This architecture enables secure and observable communication between independent services.
When applications were first broken down into smaller containerized services, those services still had to communicate, both internally and with the outside world, and do so securely. This approach required application developers to add an entire networking stack to their apps to deal with issues such as service discovery, routing, circuit breaking, load balancing, and security authorization.
The answer could not come from a library that a developer would embed, because even smaller shops used multiple languages to build their apps, and each language would need its own implementation.
The service mesh solved these issues by abstracting the networking concerns into a sidecar: a utility container that runs alongside and supports the main container. A sidecar can be attached to each application and communicates with all the other sidecars on the network.
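The sidecar pattern described above can be sketched in a few lines of Python. This is a toy in-process model, not a real network proxy; the `Sidecar` class, the shared registry, and the service names are all illustrative assumptions, not any particular mesh's API.

```python
class Sidecar:
    """A toy model of a sidecar proxy: it sits between its application
    and the rest of the mesh, so the app never talks to peers directly."""

    def __init__(self, app_name, registry):
        self.app_name = app_name
        self.registry = registry          # shared view of the mesh
        self.registry[app_name] = self    # join the mesh

    def call(self, target_service, request):
        # Outbound traffic: the app hands the request to its own sidecar,
        # which discovers the target's sidecar and forwards to it.
        peer = self.registry[target_service]
        return peer.receive(self.app_name, request)

    def receive(self, source, request):
        # Inbound traffic: the sidecar receives on behalf of its app.
        return f"{self.app_name} handled {request!r} from {source}"


mesh = {}
Sidecar("orders", mesh)
Sidecar("payments", mesh)

# The orders app talks only to its local sidecar; the mesh does the rest.
reply = mesh["orders"].call("payments", "charge $10")
print(reply)
```

Because every sidecar goes through the shared registry, concerns like discovery and security live in one place instead of being reimplemented in each application's language.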
A service mesh is an infrastructure layer that sits transparently on top of applications and enables capabilities such as security, observability, and traffic management without requiring any code changes. It routes requests from one service to another and optimizes how they travel while executing tasks.
Here are some fundamental functions of a service mesh:
Connection. Services can discover and communicate with each other through a service mesh. The flow of traffic and API interactions between services can be controlled through intelligent routing.
Monitoring. Through monitoring tools such as Prometheus and Jaeger, a service mesh can track and observe a distributed microservices system. Operators can observe traffic flow, API latencies, and the dependencies between services, which makes a service mesh vital for monitoring microservices.
Security. A service mesh ensures secure communication between services. Policies can be configured to allow or deny specified clients access to certain services.
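The three functions above can be illustrated together in a minimal sketch. The routes, the policy table, and the service names here are made-up assumptions for the example; a real mesh would do this in a proxy, not in application code.

```python
import time

# Connection: a toy service-discovery table mapping a logical service
# to its concrete instances.
ROUTES = {"checkout": ["checkout-v1", "checkout-v2"]}

# Security: an allow/deny policy keyed by (caller, target).
POLICY = {("frontend", "checkout"): True,
          ("analytics", "checkout"): False}

# Monitoring: every call through the mesh leaves a record.
METRICS = []

def mesh_call(source, service, request):
    # Security check happens before any routing.
    if not POLICY.get((source, service), False):
        return "403 denied by mesh policy"
    # Intelligent routing would pick among instances; we take the first.
    instance = ROUTES[service][0]
    start = time.perf_counter()
    reply = f"{instance} handled {request!r}"
    # Record latency and traffic flow for the operators.
    METRICS.append((source, service, time.perf_counter() - start))
    return reply

print(mesh_call("frontend", "checkout", "buy"))   # allowed, routed, measured
print(mesh_call("analytics", "checkout", "buy"))  # denied by policy
```

The point of the sketch is that connection, monitoring, and security are all enforced in one shared layer, so no individual service has to implement them.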
A typical service mesh can be divided into a data plane and a control plane. Here’s a brief distinction between both:
Data plane: The data plane deals with the actual traffic from one application to another. Any networking aspects regarding the actual service requests — such as routing, forwarding, load balancing, authentication, and authorization — are part of the service mesh data plane.
Control plane: The control plane is the entity that connects the various data planes into a distributed network. This is the policy and management layer of the service mesh.
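The split between the two planes can be sketched as follows. This is a simplified model under assumed names (`ControlPlane`, `DataPlaneProxy`); it only illustrates that proxies forward traffic using configuration pushed down from the management layer.

```python
class DataPlaneProxy:
    """A toy proxy: it forwards traffic using only the config it was given."""

    def __init__(self, name):
        self.name = name
        self.config = {}               # whatever the control plane pushed

    def apply(self, config):
        self.config = dict(config)     # proxies never decide policy themselves

    def route(self, target):
        # The forwarding decision uses only local, pushed-down config.
        return self.config.get(target, "no-route")


class ControlPlane:
    """The policy and management layer: it holds desired state and
    propagates it to every registered proxy."""

    def __init__(self):
        self.proxies = []
        self.desired = {}

    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.apply(self.desired)

    def set_route(self, target, instance):
        # Policy changes happen here, then flow down to the data plane.
        self.desired[target] = instance
        for proxy in self.proxies:
            proxy.apply(self.desired)


cp = ControlPlane()
sidecar = DataPlaneProxy("orders-sidecar")
cp.register(sidecar)
cp.set_route("payments", "payments-v2")
print(sidecar.route("payments"))  # the proxy routes with the pushed config
```

The design choice this mirrors is that the data plane touches every request (so it must be fast and local), while the control plane touches only configuration (so it can be centralized).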
A mesh network is a local network topology in which nodes connect directly and dynamically to other nodes and cooperate to route data efficiently between the network and its clients. Because each node is independent, every node can relay information.
A wifi mesh network connects multiple components that leverage mesh technology to maintain smooth wifi performance. A router connects directly to the modem, and a series of satellite nodes or modules join it to form a single network.
Istio is one of the most popular service mesh packages. It is an open-source service mesh that layers transparently onto existing distributed applications. Istio provides a uniform way of connecting to, securing, and monitoring services.
The Istio package is a control plane, and it uses Envoy as its data plane. Envoy is a proxy that runs alongside each service on VMs or in clusters. Control planes program data planes: the Istio control plane takes the desired configuration and programs Envoy accordingly, updating it as it detects changes in the environment.
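The "updating as it detects changes" behavior above amounts to a reconciliation loop. The sketch below is a generic illustration of that idea, not Istio's actual implementation or API; the function and variable names are invented for the example.

```python
def reconcile(environment, proxy_configs):
    """Bring every proxy's endpoint table in line with the observed
    environment, returning the names of proxies that were reprogrammed."""
    changed = []
    for proxy, config in proxy_configs.items():
        if config != environment:
            # Copy the endpoint lists so later environment changes
            # are detected as a difference on the next pass.
            proxy_configs[proxy] = {svc: list(eps)
                                    for svc, eps in environment.items()}
            changed.append(proxy)
    return changed


environment = {"payments": ["10.0.0.5"]}        # what is actually running
proxy_configs = {"envoy-1": {}, "envoy-2": {}}  # what each proxy believes

print(reconcile(environment, proxy_configs))  # both proxies get fresh config
environment["payments"].append("10.0.0.6")    # a new instance appears
print(reconcile(environment, proxy_configs))  # the change is detected and pushed
```

Running the loop repeatedly is what keeps the data plane converged on the desired state even as services come and go.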
The Cloud Native Computing Foundation’s Linkerd, managed by Buoyant, is another popular service mesh. It was recently rewritten, moving its codebase from a heavyweight JVM-based implementation to a more nimble combination of Go and Rust. Linkerd 2.0 was also designed to work more smoothly with the Kubernetes container orchestration engine (though, contrary to popular belief, a service mesh does not require Kubernetes to run).
In addition to new service mesh packages popping up, many network management software stacks have been extended to become full-fledged service mesh solutions, notably the Nginx application server, the Kong API gateway, and HashiCorp’s Consul. And because of the early success of the service mesh, there is a growing movement toward creating tools to manage multiple service meshes, including the Service Mesh Interface standardization effort and the Gloo software for service mesh federation.
The service mesh is a cloud-native technology, and we follow its progress closely at The New Stack. So bookmark this page for the latest trends and perspectives on this type of solution.