Service mesh is a fairly recent technology that emerged as the industry's use of Kubernetes, and of microservices in general, grew. The architecture enables secure and observable communication between independent services.
Initially, applications that were broken down into smaller containerized services had to handle internal and external communication securely on their own. The problem with this approach is that it required application developers to build an entire networking stack into their apps to deal with issues such as service discovery, routing, circuit breaking, load balancing, and security authorization.
The answer couldn’t come in the form of a library that a developer could simply import, because even smaller shops use multiple languages to build their apps.
What Is a Service Mesh?
The service mesh solved these initial issues by abstracting the networking components into a sidecar. Sidecars are utility containers that support a main container; they can be attached to applications and communicate with all the other sidecars on the network. A service mesh is an infrastructure layer that sits transparently on top of applications and provides capabilities such as security, observability, and traffic management without requiring changes to application code. It routes requests from one service to another and optimizes how they are handled along the way.
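The sidecar idea above can be sketched in plain Python, with no real networking: the application handler stays unchanged while a wrapper layer supplies authorization, metrics, and retries. All names here (`Sidecar`, `billing_service`) are illustrative, not part of any real mesh API.

```python
# A minimal sketch of the cross-cutting concerns a sidecar proxy adds
# around a service call, kept outside the application code itself.

class Sidecar:
    def __init__(self, service, allowed_callers):
        self.service = service            # the wrapped application handler
        self.allowed_callers = allowed_callers
        self.request_count = 0            # stand-in for proxy-level metrics

    def call(self, caller, payload, retries=2):
        # Security: authorization is enforced outside the application code.
        if caller not in self.allowed_callers:
            raise PermissionError(f"{caller} may not call this service")
        # Observability: every request is counted before it reaches the app.
        self.request_count += 1
        # Resilience: transient failures are retried transparently.
        for attempt in range(retries + 1):
            try:
                return self.service(payload)
            except ConnectionError:
                if attempt == retries:
                    raise

def billing_service(payload):
    # The application logic knows nothing about the mesh concerns above.
    return {"charged": payload["amount"]}

mesh_billing = Sidecar(billing_service, allowed_callers={"checkout"})
print(mesh_billing.call("checkout", {"amount": 42}))
```

In a real mesh these concerns live in a separate proxy process intercepting network traffic, so they work identically for services written in any language.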
Here are some fundamental functions of a service mesh:
Connection. Services can discover and communicate with each other through a service mesh. The flow of traffic and API interactions between services can be controlled through intelligent routing.
Monitoring. Through tools such as Prometheus and Jaeger, a service mesh can track and observe a distributed microservices system. Operators can observe API latencies, traffic flow, and the dependencies between services, which makes a service mesh vital in monitoring microservices.
Security. A service mesh ensures secure communication between services. Policies can be configured to allow or deny specific services or clients access to other services.
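The "intelligent routing" mentioned above often means splitting traffic between versions of a service by weight, as in a canary release. The sketch below models that in plain Python; the backend names and the `weighted_route` helper are hypothetical, not a real mesh API.

```python
import random

def weighted_route(routes, rng=random.random):
    """Pick a backend according to its traffic weight (weights sum to 1.0)."""
    r = rng()
    cumulative = 0.0
    for backend, weight in routes:
        cumulative += weight
        if r < cumulative:
            return backend
    return routes[-1][0]  # guard against floating-point rounding

# Send 90% of traffic to v1 and 10% to a canary v2.
routes = [("reviews-v1", 0.9), ("reviews-v2", 0.1)]
```

A mesh expresses the same split declaratively in routing rules, so operators can shift traffic between versions without redeploying either service.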
How Is a Service Mesh Implemented?
A typical service mesh can be divided into two parts: a data plane and a control plane. Here’s a brief distinction between the two:
Data plane: The data plane deals with the actual traffic from one application to another. Any networking aspects regarding the actual service requests — such as routing, forwarding, load balancing, authentication, and authorization — are part of the service mesh data plane.
Control plane: The control plane is the entity that connects the various data planes into a distributed network. This is the policy and management layer of the service mesh.
What Is a Wifi Mesh Network?
A mesh network is a local network topology in which nodes connect directly and dynamically to one another and collaborate to send data efficiently between the network and its clients. The independent nature of the mesh network enables each node to relay information.
A wifi mesh network is a set of components that leverage mesh technology to maintain smooth wifi performance. A router connects directly to the modem, and a series of satellite nodes or modules joins it to form a single network.
The Istio Service Mesh Package
Istio is one of the most popular service mesh packages. It is an open-source service mesh that layers transparently onto existing distributed applications. Istio provides a uniform way of monitoring, securing, and connecting services.
The Istio package is in itself a control plane, and it uses Envoy as its data plane. Envoy is a proxy that runs alongside each service on VMs or in clusters. Control planes program data planes: the Istio control plane reads the desired configuration and programs the Envoy proxies accordingly, pushing updates as it detects changes in the environment.
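The update behavior described above is essentially a reconcile loop: compare the desired configuration with what a proxy currently runs and push only when they differ. The sketch below is a simplified illustration of that pattern, not Istio's actual mechanism (which programs Envoy through its configuration APIs).

```python
def reconcile(desired, proxy_state, push):
    """Push the desired config to a proxy only if it has drifted.

    `push` is a callback that delivers config to the proxy; in a real
    mesh this would be a network call to the sidecar.
    """
    if proxy_state != desired:
        push(desired)
        return True        # an update was sent
    return False           # proxy already in sync

# Running this whenever the environment changes keeps every proxy
# converging toward the operator's desired state.
```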
Other Service Mesh Solutions Are Emerging
The Cloud Native Computing Foundation’s Linkerd, created by Buoyant, is another popular service mesh. It was recently rewritten to move its codebase from the heavyweight JVM to a more nimble combination of Go and Rust. Linkerd 2.0 was also designed to work more smoothly with the Kubernetes container orchestration engine (though, contrary to popular belief, a service mesh does not require Kubernetes to run).
In addition to new service mesh packages popping up, a number of network management software stacks have been extended to become full-fledged service mesh solutions, notably the Nginx application server, the Kong API gateway, and HashiCorp’s Consul. And because of the early success of the service mesh, there is a growing movement towards creating tools to manage multiple service meshes, including the Service Mesh Interface standardization effort and the Gloo software for service mesh federation.
The service mesh is a cloud-native technology, and we will be following its progress very closely at The New Stack. So bookmark this page for the latest trends and perspectives on this type of solution.