Refactoring and modernizing applications brings its own challenges: the further you take an app toward modern architectures, the more complexity you accumulate. Getting applications to run on container platforms, and getting them to talk to each other and connect, is a necessary step on the path to a modular, flexible microservices architecture. But the flexibility of microservices also introduces complexity. That’s where the service mesh comes into play.
Service meshes offer the centralized control plane that enterprises require, while still enabling the free-wheeling style of agile, cloud-based application development. Think of a service mesh as a specialized Layer 7 network for microservices APIs. It offers authentication, authorization, security, and performance services to optimize the “east/west” traffic running between services. More importantly, it gives you a central point to apply these policies, rather than having to code all of this directly into the business logic of your applications.
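To make that central-policy idea concrete, here is a minimal Python sketch, not any real mesh's API, of a proxy layer that applies an authorization rule to east/west calls so the services themselves carry no policy code. The service names and the policy table are hypothetical:

```python
# Sketch of centralized east/west policy enforcement.
# Service names and the allow-list are hypothetical examples.

ALLOWED_CALLS = {
    ("checkout", "payments"),   # checkout may call payments
    ("checkout", "inventory"),  # checkout may call inventory
}

SERVICES = {
    "payments": lambda req: {"status": "charged", "amount": req["amount"]},
    "inventory": lambda req: {"status": "reserved", "sku": req["sku"]},
}

def mesh_call(src: str, dst: str, request: dict) -> dict:
    """Route a service-to-service call through the mesh's policy check."""
    if (src, dst) not in ALLOWED_CALLS:
        # Policy is enforced here, in the mesh layer, not inside
        # each service's business logic.
        return {"status": "denied", "reason": f"{src} may not call {dst}"}
    return SERVICES[dst](request)

print(mesh_call("checkout", "payments", {"amount": 42}))
# A disallowed pair is rejected by the mesh, never reaching the service:
print(mesh_call("inventory", "payments", {"amount": 42}))
```

The point of the sketch is the single choke point: changing who may call whom means editing one table, not redeploying every service.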
A Simple Service Mesh Analogy
A service mesh is like a city’s network of water pipelines. Your team controls the pipes, connects them as desired, and sets all of their flow controls. Data can pass through your systems, no matter the type or purpose, regardless of the ever-changing needs of the applications supported by the service mesh.
This traffic management can be done in a central location, where rules can be constructed to manage those interconnected data flows. Like a giant control room in the sky, you can water land in California when crops need the extra resources, or you can drain Miami if it’s currently soaked. And best of all, these actions can be automated and performed dynamically.
Service Mesh Is Your Ticket to Multi-Cloud
The service mesh is platform-independent, thanks to the fact that private and public cloud providers have settled on the de facto standards of Docker containers and Kubernetes orchestration. With these tools, building a service mesh in AWS does not preclude moving the system to Microsoft Azure, or forming a mesh within a vSphere private cloud.
This means your service mesh endpoints can run in any container-based architecture, and systems can even be architected to span clouds. Because service meshes track latency and performance metrics for every endpoint, they can route traffic intelligently across those environments, extending this portability into genuine cross-cloud service delivery.
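As a rough illustration of how tracked latency metrics enable cross-cloud routing, here is a hypothetical Python sketch (the endpoint names and latency figures are invented) that keeps a rolling window of observed latencies per endpoint and always picks the fastest one, regardless of which cloud hosts it:

```python
# Sketch of latency-aware endpoint selection across clouds.
# Endpoint names and latency samples are hypothetical.

from collections import deque
from statistics import mean

class LatencyRouter:
    def __init__(self, endpoints, window=5):
        # Keep a rolling window of latency samples per endpoint.
        self.samples = {ep: deque(maxlen=window) for ep in endpoints}

    def observe(self, endpoint, latency_ms):
        """Record a latency measurement, as a mesh proxy would per request."""
        self.samples[endpoint].append(latency_ms)

    def pick(self):
        """Choose the endpoint with the lowest average observed latency."""
        return min(
            self.samples,
            key=lambda ep: mean(self.samples[ep]) if self.samples[ep] else float("inf"),
        )

router = LatencyRouter(["aws-us-east", "azure-west-eu"])
for ms in (40, 42, 39):
    router.observe("aws-us-east", ms)
for ms in (95, 101, 98):
    router.observe("azure-west-eu", ms)
print(router.pick())   # routes to the faster endpoint: aws-us-east
```

Real meshes make this decision per request inside the sidecar proxy, but the principle is the same: measured performance, not static configuration, drives routing.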
Service Meshes Enhance Reliability and Visibility
Service meshes offer intelligent traffic routing that automatically recovers from network or service failures. This allows for full-stack problem tracing, and even for tracing interservice disruptions.
If a server stops responding, your service mesh reacts by culling it from the active, load-balanced pool and shunting it to a holding pool that is continually checked for viability. Once that server again responds within a reasonable time frame, it is automatically returned to the active load-balancing pool.
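The eject-and-reinstate behavior described above can be sketched in a few lines of Python. This is an illustrative toy, not a real mesh's health-checking implementation; the server names and timeout threshold are hypothetical:

```python
# Sketch of a mesh-style pool that ejects unresponsive servers and
# reinstates them once they respond quickly again.
# Server names and the 0.5s threshold are hypothetical.

class LoadBalancedPool:
    def __init__(self, servers, timeout=0.5):
        self.active = list(servers)   # servers receiving live traffic
        self.quarantined = []         # servers under viability checks
        self.timeout = timeout

    def record_response(self, server, latency):
        """Called after each request or health probe with the latency seen."""
        if latency > self.timeout and server in self.active:
            self.active.remove(server)
            self.quarantined.append(server)   # cull the slow server
        elif latency <= self.timeout and server in self.quarantined:
            self.quarantined.remove(server)
            self.active.append(server)        # reinstate once healthy

pool = LoadBalancedPool(["a", "b", "c"])
pool.record_response("b", latency=2.0)   # "b" times out and is ejected
print(pool.active)                       # ['a', 'c']
pool.record_response("b", latency=0.1)   # "b" recovers on a health probe
print(pool.active)                       # ['a', 'c', 'b']
```

Production meshes layer retries, outlier-detection windows, and gradual traffic ramp-up on top of this basic cull-and-reinstate loop.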
Service mesh data can be used to debug and optimize your systems by providing visibility into every facet of your service-level systems. That’s the microservices murky water problem, solved. Systems can be tweaked over time to expand capabilities and to address performance and stability needs.
Service Meshes Secure Inter-Service Communications
When your team rolls out a new version of an application, or moves a cluster for application hosting to a new datacenter, security teams generally need to reissue certificates and authorize new servers in the system. This can take time and effort, serving as a roadblock to pushing changes to production.
With a service mesh, the security around service-to-service communication is handled by the mesh, abstracting those concerns away from the application itself. The service mesh handles all of the restrictions on which services can talk to each other, which systems have access to which services, and which users can get through to which services. Thus, upgrading an application inside the mesh doesn’t require reallocation of security assets.
This also ensures your security concerns around the network and inter-service communications are independent of any of your internally developed business logic. If a security vulnerability arises in a network component, the service mesh can handle the changes around a security update, rather than re-architecting each application. This eliminates much of the downtime associated with security changes and updates.
Investigate Service Meshes for Large Microservices Environments
A service mesh comes with one (large) potential drawback: it adds containers. In fact, it roughly doubles them, since most service mesh implementations use a sidecar proxy, coupling one proxy instance with each container-bound microservice. For large deployments the benefits far outweigh the operating costs, but it does mean a service mesh is often overkill for small environments.
If you’re managing dozens or even hundreds of discrete microservices, consider a service mesh. For these large environments, it is the final missing piece of the cloud application puzzle, and the one that ties your entire estate together – whether inside the public cloud, inside your enterprise data center, or in a hybrid cloud implementation. With a service mesh in place, your team can trace problems, ensure service availability, and maintain proper distribution of your routing tables.
Feature image via Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.