Microservices have taken center stage in the software industry. Transitioning from a monolith to a microservices-based architecture empowers companies to deploy their applications more frequently, reliably, and independently, and to scale them with little hassle. That doesn’t mean everything is rosy in a microservices architecture; as with any distributed system, there are problems that need to be addressed. This is where the “service mesh” concept is getting pretty popular.
We have been thinking about breaking big monolithic applications into smaller applications for quite some time to ease software development and deployment. The chart below, borrowed from Burr Sutter’s talk “9 Steps to Awesome with Kubernetes,” illustrates the evolution of microservices.
Image source: Burr Sutter at Devoxx
The introduction of the service mesh was mainly due to a perfect storm within the IT scene. When developers began developing distributed systems using a multi-language (polyglot) approach, they needed dynamic service discovery. Operations were required to handle the inevitable communication failures smoothly and enforce network policies. Platform teams started adopting container orchestration systems like Kubernetes and wanted to route traffic dynamically around the system using modern API-driven network proxies, such as Envoy.
What Is a Service Mesh?
Agreed, microservices can decrease the complexity of software development in organizations, but as the number of microservices within an organization rises from single digits into the dozens or hundreds, inter-service complexity can become daunting.
Hence, a service mesh is a suitable approach for managing and controlling how the various parts of an application interact, communicate with each other, and share data. A service mesh is a dedicated infrastructure layer built right into an app. This infrastructure layer helps optimize communication and avoid downtime as the app grows.
Microservices pose challenges such as operational complexity, networking, communication between services, data consistency, and security. This is where service meshes come in handy: they are specifically designed to address these challenges by offering granular control over how services communicate with each other.
Service meshes offer:
- Service discovery
- Service networking
- Routing and traffic management
- Encryption and authentication/authorization
- Granular metrics and monitoring capabilities
- Rate limiting
- Circuit breaking
- Load balancing
- Distributed tracing
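To make one of these features concrete, here is a minimal, hypothetical sketch of circuit breaking in Python — the kind of logic a mesh sidecar applies transparently so that application code never has to implement it. The class name and thresholds are illustrative, not taken from any real mesh implementation.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open the circuit after a run of
    failures, then allow one trial call once a cooldown has passed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed: half-open, let one trial call through.
            self.opened_at = None
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Failing fast like this keeps one slow or dead service from tying up callers across the system; a mesh applies the same policy uniformly without touching application code.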
Below is a graph of the Google Trends data for the search term “service mesh.” As you can see, interest is trending steadily upward.
How Does a Service Mesh Work?
A service mesh mainly consists of two essential components: a data plane and a control plane. What a service mesh strives to do is make service-to-service calls within a microservices architecture fast, reliable, and secure. Although it is called a “mesh of services,” it is more accurate to call it a “mesh of proxies” that services plug into, completely abstracting the network away.
Image source: Glasnostic
In a typical service mesh, these proxies are injected into each service deployment as a sidecar. Rather than calling services directly over the network, services call their local sidecar proxy, which handles the request on the service’s behalf, thus encapsulating the complexities of the service-to-service exchange. The interconnected set of sidecar proxies implements what is known as the data plane. The components of a service mesh that are employed to configure the proxies and gather metrics are collectively known as the service mesh control plane.
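As a toy model of that division of labor, the sketch below (plain Python, with a callable standing in for the network hop; all names are illustrative) shows an application handing its outbound requests to a local “sidecar” object that retries transient failures and records metrics on its behalf — exactly the kind of policy a real control plane would push to every sidecar at once.

```python
import time

class Sidecar:
    """Toy stand-in for a sidecar proxy. The application never talks
    to the network directly; it hands each request to its sidecar,
    which retries failures and records metrics on its behalf."""

    def __init__(self, upstream, retries=2, backoff=0.0):
        self.upstream = upstream  # callable standing in for the remote service
        self.retries = retries    # policy the control plane would configure
        self.backoff = backoff    # seconds to wait between attempts
        self.metrics = {"attempts": 0, "failures": 0}

    def request(self, payload):
        last_error = None
        for attempt in range(self.retries + 1):
            self.metrics["attempts"] += 1
            try:
                return self.upstream(payload)
            except ConnectionError as err:  # transient, network-style failure
                self.metrics["failures"] += 1
                last_error = err
                time.sleep(self.backoff * attempt)
        raise last_error
```

The calling service only ever invokes `sidecar.request(...)`; changing retry budgets, timeouts, or encryption settings then becomes a control plane concern rather than an application change.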
Service meshes are meant to resolve the many hurdles developers encounter when communicating with remote endpoints. In particular, service meshes help applications running on a container orchestration platform such as Kubernetes.
Service Mesh Products
A service mesh is a great problem solver when it comes to managing your cloud applications. If you run applications in a microservices architecture, you are probably a good candidate for a service mesh. As an organization adopts a microservices architecture, its services tend to grow in number, and a service mesh lets you tame the complexity that comes with a large collection of microservices.
Some widely-used service mesh products include:
- Linkerd, released in 2016 and the project that introduced this category, is an open-source Cloud Native Computing Foundation incubating project primarily maintained and sponsored by Buoyant.
- Istio, released in May 2017, is an open-source project from Google, IBM, and Lyft.
- Consul Connect, released in November 2018, is an open-source software project stewarded by HashiCorp.
API Gateway vs. Service Mesh: Better Together?
While an API gateway can handle east-west (service-to-service) traffic, a service mesh is a better fit for it because a service mesh holds a proxy on both sides of the connection.
Similarly, even though a service mesh can handle north-south (client-to-service) traffic, an API gateway is regarded as a better fit for it because one side of the connection is beyond the service mesh’s administration.
North-south traffic typically involves end users, and an API gateway is much more focused on managing the end-user experience.
Image source: DZone
Companies can use an API gateway to offer APIs as a product to external or internal clients through a centralized ingress point and to administer and regulate their exposure. This is generally used when complex applications need to talk to each other.
Image source: Kong
Service meshes can be used to build secure and reliable L4/L7 traffic connectivity between all the services running in our systems, using a decentralized sidecar deployment pattern that can be adopted on every service. They are generally used to create point-to-point connectivity among all the services that make up the application.
Companies will often employ both a service mesh and an API gateway, using them simultaneously to complement each other. Learn more about service meshes in “Service Mesh Solutions: A Crash Course” by Melissa McKay.
Do You Really Need a Service Mesh?
The very generic and safe answer is, “it depends.”
It depends on the use case, the timing, how many microservices you are running, and a careful weighing of cost versus benefit.
Service meshes let the software platform do a lot of the heavy lifting for applications. They provide centrally managed, standardized infrastructure for the security, scalability, observability, and traffic management challenges that developers face.
If you are deploying your first, second, or third microservice, you probably don’t need a service mesh. Instead, proceed down the path of learning Kubernetes and employing it in your enterprise. There will come a tipping point where you will appreciate the need for a service mesh. As the number of microservices in your project increases, you will naturally become familiar with the obstacles a service mesh solves, which will help you prepare and plan your service mesh journey when the right time arrives.
Image source: NGINX
As the complexity of the application increases, implementing a service mesh becomes a realistic alternative to implementing capabilities service-by-service. This is very well explained in NGINX’s article “Do I Need a Service Mesh?”
By reducing the complexities involved in a microservices architecture, service meshes offer a wide variety of features and have emerged as a great DevOps enabler. They are becoming a must if you are adopting a cloud native approach, and their adoption shows no signs of slowing down.
While employing a microservices architecture, it is essential to ensure the storage and safety of artifacts such as binaries, container images, secrets, and metadata. Hence, it is highly recommended that you employ a robust artifact repository manager such as Artifactory, which can act as your Docker and Kubernetes registry for a smooth, worry-free microservices deployment process, and use that foundation to dive into a service mesh approach as your needs grow.
The Cloud Native Computing Foundation is a sponsor of The New Stack.
Feature image via Pixabay.