
Part 3: The Best Way to Select a Proxy Architecture for Microservices Application Delivery

16 Jan 2020 1:25pm, by Pankaj Gupta

Citrix sponsored this post.

Pankaj Gupta
Pankaj is senior director of cloud native application delivery solutions at Citrix. Pankaj advises customers on hybrid multicloud microservices application-delivery strategies. In prior roles at Cisco, he spearheaded strategic marketing initiatives for its networking, security and software portfolios. Pankaj is passionate about working with the DevOps community on best practices for microservices- and Kubernetes-based application delivery.

This article is the third in a four-part series on evaluating proxy architectures for the delivery of microservices-based applications. The first article provided an overview of evaluation criteria and a summary of various architectures. The second article was an analysis of two-tier ingress proxy and unified ingress architectures. This article will focus on service mesh architecture.

Service mesh is the newest and most modern of these architectures. Its popularity has exploded recently because it offers the best observability, security and fine-grained management for traffic among microservices — that is, for east-west (E-W) traffic. However, it is a complex architecture and may or may not be right for your organization.

Here are a few things to consider:

A typical service mesh architecture is similar to the two-tier ingress proxy architecture for north-south (N-S) traffic and has the same rich benefits outlined in my previous post.

The key difference between service mesh and two-tier ingress, and where most of the value lies, is that service mesh employs a lightweight application delivery controller (ADC) as a sidecar on each microservice pod to handle E-W traffic. Microservices do not communicate directly; communication happens via the sidecars, which enables inter-pod traffic to be inspected and managed as it enters and leaves the pods.

By using proxy sidecars, service mesh offers the highest levels of observability, security and fine-grained traffic management and control among microservices. Additionally, select repetitive microservice functions like retries and encryption can be offloaded to the sidecars. Although each sidecar is assigned its own memory and CPU resources, sidecars are typically lightweight.

For the sidecar, you can choose an open source proxy like Envoy or a vendor solution like Citrix CPX. Sidecars, which are managed by the platform team and attached to each pod, create a highly scalable and distributed architecture, but they also add considerable complexity because they introduce many more moving parts.

Let’s evaluate the service mesh proxy architecture against the following seven criteria that are top of mind for various stakeholders across the organization.

Application Security

Sidecars offer the best security for E-W traffic among microservices. Essentially, every API call between microservices is proxied via the sidecars for better security. Authentication among microservices can be enforced. Policies and control can be set to prevent misuse. Traffic among microservices can be inspected to check for any security vulnerabilities.

Additionally, encryption can be mandated for communication among microservices, and encryption functions can be offloaded to sidecars. And to prevent microservices from being overwhelmed and failing, traffic among microservices can be rate limited. For example, if a microservice can only handle 100 calls per second, a rate limit can be set.
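The rate limiting described above is commonly implemented in the proxy as a token bucket: each request consumes a token, and tokens refill at the configured rate. Here is a minimal sketch of that algorithm, not any particular sidecar's implementation:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter, sketching what a sidecar applies per upstream."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second (allowed calls/sec)
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request proceeds to the microservice
        return False      # request is rejected (e.g. HTTP 429)

# A microservice limited to 100 calls per second:
bucket = TokenBucket(rate=100, capacity=100)
results = [bucket.allow() for _ in range(150)]
# Roughly the first 100 of a sudden burst pass; the rest are rejected
# until the bucket refills.
```

In a service mesh this logic lives in the sidecar, so no application code changes are needed to enforce or tune the limit.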

With service mesh, the security for N-S traffic is excellent and on par with that offered by two-tier architecture. For applications with strict regulatory or advanced security requirements, such as those deployed by the finance and defense industries, service mesh architecture is the best choice. Bottom line: Service mesh provides excellent security for both N-S and E-W traffic.

Observability

Service mesh offers the best observability for E-W traffic among microservices because all inter-pod traffic is visible to the sidecars. Telemetry from sidecars can be analyzed by open source or vendor-provided analysis tools to get better insights for faster troubleshooting or capacity planning. N-S traffic observability is excellent with service mesh architecture and on par with the level provided by two-tier ingress proxy architecture. Bottom line: Service mesh provides excellent observability for both N-S and E-W traffic.
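To make the telemetry point concrete: most sidecars expose their counters in the Prometheus text exposition format, which monitoring tools scrape and analyze. The sketch below parses a small sample of such output; the metric names are illustrative, not the exact names any particular proxy emits:

```python
def parse_prometheus_text(text):
    """Parse simple Prometheus exposition-format lines into {metric: value}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        name, _, value = line.rpartition(" ")  # value is the last field
        metrics[name] = float(value)
    return metrics

# Sample telemetry a sidecar might expose (names are made up for illustration):
sample = """
# TYPE upstream_rq_total counter
upstream_rq_total{cluster="reviews"} 1042
upstream_rq_time_ms{cluster="reviews",quantile="0.99"} 87.5
"""
stats = parse_prometheus_text(sample)
```

In practice you would point Prometheus at the sidecars' metrics endpoints and graph the results in Grafana rather than parse by hand, but this is the data those tools work from.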

Continuous Deployment

With service mesh, advanced traffic management for continuous deployment, such as automated canary deployments, progressive rollouts, blue-green deployments and rollbacks, is supported for both N-S and E-W traffic. Unlike kube-proxy, sidecars have advanced APIs that enable them to integrate with CI/CD solutions like Spinnaker.
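At its core, a canary deployment is weighted traffic splitting: the proxy sends a small share of requests to the new version and the rest to the stable one. A minimal sketch of that routing decision, with made-up service names:

```python
import random

def pick_version(weights, rnd=random.random):
    """Weighted routing: return a version according to its traffic share."""
    r = rnd() * sum(weights.values())
    for version, weight in weights.items():
        r -= weight
        if r < 0:
            return version
    return version  # fallback for floating-point edge cases

# 90% of requests to the stable version, 10% to the canary.
routes = {"reviews-v1": 90, "reviews-v2": 10}

random.seed(7)
counts = {"reviews-v1": 0, "reviews-v2": 0}
for _ in range(1000):
    counts[pick_version(routes)] += 1
# counts show roughly a 90/10 split
```

An automated canary then shifts the weights step by step (10/90, 50/50, 100/0) while a CI/CD tool watches error rates, rolling back by resetting the weights if the canary misbehaves.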

Bottom line: Service mesh provides excellent continuous deployment capabilities for both N-S and E-W traffic.

Scalability and Performance

Service mesh is highly scalable for E-W traffic because it is a distributed architecture. It also helps to scale features like observability, security and advanced traffic management and control. There is also the added benefit of moving repetitive functions from microservices to the sidecars; retries, circuit breaking and encryption are excellent candidates.

Performance depends on the choice of sidecar, because throughput and latency vary among sidecar vendors. Since E-W traffic is proxied by the sidecars, each inter-pod request incurs two extra hops, which increases overall latency. Using the Istio control plane adds further latency: one more hop to Istio Mixer, which provides policy enforcement. Running a sidecar on every pod also requires memory and CPU, and that can add up very quickly across hundreds or thousands of pods.

Service mesh offers excellent N-S traffic scalability and performance that is on par with two-tier ingress proxy architecture.

Bottom line: Service mesh offers excellent scalability and performance for N-S traffic and while it is excellent for E-W traffic as well, watch out for latency impact and CPU/memory requirements that increase linearly with pod count.

Open Source Tools Integration

ADCs for N-S traffic and sidecars for E-W traffic both integrate with popular open source tools like Prometheus, Grafana, Spinnaker, Elasticsearch, Fluentd and Kibana for data collection, monitoring, analysis and CI/CD. Most sidecars have extensive APIs for integration with various tools.

Bottom line: Service mesh offers excellent open source tools integration for both N-S and E-W traffic.

Support for the Istio Open Source Control Plane

ADCs for N-S traffic and sidecars for E-W traffic both integrate well with Istio, an open source control plane. Be aware that Istio adds the latency of one extra hop to Istio Mixer, which provides policy enforcement for E-W traffic.

Bottom line: Istio integration is supported for N-S as well as E-W traffic.

Required IT Skill Set

Service mesh is extremely complex. Managing hundreds or thousands of sidecars can be a big challenge, and this new distributed proxy architecture poses a steep learning curve for IT. The main challenge for the platform team is managing so many moving parts: they have to get a handle on the latency and capacity requirements, and they have to be able to troubleshoot problems in any number of distributed proxies as well as in data plane and Istio control plane components. It doesn’t help that the technology is new — the well of knowledge is still shallow. And there is a talent shortage, too.

Bottom line: The platform team will need to step up and invest in learning, because implementing a service mesh architecture is complex and becomes even more so as scale increases.

Service mesh proxy architecture is the most advanced architecture for security, observability, fine-grained traffic management, open source tools support and Istio integration for N-S and E-W traffic, and it is a suitable choice for deploying ultra-secure microservices. But it comes with challenges. If you are not prepared to manage a large number of sidecars and the added latency, or lack the resources for a very complex implementation with a steep learning curve, it may not be right for your organization.

The next article will focus on the “service mesh lite” architecture, which provides service mesh-like benefits but is much easier to implement and manage. Stay tuned.

