Citrix sponsored this post.
Relying on microservices to achieve faster release cycles, modularity, automated scaling and application portability is increasingly a hallmark of an organization far along in its digital journey. Less well understood, however, is that the resulting agility introduces a great deal of complexity for effective application delivery.
When it comes to ensuring the best experience for end users, the choice of proxy architecture and application delivery controller (ADC) truly matters. The architecture must provide the right level of security, observability, advanced traffic management and troubleshooting capabilities, and it must complement your open source tools strategy. It must also accommodate both north-south (N-S) traffic and inter-microservice east-west (E-W) traffic.
Load balancing for monolithic applications is straightforward. But the same cannot be said of the much more complex load-balancing needs for microservices-based applications.
This four-part series will evaluate the four proxy architectures for microservices-based application delivery against seven key criteria.
The Trade-Off Between Benefits and Complexity
Make no mistake: Microservices architectures are complex. Best practices are evolving rapidly with advances in technology, fueled by open source innovation. Different architectures offer unique benefits but also present varying levels of complexity. Often the decision comes down to a trade-off between desired benefits, such as security and observability, and the complexity required to achieve them. This is especially true when you consider the skill sets required to implement a particular architecture and the features you must add to ensure that all stakeholders’ needs are met.
The Balancing Act for Diverse Stakeholder Needs
Architecture choice is further complicated by the fact that different stakeholders care about different things, so the evaluation criteria are always different. Platform teams are the connective tissue in an organization on a microservices application journey, and they care about Kubernetes platform governance, operational efficiency and developer agility. DevOps teams care about faster releases, automation, canary testing and progressive rollouts. SREs are most concerned with application availability, observability and incident response. DevSecOps teams focus on application and infrastructure security and automation. NetOps teams are obsessed with network management, visibility, policy enforcement and compliance. The microservices application delivery architecture must balance all of their needs.
Choosing the right proxy architecture is no easy feat. In making any decision, it’s important to take the long view and assess architecture options using seven key criteria for both N-S and E-W traffic:
- Application security.
- Observability.
- Continuous deployment.
- Scalability and performance.
- Open source tools integration.
- Istio support for open source control plane.
- IT skillset required.
In doing so, organizations can ensure they are well positioned to deliver applications securely and reliably, now and in the future, and to provide a world-class experience that transforms their operations.
Considering the Architecture Options
When it comes to proxy architecture today, there are four options to consider:
- Two-tier ingress.
- Unified ingress.
- Service mesh.
- Service mesh lite.
For both the cloud native novice and the expert, two-tier ingress is the simplest and fastest proxy architecture for deploying production-grade applications. N-S traffic load balancing is split into two tiers, giving a clean demarcation between two admin domains: the platform team and the networking team. Inter-microservice (E-W) traffic load balancing uses the simple, open source L4 kube-proxy. Minimal training is required for the platform and networking teams, so both can move at their own speed. The two-tier ingress option offers great security, traffic management and observability for N-S traffic, but E-W traffic is not well covered.
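As a rough sketch of the second tier (names and the ingress class are illustrative, not from the article), the in-cluster proxy in a two-tier setup is typically driven by a standard Kubernetes Ingress resource owned by the platform team, while the networking team's external ADC (tier one) load balances N-S traffic to it:

```yaml
# Tier 2: in-cluster ingress proxy, managed by the platform team.
# Tier 1 (not shown) is an external ADC, managed by the networking team,
# that forwards N-S traffic to this proxy. E-W traffic still uses kube-proxy.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress            # illustrative name
spec:
  ingressClassName: citrix          # hypothetical; any ingress controller works
  rules:
  - host: shop.example.com          # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend          # illustrative microservice
            port:
              number: 80
```

Because each tier is a separate resource with a separate owner, the two teams can change their configurations independently.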
A step up from two-tier ingress, unified ingress is moderately simple to implement for networking-savvy platform teams. Unified ingress collapses the N-S proxies into a single tier, removing one hop of latency. Inter-microservice (E-W) traffic load balancing again uses the simple, open source L4 kube-proxy. Unified ingress is well suited to internal applications and offers the option to later add a web application firewall, SSL termination and support for external applications. As with two-tier ingress, it provides excellent security, traffic management and observability for N-S traffic, but E-W traffic is not well covered.
The most advanced and complex of the four architectures, service mesh has emerged only recently. A service mesh deploys a sidecar proxy alongside each microservice pod, enabling E-W traffic to be inspected and managed as it enters and leaves the pod. It therefore offers the highest levels of observability, security and fine-grained management for traffic among microservices. Select repetitive functions, such as encryption, can be offloaded from the microservices to the sidecars. Service mesh has a steep learning curve for platform teams, however, as it is a complex architecture.
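To illustrate the sidecar model, Istio, one common open source service mesh, can inject an Envoy sidecar proxy into every pod automatically when the pod's namespace carries the injection label (the namespace name below is illustrative):

```yaml
# Labeling a namespace for automatic sidecar injection (Istio example).
# Every pod subsequently created in this namespace gets an Envoy sidecar
# that intercepts its inbound and outbound (E-W) traffic.
apiVersion: v1
kind: Namespace
metadata:
  name: shop                  # illustrative namespace
  labels:
    istio-injection: enabled
```

This per-pod interception is what enables the fine-grained E-W observability and security described above, at the cost of one extra proxy container per pod.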
Service Mesh Lite
For those who want the added security, observability and advanced traffic management that a service mesh brings but prefer a simpler architecture, service mesh lite architecture is a viable alternative. Rather than employing a sidecar on each pod, a set of proxies is deployed inside the Kubernetes cluster (e.g., proxy per node) through which all inter-pod traffic flows. Service mesh lite requires minimal training for the platform and networking teams and offers an easy transition from two-tier ingress architecture.
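One way to realize the proxy-per-node variant of service mesh lite is a Kubernetes DaemonSet, which schedules exactly one proxy pod on each node; inter-pod traffic is then routed through that shared proxy instead of per-pod sidecars. This is a minimal sketch with an illustrative image name, not a specific vendor's deployment:

```yaml
# Service mesh lite, proxy-per-node: a DaemonSet runs one proxy pod on
# every node, and inter-pod (E-W) traffic is routed through it rather
# than through a sidecar in each application pod.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ew-proxy
spec:
  selector:
    matchLabels:
      app: ew-proxy
  template:
    metadata:
      labels:
        app: ew-proxy
    spec:
      containers:
      - name: proxy
        image: example.com/l7-proxy:latest   # hypothetical proxy image
        ports:
        - containerPort: 8080
```

With far fewer proxy instances than the sidecar model, this design trades some per-pod granularity for a much simpler deployment to operate.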
Stay tuned for a deep dive on two-tier ingress proxy and unified ingress architectures in Part 2 of this series.
Feature image via Pixabay.