Rethinking Service Mesh with Application Traffic Management
When many customers say they need a service mesh, they’re not talking about an actual service mesh. What they’re really asking for is a better way to manage, observe, secure and scale applications composed of microservices. To put it another way, they don’t want a digger so that they can lay a foundation; they want a hole in the ground. How they get the hole is not nearly as important as the size and depth of the hole.
The lack of clarity about what’s actually needed is a sign that all the hype around Kubernetes and service mesh has drowned out the concept of first principles. The most important element of cloud native applications is the “last mile” delivery of application traffic to end users, whether those users are human customers, servers or IoT devices. For internal applications, the customers are usually other teams within the organization that own services composed of microservices.
In the broader technology space outside of Kubernetes and service mesh, responsibility for last-mile delivery falls to application delivery controllers (ADCs), a multibillion-dollar industry with dozens of players. ADCs are the most essential contributor to ensuring an excellent end-user experience of applications. In other words, an ADC — and its ability to manage, shape and optimize application (Layer 7) traffic — is the digger that delivers the hole in the ground. It’s the medium for delivering on first principles in application experience.
Within the Kubernetes space, however, ADCs and Layer 7 traffic management are largely missing.
In fact, despite the history and market validation of ADCs, application traffic management is perhaps the least developed element of the cloud native landscape. Kubernetes has traditionally focused on the network layer (Layer 4), with Layer 7 remaining an afterthought. This leaves platform ops teams to fend for themselves or use relatively untested solutions, even for mission-critical applications. So what can Kubernetes and service mesh architects do to up their game and deliver the robust application traffic management required to ensure a superior end-user experience? Here are three suggestions.
1. Deploy a Data Plane with Maximum Layer 7 Capabilities
While there are a variety of options available for transport networking in Kubernetes (think container network interfaces), there is little in the way of application traffic management for Layer 7. By attaching a data plane sidecar with rich features for both Layer 4 and Layer 7 traffic to your application containers, you can manage both network and application traffic effectively. The two types of traffic really need to be on equal footing for all the reasons we discussed above. A richer data plane allows you to focus on providing the Layer 7 features your architecture and composite applications need — security, observability, stability and resiliency — while also adding reliability and security at Layer 4. Just pushing packets at an acceptable pace is not enough. You wouldn’t accept only Layer 4 functionality outside of Kubernetes, so why accept it within the Kubernetes and service mesh landscape?
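As a minimal sketch of the sidecar pattern, a pod spec might attach an Envoy proxy alongside the application container. The image tags, ports, names and config paths below are illustrative assumptions, not a prescription for any particular mesh product:

```yaml
# Hypothetical pod: application container plus an Envoy data-plane sidecar.
# Image tags, ports, and ConfigMap names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders
spec:
  containers:
  - name: app
    image: example/orders:1.0              # your application container
    ports:
    - containerPort: 8080
  - name: envoy-sidecar
    image: envoyproxy/envoy:v1.29-latest   # Layer 4/7-capable data plane
    args: ["-c", "/etc/envoy/envoy.yaml"]  # bootstrap config: mTLS, retries, metrics
    ports:
    - containerPort: 15001                 # sidecar listener for proxied traffic
    volumeMounts:
    - name: envoy-config
      mountPath: /etc/envoy
  volumes:
  - name: envoy-config
    configMap:
      name: orders-envoy-config            # holds the Envoy bootstrap config
```

In practice a service mesh injects this sidecar and its traffic-redirection rules automatically; the point here is simply that the Layer 7 capabilities live in a proxy next to the app, not in the app itself.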
2. Architect Your Clusters for Applications, Not Tools
You’ve built your Kubernetes environment to host applications. Yet when designing and sizing clusters, we often ask “How many Ingress controllers will I need?” rather than “What type of supporting delivery services do my apps need?” Counting Ingress controllers rather than considering services is like designing a skyscraper based on how many people go through the front door, rather than what they need to do in their offices. Applications that must comply with industry standards — as in healthcare, financial services and government — might require specialized application services rather than generic ones. Even the same services can have radically different requirements in two different settings. For example, a data-streaming application is likely to have different availability and security requirements if it’s the backend for continuous monitoring of patient vital signs than if it’s tracking inbound weather data.
3. Consider Deploying ADCs as Part of Your Kubernetes Architecture
You’ve invested heavily in application security and resiliency everywhere else in your infrastructure — on-prem with traditional load balancers, virtualized with vADCs and in the cloud with frontend, cloud native load balancers. So why not invest in the same capabilities and infrastructure in Kubernetes and for your service mesh? You plan to use Kubernetes to deliver production applications, right? The last mile may become the last inch in a containerized environment, but traffic management is still critical. ADCs are optimized to deliver applications and embody decades of wisdom gained from shaping, accelerating and filtering traffic.
Conclusion: An ADC is Your Digger
Throughout their history, ADCs evolved to become the frontend traffic solution for every previous paradigm shift in computing infrastructure: bare-metal on-prem, co-lo top-of-rack, virtualization and finally cloud computing. Kubernetes and containerization are the latest iteration, but the need to effectively manage application delivery — to ensure a superior user experience and optimize the costs of running global-scale applications — remains unchanged.
In Kubernetes, you still need content acceleration and caching, SSL termination, web application firewalls and load balancing, and these capabilities work best as complementary features within a unified traffic management solution. The needs are the same as elsewhere in your infrastructure, just in a slightly different form factor, with requirements to accommodate more ephemeral and dynamic infrastructure. So consider fronting your Kubernetes cluster with cloud native ADCs to up-level application management and make service delivery smoother, more reliable and more secure.
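As one hedged example of what “fronting the cluster” can look like, the standard Kubernetes Ingress resource expresses TLS termination and host/path-based Layer 7 routing; which ADC or controller implements it is an architectural choice. Hostnames, the TLS secret and service names below are placeholders:

```yaml
# Hypothetical Ingress: TLS termination plus path-based Layer 7 routing at the edge.
# Hostnames, secret, and backend service names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront
spec:
  ingressClassName: example-adc      # whichever ADC/controller fronts the cluster
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-example-tls     # SSL/TLS terminated before traffic enters the mesh
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-gateway        # API traffic routed separately
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend       # everything else goes to the web tier
            port:
              number: 80
```

A full-featured ADC would layer caching, WAF policies and traffic shaping on top of this same routing surface, typically via controller-specific annotations or custom resources.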