When you ask experts in microservice architecture what the benefits are compared to monolithic design, one of the answers you’ll get has to do with load balancing. A microservice that’s aware of its role in the ecosystem should be quite capable of finding its own proper balance, you’ll be told, and Docker’s role is to facilitate microservices.
Maybe, but until the time that every application disassembles itself into millions of all-seeing, self-sensing “nanites,” Docker containers will contain just programs. The moment those programs become the least bit distributed in design — for example, when the database functions split off from the analysis functions — something will have to serve as the containers’ load balancer.
Upending the Reverse Proxy Decision
Up to now, one of the leading options has been HAProxy, which our Alex Williams admits would require an article much longer than this one to explain. Others prefer Nginx, which some say is simpler and has a good track record.
Last February, a company called Appcito launched an application delivery service called Cloud Application Front-End (CAFE), a kind of distributed application deployment and management service for AWS, with its own reverse proxy on the back-end. It’s an effort to provide a more sensible, graphical front-end, complete with analytics tools, for the deployment of distributed applications that may or may not include microservices.
Last week, taking the obvious next step, Appcito extended CAFE to support Docker.
“Once you start working with Docker,” wrote Appcito vice president for product and strategy Siva Mandalam, in a company blog post last Saturday, “you’ll quickly discover that there are not only management tasks associated with containers, but also application infrastructure services needed for ensuring that your microservices-based applications running inside containers are always available, secure and performing well.”
Given Docker users’ current preference for Nginx and HAProxy, should developers — intending their applications to use an arguably more sophisticated reverse proxy — start thinking about redesigning them? After all, there’s that microservices design ideal looming overhead that says apps should load-balance themselves.
I put this question to Siva Mandalam.
“HAProxy or Nginx deployment requires work. Changing configuration of proxies on an application basis requires more work,” he told The New Stack by e-mail.
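To get a sense of what that per-application work looks like, here is a minimal, hypothetical Nginx configuration fragment that balances one application across two Docker containers. The addresses, ports and names are illustrative only:

```nginx
# Hypothetical fragment of nginx.conf: load-balancing one
# application across two Docker containers.
upstream app_backend {
    least_conn;               # route each request to the least-busy container
    server 172.17.0.2:8080;   # app container 1
    server 172.17.0.3:8080;   # app container 2
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
    }
}
```

Every time a container is added, removed or rescheduled, a fragment like this has to be regenerated and the proxy reloaded — which is the "more work" Mandalam is pointing at.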
CAFE’s Active Agents
Appcito has created two classes of active agent for CAFE, each of which is assigned a duty at one end of the SDN topology. In the control plane there’s Barista, which Appcito describes as the centralized application services controller (ASC), where the built-in analytics functions reside. In the data plane, CAFE stations what are called Policy Execution Proxies (PEP, not to be confused with performance-enhancing proxies, which I believe were banned by Major League Baseball). These policies are crafted within Barista and then enforced by the PEPs.
“PEP proxies can be quickly deployed, reconfigured, or removed without changes to microservices or other components of the application stack,” explained Mandalam. “Quick onboarding and central management of granular application policies via a Web-based user interface significantly reduces administrative overhead.”
Mandalam described PEPs as being delivered in “an elastic, highly available, resilient cluster” that auto-scales in accordance with the workloads they support.
“PEPs are placed close to your application infrastructure to ensure low-latency,” he continued. “In addition, to ensure session continuity among all of these dynamic systems, Appcito CAFE persistently manages state for every session within the logical PEP (and across all PEP instances in all zones). This shared and persistent state store ensures zero down time and accelerated availability of the application. While provisioning CAFE, you don’t need to worry about capacity planning or high availability. Data path components of CAFE are always highly available. Also, they scale up and down as traffic to your application demands.”
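The shared, persistent session store Mandalam describes can be pictured with a short sketch. In the illustration below, a plain Python dictionary stands in for CAFE's replicated store, and a deterministic hash pins each new session to a backend; none of these names come from Appcito's actual API:

```python
import hashlib

# Minimal sketch of sticky-session routing backed by a shared state
# store, in the spirit of what Mandalam describes for CAFE's PEPs.
# A plain dict stands in for the replicated store; backend addresses
# are illustrative.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
session_store = {}  # session_id -> backend; shared by all proxy instances

def route(session_id):
    """Pin a session to one backend, so any proxy instance consulting
    the same store (or hashing the same way) makes the same choice."""
    if session_id not in session_store:
        # SHA-256 is deterministic across processes, unlike Python's
        # built-in hash() for strings.
        digest = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
        session_store[session_id] = BACKENDS[digest % len(BACKENDS)]
    return session_store[session_id]
```

Because the assignment lives in the shared store rather than in any one proxy's memory, a proxy instance can be removed or replaced without breaking sessions — the property Mandalam is claiming for the PEP cluster.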
How are application-specific policies crafted and maintained in CAFE, I asked Appcito’s Mandalam, and how does microservice architecture change the way those policies are implemented?
“Through Barista Application Services Controller UI,” he responded, “which provides policy framework for crafting and maintaining policies at the app level or at microservice level. The main change is the ability to offer more control over how service policies are applied at app level, including non-disruptive rolling upgrade of a service, for example; and dynamic control in that, when environment and infrastructure change (think public cloud), you need to dynamically learn and adjust policies to let your application meet performance criteria.”
Mandalam is indeed suggesting here that microservices should get a grip on the environments in which they run, in order to optimize their performance. Evidently, this is not because they should be designed for specific environments such as CAFE, but for precisely the opposite reason: because the nature, and even the identity, of those environments are subject to change.
Traffic at the Higher Level
Typical SDNs don’t accommodate much traffic steering at Layer 7 of the network, where the application resides. I asked Siva Mandalam how CAFE fills that gap, and he responded with this five-point list:
- Control of incoming traffic: rate-limiting at the container and object level at ingress to the PEP, and also limiting traffic based on app server capacity at the egress point.
- Non-stationary watermarks: learning file access time periodicity at multiple granularities: daily, weekly, monthly.
- Recommendations on policy and traffic steering based on analyzed traffic profiles: low-frequency/high-volume versus high-frequency/low-volume.
- Load balancing using the size of the data payload: (a) both the Barista ASC and the PEPs allow for an unlimited number of storage URLs and objects; (b) policies can be used to rate-limit traffic at the tenant, container or object level to minimize latencies.
- Balancing across unique clusters: auto-scaled or new containers of service engines are automatically discovered and load-balanced across clusters with default policies (these capabilities can be customized).
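Two of those points — payload-size-aware balancing and container-level rate limiting — can be sketched in a few lines. The classes and numbers below are illustrative, not Appcito's implementation:

```python
import time

class PayloadBalancer:
    """Pick the backend with the fewest outstanding payload bytes."""
    def __init__(self, backends):
        self.pending = {b: 0 for b in backends}  # outstanding bytes per backend

    def pick(self, payload_size):
        backend = min(self.pending, key=self.pending.get)
        self.pending[backend] += payload_size
        return backend

    def done(self, backend, payload_size):
        self.pending[backend] -= payload_size  # response delivered

class TokenBucket:
    """Per-container rate limiter: `rate` tokens/sec, burst of `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self, cost=1):
        now = time.monotonic()
        # Refill tokens for the time elapsed, up to the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

The point of the sketch is that both policies operate on request metadata (payload size, container identity) rather than on anything inside the microservice — which is why they can be reconfigured without touching the application.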
“Appcito CAFE provides fine grain analytics and integration with other platforms as well as improved introspection,” he added. “CAFE enables integration to access data from other analytics platforms, enabling enhanced drill down, data cross-correlation, and anomaly detection. It can provide visibility into top files/objects, top containers, total traffic in objects, bytes, containers, latency and root cause analysis of any anomaly detected.
“This integration enables comprehensive drill down capability for fast debugging and root cause analysis of errors.
“Appcito CAFE can also be integrated into frameworks for alerts and event-generation tools.”
The key benefit for DevOps and admins, Mandalam argues, is to free them to concentrate on their application. Novices in the IT department don’t need to learn Nginx’s or HAProxy’s configuration syntaxes in order to become competent with application deployment and management.
We asked Nginx’s head of products, Owen Garrett, to comment on Appcito’s assertion.
Garrett downplayed the notion that the use of a configuration file is somehow convoluted, noting that Nginx is used by nearly one-quarter of Web sites, according to this frequently updated chart.
“It’s common knowledge how to drive Nginx configuration using standard orchestration tools such as Puppet/Chef,” Garrett told The New Stack. “No proprietary APIs or ‘central management systems’ required. Users don’t want to use a GUI to configure the front-end proxy when they are operating a microservices platform at scale. They want to select the orchestration and deployment tools that suit their own infrastructure, and they need each product they use to interoperate easily with them.”
The Nginx executive conceded that the Appcito announcement is indicative of “a real hunger for application proxy devices to manage traffic to and within a Docker-based infrastructure. Best practices are changing rapidly as vendors and open source projects deliver solutions to these challenges.”
But then he went on: “Users would be wise to avoid making early decisions that lock them in to proprietary, closed approaches and should look to a combination of open source and vendor-supported open-core solutions to help them accelerate their deployments while leaving freedom to innovate as required.”
Here’s the part of the argument that remains to be settled in the course of history: If the developer becomes free to concentrate on design and architecture, as Appcito’s Mandalam suggests, then would the results of that concentration include microservices that load-balance themselves, as developers suggest? Or instead, would microservices begin expecting policies at the deployment level to sort out the traffic issues? In the latter case, Nginx and HAProxy could start looking outmoded.
Feature image via Flickr Creative Commons.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.