Avi Networks: Microservices Can’t Be Automated with Monolithic Tools

An application delivery controller is one of those classes of networking devices that, in recent years, have found themselves virtualized — converted from a device or a set of devices into software. Manufacturers such as F5 Networks, Kemp Technologies, and Barracuda Networks produce ADC appliances for data centers. The virtualized variety of ADC promises to make data centers more flexible and networks more adaptable to the workloads they host. If you’re a regular reader of The New Stack, you know about perhaps the market’s fastest-growing software-based ADC, even if you don’t think of it as such: NGINX Plus.
Avi Networks, a producer of software-based ADCs, is now making the case that a delivery controller specially tuned for a microservices environment could oust not only NGINX from its exalted position, but also configuration management systems as we have come to know them.
As Guru Chahal, Avi’s vice president for product, explained to The New Stack in an interview, configuration management no longer makes sense in microservices environments, at least not when its primary objective is to configure applications rather than the microservices themselves.
Breakout
Developers are winning the argument in favor of microservices and decomposing monoliths, Chahal said, because services in this new model are more manageable, independently upgradable, and faster to market. “Then as it comes into production with IT operations, it hits this tool-chain wall, where the tooling today is these big boxes that have to be configured one-by-one. You throw the application over the fence, and of course, IT goes out and configures these boxes.”
“And what is really required,” he continued, “is each microservice’s ability, as it gets deployed, to automatically request and get a certain set of services from the underlying infrastructure. Those services include application delivery, real-time monitoring, security, and so on, without any interaction between the microservices developers and the underlying IT operations teams.”
Chahal offered an example involving an application consisting of about 20 services that exchange requests and responses with one another in real time. Each of these services is orchestrated and scheduled within a system that includes Mesos and Marathon. For each of these services to scale itself in a properly constructed and secure system, Chahal argues, it requires load balancing, firewall, and performance monitoring resources. (In fairness, NGINX does make use of an open source Web application firewall called Naxsi.)
“That’s your tool-chain wall,” he argued. “What happens is, as soon as they get deployed, IT has to go out, log into a box somewhere, configure 20 load-balancers for those 20 servers, configure 20 security parameters, 20 monitoring systems, and so on. And the lack of automation, and this legacy, appliance-based model, [form] this big chasm, that big wall.”
Developers would prefer, said Chahal, for the system to detect what resources the microservice needs, and how much it requires, at the time of deployment — on demand. “The principles for why I’m going from a monolith to a microservice are the principles that need to be applied to the underlying infrastructure. So there’s no ‘hurry-up-and-wait’ scenario in IT, where the developers are making applications in microservices architectures, and waiting for the tool chain to catch up.”
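What that looks like in practice, under a scheduler such as Marathon, is a deployment request that declares the infrastructure services the microservice expects, rather than a ticket thrown over to IT. The Python sketch below is purely illustrative and assumes a hypothetical Marathon endpoint and invented label names; which labels a services fabric actually honors depends on the ADC or load-balancing layer watching the scheduler.

import json
import urllib.request

# Hypothetical Marathon endpoint; a real deployment would use its own URL.
MARATHON_APPS_URL = "http://marathon.example.com:8080/v2/apps"

# The app definition declares, via labels, the infrastructure services this
# microservice expects: load balancing, health monitoring, a firewall policy.
# A services fabric watching the scheduler can provision them at deployment
# time, rather than IT configuring an appliance by hand afterward. The label
# names here are invented for illustration, not any vendor's actual schema.
app_definition = {
    "id": "/shop/checkout",
    "cpus": 0.5,
    "mem": 256,
    "instances": 3,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/checkout:1.4.2"},
    },
    "labels": {
        "lb.virtualservice": "checkout",
        "lb.healthcheck.path": "/healthz",
        "monitoring.enabled": "true",
        "firewall.policy": "internal-only",
    },
}

request = urllib.request.Request(
    MARATHON_APPS_URL,
    data=json.dumps(app_definition).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print("Marathon accepted the deployment:", response.status)

If a controller subscribed to the scheduler’s events creates the virtual service, health checks and metrics collection for each new instance, scaling from three instances to thirty requires no further human intervention, which is the on-demand behavior Chahal is describing.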
Advocates of IT-driven CI/CD make the case for automating the deployment process by breaking it down into small groups of repeatable steps. This process, they say, standardizes the organization’s approach to delivering software to production, while at the same time making the integration process more rigorous, and enforcing standards and practices.
Guru Chahal is making an extraordinary counter-argument: Repeatable processes based on deploying monoliths or parts of monoliths, he says, do not apply in microservices environments where workloads are fluid and services are continually evolving. So any rigorous attempt to automate a microservices deployment process, be it with a long script or a short pipeline, falls out of step with an ever-changing workload, and does so over shorter and shorter periods of time.
If you follow Chahal’s argument to its conclusion, he’s actually saying that IT automation as we’ve come to think of it does not actually automate microservices at all. In a way, it resists them.
“The issue here is, you can have a certain set of steps that are configuring the same hardware appliance over and over,” declared Chahal. “But the problem is, those hardware appliances are static assets that are not spun up on-demand.” Typically, he noted, the underlying infrastructure is not software-based, not distributed, and not mirroring the microservices architecture.
“It becomes very challenging to actually bring those microservices into operation on a day-to-day basis,” he continued, “with a level of security and availability that customers are looking for.”
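For contrast, the “tool-chain wall” Chahal describes tends to look something like the following: a repeatable but static script that logs into one appliance and writes one configuration per service. The hostname and CLI commands below are invented for illustration; the point is that the list of services is frozen into the script, so every new or rescaled microservice means another edit and another run.

import subprocess

# Illustrative only: an imperative, appliance-centric workflow of the kind
# Chahal argues breaks down. The appliance hostname and its command syntax
# are hypothetical.
SERVICES = ["catalog", "cart", "checkout", "payments"]  # ...and 16 more
APPLIANCE = "adc-appliance-01.example.com"

for service in SERVICES:
    # One virtual server, one pool, one monitor per service, configured by
    # hand-crafted remote commands against a single, static box.
    commands = [
        f"create virtual-server vs-{service} port 443",
        f"create pool pool-{service} members @{service}-instances",
        f"attach monitor http-{service} to pool-{service}",
    ]
    for command in commands:
        subprocess.run(["ssh", APPLIANCE, command], check=True)

Each time the scheduler adds a twenty-first service, or scales the fourth one, a script like this has to be edited and re-run; the declarative approach sketched earlier pushes that churn down into the infrastructure instead.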
It’s a very compelling case, much like saying you can’t set the precise course of a sailboat for any given day. But it raises a corollary that’s due for a collision with the principal argument of CI/CD: that agility is achieved through standardized configuration and incremental iteration. The fact that both extremes are being argued so vehemently today is testament to how new deploying services at this scale really is; to the extent we proclaim we know what we’re doing from experience, we’re fooling ourselves.
Dynamic Duo
To make Chahal’s argument work in practice, Avi’s ADC needs to be sensitive to the environment around it — to detect the nature and dynamics of the traffic it’s handling, and then to provide other services with visibility into that data. Three months ago at Dynatrace Perform, NGINX made a very similar argument.
With the ADC positioned where it is within the microservices architecture, Avi’s Chahal stated, “we’re at such a privileged position in front of the application that, without the need for any agents inside the application, or any changes to the clients that are accessing the application, we can detect things like overall latency on the network, the latency of application response, down to what kind of browser you’re using, the page load time within the browser, the sequence in which you’re loading the objects within the Web page, and how developers need to optimize them for you to have an optimal user experience.”
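Because the ADC already sits in the request path, much of that visibility can be derived from data it necessarily handles. The rough Python sketch below is hypothetical, not Avi’s telemetry format: the records stand in for structured access-log entries from an in-path proxy, and the aggregation shows how per-service latency and per-browser experience fall out of them without any agent inside the application.

from collections import defaultdict
from statistics import mean

# Invented sample records standing in for an in-path proxy's structured
# access log; the field names are illustrative, not any vendor's schema.
records = [
    {"service": "checkout", "backend_ms": 112, "total_ms": 180, "user_agent": "Chrome"},
    {"service": "checkout", "backend_ms": 95, "total_ms": 140, "user_agent": "Safari"},
    {"service": "catalog", "backend_ms": 30, "total_ms": 65, "user_agent": "Chrome"},
]

by_service = defaultdict(list)
by_browser = defaultdict(list)

for record in records:
    # Split application response time from time spent on the network.
    by_service[record["service"]].append(
        (record["backend_ms"], record["total_ms"] - record["backend_ms"])
    )
    # End-to-end time as experienced by each browser family.
    by_browser[record["user_agent"]].append(record["total_ms"])

for service, samples in by_service.items():
    app_ms = mean(app for app, _ in samples)
    net_ms = mean(net for _, net in samples)
    print(f"{service}: application {app_ms:.0f} ms, network {net_ms:.0f} ms")

for browser, totals in by_browser.items():
    print(f"{browser}: {mean(totals):.0f} ms end-to-end")

In practice, browser-side details such as page-load time and object-load sequence come from real-user-monitoring beacons that a proxy in this position can inject, which is beyond the scope of this sketch.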

A cloud application delivery platform, such as Avi’s, automates many of the tasks still executed manually with either hardware or virtual ADCs, including software-defined networking, management, and adding capacity (Source: Avi Networks).
Chahal told us that Avi Networks does not mean to replace application performance managers like New Relic and Dynatrace, but instead to complement them, first by acting as a sensor on their behalf, and second by clearing the way, if you will, for their dedicated agents to do the work they’re designed to perform. He suggested that we think of APM (application performance management) and ADC as providing two dimensions of insight into performance, where ADC is more horizontal and pertinent to the system as a whole, and APM concentrates on each vertical application unto itself. But he did suggest that APMs should evolve to allow performance data to be drilled down to the microservice level.
“As I see the APM landscape evolving… if you fast-forward this about a year or two,” said Chahal, “I have no doubt that, with technologies like Avi and a comprehensive container services fabric, and what the APM vendors are doing, customers are going to be able to go to a single dashboard and assess and quantify the benefits that they’re getting from a microservices architecture, on all three primary vectors: How quickly am I bringing an application to market? How quickly am I able to make a change to an application? How quickly am I able to respond to changes in the overall capacity and usage of the application? It’ll get much easier to quantify the benefits on those three vectors as time progresses.”
New Relic is a sponsor of The New Stack.
Feature Image: inside Ponte Tower in Johannesburg, South Africa by Spach Los, licensed via Creative Commons 2.0.