Kubernetes / Programming Languages

F5 Networks Fuses Node.js with Load Balancing for Workflow Orchestration

17 May 2016 2:00am

Traditionally, this site has been about orchestrating applications at scale. We’ve learned that thinking of the applications as the units that need to scale up is the wrong way to go.  Instead, if you break down applications into services (“microservices”) and then scale the services, you have a more effective means of scaling a complex application to its exact workload.

As network engineers integrate more with the microservices-oriented software development community, the skills they bring with them center around service definition. Tuesday morning, at a conference in Vienna, Austria, F5 Networks announced a significant upgrade to its application delivery controller (ADC) that actually addresses service definition and the scaling of services to get jobs done.

In a clear effort to win support from the DevOps community, F5 is releasing a workflow automation API platform called iWorkflow 2.0 as part of its latest upgrade to its Big-IP SDN infrastructure platform.  That API is designed not just around the open source scripting language TCL (pronounced “tickle,” and long a staple of network programmability) but around Node.js, the now-indispensable tool of the DevOps community.

The Node Explosion

“We had previously supported application logic on our platform using TCL, that allows you to rewrite URIs, do different types of load balancing to manipulate content, [manipulate] headers — you name it, you could pretty much do it.  It was code,” said Lori MacVittie, F5’s principal evangelist, in an interview with The New Stack.

“But over the years, we’ve seen JavaScript and Node just explode … So one of the things we did with this latest software release is add the ability for those kinds of functions and logic that you might want to put in the proxy, to use Node.js. Now you can write that proxy logic, put it on Big-IP, and let it manipulate traffic, direct it, secure it, with Node.js in the proxy. So you don’t have to put it into every single application, or every single instance.  You can move it upstream and contain it, and then deal with it that way.”
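The kind of per-request logic MacVittie describes — rewriting URIs and manipulating headers once, upstream, instead of in every application instance — can be sketched as a plain Node.js function. The function name, header names, and request shape below are assumptions for illustration; a real deployment would register this logic with Big-IP's Node.js runtime rather than call it directly:

```javascript
// Hypothetical sketch of proxy-side request logic. Not F5's actual API:
// names and shapes here are invented for illustration.
function handleRequest(req) {
  const out = { ...req, headers: { ...req.headers } };

  // Rewrite a legacy URI prefix at the proxy, so no application
  // instance has to carry this rewrite logic itself.
  if (out.uri.startsWith('/v1/')) {
    out.uri = out.uri.replace('/v1/', '/v2/');
  }

  // Strip an internal header before traffic leaves the proxy,
  // and tag the request for downstream services.
  delete out.headers['x-internal-debug'];
  out.headers['x-forwarded-by'] = 'edge-proxy';

  return out;
}

// Example: a request to the old API surface is rewritten in place.
const result = handleRequest({
  uri: '/v1/orders/42',
  headers: { 'x-internal-debug': 'on', host: 'shop.example.com' },
});
// result.uri is now '/v2/orders/42', the debug header is gone
```

Because the logic lives in one place upstream, changing it means redeploying the proxy code, not every application behind it.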

Big-IP is an established platform in the service provider community. MacVittie describes it as a layer in-between end users (the service provider’s customers) and the applications being staged for them.  The platform provides load balancing and caching, and from time to time manipulates the headers of packets, adjusts URIs, and makes real-time configuration adjustments (say, to the httpd.conf file) to better route traffic to more suitable, or more available, servers.

“You put that logic in a Big-IP proxy,” she said, “and it enables those architectures to scale a lot better, because you can do it upstream where it’s got better visibility into the traffic and the requests that are actually coming in.  Big-IP is proxying for the users, and providing some of that application architecture logic that needs to be done in the network to do scale, to do security, to do performance enhancing things on the fly.”

Less About the App, More About the Service

When the topic of SDN (software-defined networking) first entered the discussion circles of software developers, the promise its earliest practitioners extended to them was based around “application-defined networking” — the notion that the app itself could specify the best network for its own purposes for any given period of time.  It seemed like a sensible approach at the time.

But the problem with scaling an application that includes its own logic for scaling is that the logic itself gets replicated, producing unnecessary ballast that weighs down performance and increases latency.  Microservices architecture would completely resolve that problem, but that assumes you have no legacy applications that must continue being staged as the transition proceeds.

F5’s approach is to embed the logic in the proxy, freeing the application to contain code that is intrinsic to the app itself.

“We understand HTTP, HTTP/2, WebSockets, HTML5 — all those things that developers are using right now to deploy their apps,” said MacVittie, “and we understand them and can speak that language on Big-IP, and provide an environment where your scale can be based on simple things like load balancing algorithms — round-robin, or least-weighted connections, or fastest response.
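Two of the algorithms MacVittie names can be sketched in a few lines of JavaScript. The pool structure and field names below are assumptions for illustration, not Big-IP's object model:

```javascript
// An illustrative backend pool; the shape is an assumption.
const pool = [
  { host: '10.0.0.1', activeConnections: 3 },
  { host: '10.0.0.2', activeConnections: 1 },
  { host: '10.0.0.3', activeConnections: 7 },
];

// Round-robin: hand out pool members in order, wrapping around.
let rrIndex = 0;
function roundRobin(members) {
  const member = members[rrIndex % members.length];
  rrIndex += 1;
  return member;
}

// Least connections: pick the member with the fewest active connections.
function leastConnections(members) {
  return members.reduce((best, m) =>
    m.activeConnections < best.activeConnections ? m : best);
}

const first = roundRobin(pool);       // 10.0.0.1
const second = roundRobin(pool);      // 10.0.0.2
const least = leastConnections(pool); // 10.0.0.2 (1 active connection)
```

Round-robin is oblivious to server state; least-connections uses a live metric, which is part of why the article stresses that the proxy sits where it has "better visibility into the traffic."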

“But it also lets you build an architecture, which is really important when you start scaling things like microservices and APIs, where you’re not going to put your entire application on one machine.  They’re going to be spread out across multiple domains.  You want to be able to provide a unified experience on the frontend for the user, so they’re just going to one place; but on the backend, you may be spread out across a whole bunch of different environments.  Maybe one is virtualized and the other is containers; one is an old application that hasn’t been updated yet.  You want to be able to split that out, and load-balance across them, to direct traffic that way.”

With Node.js, MacVittie continued, a developer or operator can address packets of traffic based on the identity or the location of the device currently hosting them, as well as the device that will host them a few hops down.  When your goal is elasticity, you want to be able to stretch the map of the software-defined network your application is using, not just magnify it. The logic needs to adjust when the workload scales, as opposed to the logic scaling in proportion with the workload.

In fairness, this may not be a novel approach: last year, NGINX Inc. began embedding a JavaScript-based VM in its NGINX and NGINX Plus proxies. And Avi Networks has made the case before that a monolithic approach to scaling does not apply to microservices; that replicating applications as though they were indivisible appliances does not yield better, or even good, performance.

Yet Tuesday’s development is an indication that the network services market and the DevOps services market are coming together: specifically, they’re targeting the same customer and competing against one another. Which, for all intents and purposes, makes them the same market now.

Sharded Articles

I asked MacVittie to describe to us a few potential real-world, enterprise scenarios for Big-IP. One example she offered was database sharding at the network level. Queries for specific segments of databases can be directed to specific servers at discrete addresses, based on any criteria that can be reflected in JavaScript code. Articles published by a certain technology blog, for instance, may be hosted in multiple databases across various zones (assuming that certain blog were to become way popular, way soon), but requests to pull those articles can be distributed categorically.

“Using Node, you would inspect that URI just like you would do for page routing, when you’re dealing with an API,” she explained. Logic would direct requests to particular pools or to virtual IP (VIP) addresses. In iWorkflow 2.0’s vernacular, these things are objects that are specified in classes and represented in JavaScript variables.
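The URI-inspection routing she describes might look like the following sketch: map a request path to a backend pool, the way sharded article databases could be split by category. The routing table and pool names are invented for illustration, and iWorkflow 2.0's actual object model is not reproduced here:

```javascript
// Hypothetical routing table: URI prefix -> backend pool (VIP).
const routes = [
  { prefix: '/articles/kubernetes/', pool: 'vip-shard-k8s' },
  { prefix: '/articles/devops/',     pool: 'vip-shard-devops' },
];
const defaultPool = 'vip-shard-general';

// Inspect the URI, as you would for page routing, and select a pool.
function selectPool(uri) {
  const match = routes.find((r) => uri.startsWith(r.prefix));
  return match ? match.pool : defaultPool;
}

selectPool('/articles/kubernetes/scaling-services'); // 'vip-shard-k8s'
selectPool('/articles/opinion/some-post');           // 'vip-shard-general'
```

Because the routing criteria are ordinary JavaScript, they can be as simple as a prefix match or as involved as inspecting headers, cookies, or device identity.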

“It exposes things in a very object-oriented manner,” F5’s MacVittie went on, “so you can manipulate things in a familiar environment. Node is pretty straightforward; it’s pretty much Node.  Working with it, I’ve not found any difference between how I use it with a standard Node server than how I’d use it with Big-IP.”

F5 will also be making available an Eclipse plug-in for writing Node.js code for Big-IP and iWorkflow 2.0, when the new platform is generally released sometime during the second quarter.

Feature image: Professional plate spinner and juggler Henrik Bothe, from Wikimedia Commons, is licensed under CC BY-SA 4.0.


