F5 Networks: Containers Need Both Networking and Orchestration

Apr 28th, 2017 12:51pm

We try to portray it as a blending of responsibilities and job functions, especially when we’re trying to sell a product to two customer bases simultaneously: DevOps, the merger of development and operations. Yet too many technologies have been intentionally developed to bolster the layers of abstraction between applications and networks. Originally, those layers were supposed to free the application developer, way up on Layer 7 of the OSI stack, from having to mess about with all the dirty, infrastructural affairs of Layers 2 and 3.

With containerization hoisting both portable deployment and distributed systems into the same spotlight, the result may be a clash of mindsets. On one side of the stage is the argument that serverless development, where the developer never sees or cares about the underlying infrastructure, is truly cloud-native. Making room for itself on the other side of the stage is the equally valid argument that you can’t secure a network you don’t understand. And if a distributed systems application is, as they say, a network, that’s a problem.

“In a cluster, you have to have something that provides the endpoint for clients to connect to,” remarked Lori MacVittie, F5 Networks’ Technical Evangelist, speaking with The New Stack. “A virtual service, a virtual IP, a virtual server, fronting that — scaled by multiple versions of that service or application within the cluster. Something has to provide that on the front end, to say, ‘Hey, I’m your application, and I’ll take care of scaling on the back end.’ No matter what that solution is, it has to have a way to be automatically updated with the right information, because manual processes aren’t going to work here. It has to hook into the environment and work as part of the system.”

Checkpoint Charlie

MacVittie’s discussion with us came by way of F5’s introduction Friday of a component called Container Connector, an addition to a system it started building last November for integrating microservices with existing networks. That system began with Application Connector, a component that links cloud-based applications to F5’s BIG-IP application delivery controller. The purpose there is to extend the security and firewall policies that govern access to an on-premises application to one deployed on a cloud platform, including the public cloud.

Container Connector, announced at the same time but generally released Friday, extends that same premise to microservices. It creates a proxy checkpoint that represents the entire microservices conglomerate as a single entity to any service or other application attempting to communicate with it. This way, traffic may be monitored and governed in real time, and scaling can respond not just to how many requests are coming in but to what those requests are. Rather than attempt to usurp Kubernetes or Mesos, the BIG-IP system works with orchestrators and schedulers.

“Let’s say you have an API, and it’s got one server handling it. Once you add a second instance of a container,” explained MacVittie, “something has to provide the endpoint to load-balance across [the service]. Without that proxy in the middle, there’s really not much load balancing.”
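To make the idea concrete, here is a minimal sketch in Python of that proxy in the middle: one stable endpoint that clients connect to, with requests rotated round-robin across whatever instances currently sit behind it. The backend addresses are hypothetical placeholders, and a production proxy such as BIG-IP, NGINX, or HAProxy does vastly more than this toy (health checks, connection reuse, TLS, and so on).

```python
# A minimal sketch of the proxy-in-the-middle idea: one stable
# endpoint, with requests rotated round-robin across backend
# instances. The backend addresses are hypothetical placeholders.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = itertools.cycle([
    "http://10.0.0.11:8080",  # instance 1 (hypothetical)
    "http://10.0.0.12:8080",  # instance 2 (hypothetical)
])

class RoundRobinProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)  # next instance in the rotation
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
            status = resp.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Clients see only this one endpoint; scaling happens behind it.
    HTTPServer(("0.0.0.0", 9000), RoundRobinProxy).serve_forever()
```

The point MacVittie makes lives in that last line: clients talk to port 9000 and never learn how many instances are answering behind it.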

Believe it or not, MacVittie said to my astonishment, many phone and tablet apps today delegate the logic of choosing which IP address to send requests to, to the client-side app itself. Your phone may be making decisions about which IP to talk to, based on its own assessment of traffic. If you’ve ever waited in line in a post office along with dozens of people and six active clerks, you know that the length of each line is never a good indicator of the time you’ll spend in it. Imagine a phone app making a similar decision, given a half-dozen or so IP addresses, about which one is most responsive.
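What that looks like in practice, reduced to a sketch: the app probes a hard-coded list of candidate IPs (hypothetical addresses here) and picks whichever answered fastest. A quick probe is the networking equivalent of judging a post-office line by its length.

```python
# A sketch of the client-side anti-pattern MacVittie criticizes: the
# app itself probes a hard-coded list of hypothetical service IPs and
# picks whichever answered fastest, with no real view of load.
import time
import urllib.request

CANDIDATE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical

def fastest_endpoint(path="/healthz", timeout=0.5):
    timings = {}
    for ip in CANDIDATE_IPS:
        start = time.monotonic()
        try:
            urllib.request.urlopen(f"http://{ip}:8080{path}", timeout=timeout)
            timings[ip] = time.monotonic() - start
        except OSError:
            continue  # skip unreachable instances
    # Shortest probe time wins: like picking the shortest post-office
    # line with no idea how slow each clerk actually is.
    return min(timings, key=timings.get) if timings else None
```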

Some developers would say, all things being random, that this scheme resolves the whole load balancing issue. But what it really does is shove the issue to the client side, making the app entirely responsible for the traffic patterns that emerge. What’s more, MacVittie told me, app developers try to resolve this by creating a kind of external registry — literally, another service that informs them when one queue is backed up or another is clearing up.

In such an architecture where there’s an observer service reporting on the state of a random distribution, to quote the great Joe Bob Briggs’ remark on what makes a great horror film, “Anybody can die at any moment.”

“It’s very inefficient to have your client decide which one of fifteen different instances it should choose,” MacVittie said, “when it has really no understanding of how [the service is] performing, what kind of load is on it, or where it might be located.”

This is perhaps the ultimate example of the wrong way to mash up dev with ops. It also illustrates the value of that layer of abstraction — the notion that how an application works should be one or more steps removed from how an application is made to work.

Balancing the Balancers

MacVittie explains that Container Connector monitors an active server cluster for signs that it’s being scaled and for when new instances come online. The connector then informs F5’s BIG-IP controller, so that it may respond by adding new instances to (or removing old, disused ones from) the pool of addresses to which security policies apply. An application services proxy — another component in the scheme — manages network routing between and to instances. That proxy makes it feasible, for example, for instantiated containers that are not actually being used to be removed from the pool more readily.
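Reduced to its essence, that hook is a reconciliation loop: diff the orchestrator’s live instance list against the controller’s address pool, register what is new, remove what is gone. In the sketch below, discover_instances() and the print statements are hypothetical stand-ins for a real orchestrator hook and a real controller API; this is not F5’s actual interface.

```python
# A conceptual sketch of pool reconciliation: compare what the
# orchestrator says is running against the controller's address pool.
# discover_instances() and the print calls are hypothetical stand-ins.
import time

def reconcile(pool_members: set, discover_instances) -> set:
    live = set(discover_instances())      # ask the scheduler what's running
    for addr in live - pool_members:      # newly scaled-up instances
        print(f"registering {addr} in the controller's pool")
    for addr in pool_members - live:      # scaled-down or dead instances
        print(f"removing {addr} from the controller's pool")
    return live

if __name__ == "__main__":
    members: set = set()
    while True:
        # Fixed list here; a real connector would watch orchestrator events.
        members = reconcile(members, lambda: ["10.0.0.11:8080", "10.0.0.12:8080"])
        time.sleep(5)
```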

BIG-IP can work with existing load balancers, said MacVittie, including NGINX, HAProxy, and Kubernetes’ native kube-proxy. “We’re sitting on the ingress, and we see those solutions inside the containers, doing east/west load balancing,” she told us. “Because it’s all protocol-based, we do work with them very easily. I do see a lot of architectures where F5 is the ingress, and there’s all these other load balancers in there. Of course, we prefer that you would use F5, but we don’t require that you do. Because it is all standards-based, we’re going to be able to load balance to whatever you may be using inside the environment.”

More broadly put, the BIG-IP system is doing the ops part of the job. And as MacVittie unhesitatingly admitted, it’s the system operator who’s better equipped to handle it. Pre-defined templates help resolve how the system should respond to orchestrator- or scheduler-triggered events. One such template, for example, may set up HTTP/1.1 connections for internal-only services while building HTTP/2 connections for public-facing gateways. Others establish basic web application firewall (WAF) policies that raise flags for cross-site scripting (XSS) events or SQL injection attempts. Another can scrub outgoing data for the appearance of possible credit card numbers or Social Security numbers.
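That last template describes a classic data-loss-prevention pattern. As a rough illustration of the detection technique, and not F5’s actual template logic, outbound scrubbing can flag digit runs that pass the Luhn checksum that real card numbers satisfy:

```python
# A sketch of outbound scrubbing: flag response bodies containing
# digit runs that pass the Luhn check, the checksum real payment card
# numbers satisfy. Illustrative only, not F5's template logic.
import re

def luhn_ok(digits: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_card_number(body: str) -> bool:
    # Candidate runs: 13 to 19 digits, optionally space- or hyphen-separated.
    for match in re.finditer(r"\b(?:\d[ -]?){13,19}\b", body):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            return True      # a WAF would mask or block this response
    return False

print(looks_like_card_number("order ref 4111 1111 1111 1111"))  # True: a standard test number
```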

“There’s no reason a developer couldn’t use [these templates],” she said, “but it seems more suited to an ops person, from my perspective.”

The Aura of the Turnkey

F5’s value proposition is based on a very hands-on approach to managing the virtual network and maintaining its connections with the physical one. VMware has been consistently making the case that maintaining the network security foundation it established between the hypervisor and the processor is critical for building new applications going forward, even when not all those applications will need the hypervisor. It’s why many enterprises are sticking with the virtualization layer they have, rather than fly to others they know not of.

Docker Inc. has evolved its security argument recently to say that container security may be achieved almost entirely in the delivery process — or, to use the phrase CEO Ben Golub borrowed from an earlier era during last week’s DockerCon, the “supply chain.”

“The secure base is the start, but it’s not the end,” said Golub. “We need to somehow replace all of that chewing gum, spit, baling wire, cursing, etc., with something that looks a bit more like a supply chain. Last year, we introduced this notion of CaaS — Container-as-a-Service. This is basically saying, we want to have tools and processes in place to have the applications that have been built as containers be deployed as containers, but have a way to connect developers and IT operations — which includes technologies like Secure Image Registry, and a control plane that lets you monitor, manage, and deploy containers in a heterogeneous environment.”

The specifications for how connections should be made between services, Golub argued, should reside on levels that are completely separate from Docker’s CaaS model — at what he describes as the IaaS and PaaS levels. “IaaS is great, PaaS is great,” he noted, “but make sure you don’t end up getting your -aaS in a sling as you’re going through this process.”

F5’s counter-argument seems to be that the new distributed systems architecture demands a new approach to networking security, one at least as effective as the old approach, if not more so. And that the obscure, strange indeterminacy of microservices architecture will not serve as its security blanket for very long.

IBM would be among the first companies to back up F5’s argument.

“I would claim that regular, middleware-based enterprise applications, in order to enable them for microservices, you need to change them,” stated Andre Tost, IBM Distinguished Software Engineer, in a recent interview with us. “You need to refactor them. There’s work to be done. We find lots and lots of companies that don’t have the time or energy or funding or motivation or incentive to do any of that. It’s not saying, ‘I have one consistent enterprise architecture, and want to have all my applications fall into that category.’ I just don’t think it’s realistic.”

F5’s Lori MacVittie is not unaware of the business factors that make, for example, Docker Inc.’s new push for one-button monolith containerization so very attractive to CIOs.

“As a very young developer,” she told The New Stack, “I was always wholly frustrated by the refusal of the large enterprises I worked for to re-architect and modernize. ‘Why do we have all this stuff? This is ridiculous!’ Over time, I learned there are business reasons and financial reasons, and the costs to actually re-architect can be so overwhelming that it’s just not going to happen. The return on investment never pays off.” She cited a case study in which a bank in Australia took the plunge, with a full re-architecture of its old mainframe-based business logic, at a cost that eventually topped three-quarters of a billion dollars.

“The notion that you can take an application, wrap it in a Docker container, and then get a performance boost as well as more frictionless scale by simply launching multiple containers, putting a proxy in front, and letting it scale up and down as needed, is probably very attractive to a lot of organizations,” she continued, noting surveys that indicate enterprises are attracted to the instant portability this method provides, letting them “lift and ship” their applications into the public cloud.

And then she admitted that some of this turnkey functionality may yet be incorporated into a system like F5’s Big-IP. It will be the merger of mindsets after all — just not the two we thought.

Title image of a vintage British telephone exchange switchboard circa 1925-1960 from the British Science Museum, licensed under Creative Commons 4.0.
