Release 7 of NGINX Plus and the Implications for Microservices Architectures
Release 7 of NGINX Plus, the web services gateway, puts into place a component that may prove to be critical to the advancement of both microservices architectures and the Internet of Things.
Software developers are enamored with the possibilities opened up by the advent of microservices architectures. CIOs are sold on the ideal of the Internet of Things, which is part fantasy and part marketing: a possible explosion in the quantity of applications made feasible through smaller, more widely distributed, IP-capable sensors and other devices. Both of these ideals need a fatter, broader, more adept Web if they are ever to escape the embryonic phase.
A Microservice Fast Lane
The principal improvement to R7 is added support for HTTP/2, the long overdue and somewhat controversial version of the Web’s main application protocol, formally adopted by the IETF last February. Its original intent was to radically improve the methods with which web pages were assembled, but HTTP/2 has since become a conduit for the first real generation of non-experimental microservices.
In a blog post for F5 Networks last June, former Network Computing correspondent Lori MacVittie — now a technical evangelist with F5 — explained one of the key problems with implementing microservices on HTTP, and how HTTP/2 could either resolve these problems completely or compound them exponentially, depending on how application architects adopt the new scheme.
MacVittie described how web architects created the practice of domain sharding — carving out arbitrary, separate domains for distributed resources — as a way of speeding up requests for web pages composed of multiple components. By doling out those components to individual domains handled by separate servers, the overall time to serve those pages can be reduced. Some vendors tout domain sharding as a performance accelerator.
“Domain sharding works,” wrote Mobify engineer Peter McLachlan in 2012, “because web browsers recognize each unique Internet name … as being a different server, even if in reality all those domain names resolve to a single server.”
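The trick McLachlan describes is easy to picture in server configuration. In a minimal sketch, using hypothetical hostnames (`static1.example.com`, `static2.example.com`) that both resolve to the same machine, an NGINX server block might look like this:

```nginx
# Hypothetical sharded setup: two "shards" that are really one server.
# The browser treats each hostname as a distinct server and opens a
# separate connection pool to each, even though both names resolve
# to the same address and serve the same content.
server {
    listen 80;
    server_name static1.example.com static2.example.com;

    root /var/www/assets;   # both "shards" serve the same static files
}
```

Because browsers apply their per-host connection limit to each hostname independently, splitting page assets across the two names doubles the parallelism available under HTTP/1.x — the entire point of the practice.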
HTTP/2 was intended to render the sharding practice obsolete by adopting Google’s SPDY method of multiplexing any number of requests in parallel over a single connection, without blocking at the application layer. Retiring the practice became more and more pressing as web browsers raised their limits on simultaneous connections to a server, beyond the original cap of two.
Microservices, by their very nature, divide service categories into discrete domains. As MacVittie noted, this could be considered a form of domain sharding as well. On the surface, it would appear HTTP/2 would resolve this problem too, effectively neutralizing two birds with one stone (I’ll refrain from referring to this as “killing,” since that evokes images of inhumane cruelty).
But she warns that, left to their own devices, microservices architects could end up hard-wiring these separate domains into their DNS patterns anyway, presenting the multiple entities to clients as exchanges, each of which requires its own, separate connection.
“What we can do is insert a layer 7 load balancer between the client and the local microservice load balancers,” wrote MacVittie. “The connection on the client side maintains a single connection in the manner specified (and preferred) by HTTP/2 and requires only a single DNS lookup, one TCP session start up, and incurs the penalties from TCP slow start only once. On the service side, the layer 7 load balancer also maintains persistent connections to the local, domain load balancing services which also reduces the impact of session management on performance. Each of the local, domain load balancing services can be optimized to best distribute requests for each service. Each maintains its own algorithm and monitoring configurations which are unique to the service to ensure optimal performance.”
She didn’t say it explicitly, but by “Layer 7 load balancer,” she could have simply inserted “NGINX.”
“A Layer 7 load balancer terminates the network traffic and reads the message within,” reads NGINX’s corporate website. “It can make a load-balancing decision based on the content of the message (the URL or cookie, for example). It then makes a new TCP connection to the selected upstream server (or reuses an existing one, by means of HTTP keep-alives) and writes the request to the server.”
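In NGINX configuration, that content-based decision is typically expressed with `location` blocks and named upstreams. A sketch of the arrangement MacVittie describes, with hypothetical service names standing in for the per-service load balancers, might look like:

```nginx
# Hypothetical per-service pools; in MacVittie's design, each entry
# would itself be a local load balancer for one microservice domain.
upstream users_service {
    server users-lb.internal:8080;
    keepalive 32;               # maintain persistent upstream connections
}
upstream orders_service {
    server orders-lb.internal:8080;
    keepalive 32;
}

server {
    listen 80;

    # Route on the URL path: a Layer 7 decision, made only after
    # terminating the client connection and reading the request.
    location /users/ {
        proxy_pass http://users_service;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for keepalive upstreams
    }
    location /orders/ {
        proxy_pass http://orders_service;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

The client holds one connection to this front-end balancer, paying the DNS-lookup and TCP-slow-start penalties once, while each upstream pool keeps its own persistent connections and can be tuned with its own balancing algorithm.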
The First True Test of HTTP/2
The newest version of HTTP replaces the text-based protocol, as originally envisioned by Tim Berners-Lee, with a binary framing system. Earlier drafts would have required the use of TLS/SSL encryption, but the final specification makes it optional. In any event, the Internet Engineering Task Force did end up formally adopting the basis of Google’s SPDY proposal for parallel request handling.
The fact that not everyone was pleased with that decision was made clear in the third paragraph of IETF Working Group Chair Mark Nottingham’s pronouncement last February that “HTTP/2 is Done.”
“While a few have painted Google as forcing the protocol upon us,” wrote Nottingham, “anyone who actually interacted with Mike [Belshe] and Roberto [Peon, two Google engineers] in the group knows that they came with the best of intent, patiently explaining the reasoning behind their design, taking in criticism, and working with everyone to evolve the protocol.”
The remaining objections among web developers, to what they perceive as the wholesale “swallowing” of one vendor’s technology, were best summed up last January by FreeBSD contributor Poul-Henning Kamp, lead developer of the HTTP accelerator Varnish.
“The IETF, obviously fearing irrelevance, hastily ‘discovered’ that the HTTP/1.1 protocol needed an update, and tasked a working group with preparing it on an unrealistically short schedule,” wrote Kamp.
“This ruled out any basis for the new HTTP/2.0 other than the SPDY protocol. With only the most hideous of SPDY’s warts removed, and all other attempts at improvement rejected as ‘not in scope,’ ‘too late,’ or ‘no consensus,’ the IETF can now claim relevance and victory by conceding practically every principle ever held dear in return for the privilege of rubber-stamping Google’s initiative.”
Kamp’s fears have since been reflected in the echo chamber that is Hacker News. “We’re far into ‘worse is better’ territory now,” wrote one contributor whose handle is cromwellian.
“Technical masterpieces are the enemy of the good. It’s unlikely HTTP is going to be replaced with a radical redesign any more than TCP/IP is going to be replaced.”
Hacker News members discussed whether the general reticence over Google’s involvement in HTTP/2 would lead to an IPv6-like situation, where overall adoption is slow. One member, sounding a more hopeful tone, said only browser vendors and content providers need to adopt HTTP/2 in order to make it widespread, unlike IPv6 which requires intermediate router support as well.
All this political bickering impacts the evolution of microservices in the following way: As long as there remains skepticism about the overall benefits of HTTP/2, HTTP/1.1 will continue to exist as a feature of any widespread, perhaps federated, microservices initiative.
This puts NGINX in the position of having to straddle both protocols simultaneously, perhaps for an extended period. Last August, the commercial entity behind NGINX began addressing fears that the political dispute could force microservices engineers to rethink how they construct their applications.
“With NGINX, HTTP/2 can be supported with very little change to application architecture,” wrote NGINX’s Faisal Memon. “NGINX acts as an ‘HTTP/2 gateway’ to ease the transition to the new protocol. On the front end, NGINX talks HTTP/2 to client web browsers that support it, and on the back end it talks HTTP/1.x (or FastCGI, uwsgi, SCGI) just as before. In between, NGINX translates between HTTP/2 and HTTP/1.x (or FastCGI, etc). This means that servers and applications proxied by NGINX will be unaffected by the move to HTTP/2, and don’t really even need to know that their clients are using HTTP/2.”
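Enabling that gateway behavior amounts to a one-word change on the `listen` directive; HTTP/2 is offered on the TLS side toward clients, while the proxied connection stays HTTP/1.1. A minimal sketch, assuming placeholder certificate paths and a hypothetical backend address:

```nginx
server {
    # "http2" on the listen directive turns on HTTP/2 toward clients;
    # NGINX negotiates it during the TLS handshake.
    listen 443 ssl http2;
    server_name example.com;          # hypothetical hostname

    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        # The back end still speaks plain HTTP/1.1, unchanged.
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
    }
}
```

The application behind `proxy_pass` sees ordinary HTTP/1.1 requests, which is precisely the translation Memon describes.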
The one big challenge of building application stacks on infrastructure with a global footprint is coexisting with certain other players’ big feet.