Splitting a site’s content across a collection of arbitrary domains to improve performance, a practice known as domain sharding, is still being taught as legitimate for Web servers. It remains the standard workaround for browsers’ limit of six simultaneous connections per host under HTTP 1.1, and the even more draconian limit of two connections in older browsers such as Internet Explorer 7.
HTTP 1.1 was not designed to support the Web we actually use today, not to mention its inadequacy as the transport protocol for a world of microservices. Last September, NGINX introduced Release 7 of its commercial Web service gateway NGINX Plus. With it came the first support for HTTP/2, the modern transport protocol ratified by the IETF last February, which multiplexes many requests over a single connection between servers and browsers that had previously been limited to six or eight simultaneous connections per host.
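In configuration terms, turning on HTTP/2 in NGINX is a one-parameter change to a TLS-enabled `listen` directive. A minimal sketch, assuming placeholder hostnames and certificate paths:

```nginx
server {
    # Browsers only negotiate HTTP/2 over TLS, so the http2
    # parameter is added to an ssl-enabled listen directive.
    listen 443 ssl http2;
    server_name www.example.com;          # placeholder hostname

    ssl_certificate     /etc/nginx/certs/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    root /usr/share/nginx/html;
}
```

With multiplexing in place, every asset on a page can travel over that single connection, which is what makes the sharded domains of the HTTP 1.1 era unnecessary.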
On Tuesday, NGINX announced the availability of Release 8. According to NGINX Head of Products Owen Garrett, the differences between NGINX Plus Release 8 and its four-month-old predecessor are substantial: Release 8 is actually better suited for running HTTP/2 in production than Release 7 was.
The Much Readier for Prime Time Player
“In September of last year, we were careful to explain to our users that they shouldn’t use it as a production solution,” Garrett told us. “We released it as a preview of the technology to allow users to begin the task of migrating from SPDY to HTTP/2.” SPDY was the protocol first proposed by Google as the next generation of HTTP. Some of SPDY’s components were indeed adopted in the final HTTP/2 draft, and Google began rolling out HTTP/2 support for version 40 of its Chrome browser in January 2015. But even then, SPDY was slow to make its way into the sunset.
“We shared those cautions for a couple of reasons,” continued Garrett. “One was the potential immaturity of our own code — the fact that there was a large number of significant changes to the core of the product necessary to support HTTP/2. But more importantly, at the time, the large body of Web users hadn’t moved on to a Web browser or platform that supported HTTP/2. SPDY was still dominant.”
Thanks to extensive consultation with users, NGINX is now much more confident that the HTTP/2 implementation in Release 8 of NGINX Plus is ready for production deployment. Garrett pointed to content delivery network CloudFlare as one example of a major Internet service now using R8 code in production.
“So in terms of production-ready, absolutely, it is there now,” he said.
Even though it’s been a full year since Web browser makers began weaning their users away from Google’s SPDY protocol, Garrett said his company’s own data reveals that only now has HTTP/2 usage drawn even with SPDY’s. Now, he believes, Web services can safely begin dismantling the practice of domain sharding, and stop creating false domains in order to bypass a protocol limitation from a bygone age.
Microservice architectures can now change the Web from a conveyor of hypertext to a facilitator of digital services. But simply opening the floodgates to multiplexing and more connections per host won’t be enough to bring this about.
Authenticating Services Again, Not People
So now that NGINX finds itself with what it believes to be a mature, production-ready iteration of HTTP/2 support, it also must adopt an authentication system that won’t become a bottleneck for microservices. For this reason, the firm is introducing what it calls an “initial release” of support for OAuth2, which adds new workflows for authenticating large groups of Web services from the same source.
“There are very few situations where you wouldn’t authenticate an API transaction,” noted Garrett. “But the challenge that developers have is that authentication is difficult to do right. OAuth is emerging as a very good standard for authentication. It separates the user information from the application, so it allows you to authenticate against one entity.”
The original OAuth allows a major login provider such as Facebook or Google to vouch for the authenticity of a user. But again, that’s a situation involving users.
“Imagine a microservice environment where you have tens or hundreds of individual services, and they all need to know who the user is. It would be infeasible for each of those instances to perform authentication individually,” the NGINX chief explained. “It puts a big burden on the developer, it puts a big burden on the server, and it makes the security context broader and more complicated.”
NGINX’s plan, he said, is for Release 8 to resolve this issue the way Release 7 resolved the HTTP/2 issue. The new server will retain security credentials for microservices, and then forward only authenticated traffic to them. So rather than leveraging Google or Facebook, or something with a profile more like a social network, microservices can trust a source that’s closer to them.
“Typically, your application is deployed in a trusted environment using standard techniques like SSL and PKI. So a microservice instance can trust NGINX as a source of traffic,” he remarked, “and it can pull the authentication data out of the request we send to that service. What that means in practice is that it’s much easier for a developer to identify who is the party that is accessing his API, so that he can then put his own access controller, logging, or whatever [security service] on that, so he doesn’t have to worry about the complexities of whether the authentication was done against Facebook, Google, an end user, or a remote client.”
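Garrett did not walk through the configuration itself, but the gateway pattern he describes can be sketched with NGINX’s stock `auth_request` module, which validates every request via an internal subrequest before proxying it onward. The upstream names and the token-validation endpoint below are hypothetical:

```nginx
location /api/ {
    # Every request is first validated by the auth subrequest;
    # only authenticated traffic reaches the microservice.
    auth_request /_oauth2_validate;

    # Pass the authenticated identity downstream so the service
    # doesn't need to know which provider vouched for the user.
    auth_request_set $auth_user $upstream_http_x_auth_user;
    proxy_set_header X-Auth-User $auth_user;

    proxy_pass http://api_backend;   # hypothetical upstream
}

location = /_oauth2_validate {
    # Internal-only endpoint that forwards the bearer token to a
    # token-validation service (hypothetical address) and returns
    # its status code to auth_request.
    internal;
    proxy_pass http://127.0.0.1:9000/validate;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header Authorization $http_authorization;
}
```

The microservice behind `api_backend` then only ever sees requests that NGINX has already vetted, along with an `X-Auth-User` header telling it who the caller is.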
Persistence Pays Off
In addition to more mature HTTP/2 support and the start of something bigger for OAuth2, NGINX Plus Release 8 adds persistent configuration, a feature Garrett admitted had not been on his company’s roadmap for very long. In earlier versions of this API, dating back to 2014, users could make quick, temporary changes to NGINX’s configuration, he said — for instance, to mark a server as down for maintenance or back in service. But whenever NGINX was reloaded, those temporary configuration changes were dropped.
But in a microservices environment, applications change their entire contexts in very short periods of time. Customers began requesting not only that their configuration changes be made permanent, but that NGINX somehow integrate with their applications’ configuration databases — so changes made there can be reflected in the server immediately.
“Therefore, in this release, we have taken the API and made it persistent,” explained Garrett. “We will publish solutions that show how to use the API with two common service databases, and this will allow users to deploy services in a microservices environment, and for those services to be registered against NGINX Plus in a very, very easy, lightweight, reliable fashion.”
Featured image of the replica of Charles Babbage’s difference engine, in the London Science Museum, by Carlsten Ullrich, licensed under Creative Commons 2.5