For 25 years or so, some form of Microsoft Windows has been the principal operating system in most of the world’s data centers. That holds true even today, as we speak of bold, broad, mathematical ideals such as containerization and continuous integration, as though Windows never happened.
Yet amid the many layers of this new stack we like to glorify resides an unavoidable truth: Although the world’s server infrastructure today floats atop a simmering sauce of Linux, tucked inside that infrastructure like the filling in a ravioli is a pocket of virtualization. Inside that pocket is a self-contained world of Windows, launching (and crashing) monolithic applications as though Docker were one-half a pair of Levi’s slacks.
So if adopting Windows into your distributed workflow, or la pile nouvelle, feels to you like swallowing a foreign object, imagine how a veteran Windows developer (for example, this poor guy) must feel to be told that the software architecture of the near future resembles microservices, involves containers, and may be managed with an orchestrator.
That is exactly what’s happening.
“I have a monolithic application today. It doesn’t scale to bigger machines. I want to move that to a more cloud-native, adaptable, scalable application model. But how do I do that?” rhetorically asked Taylor Brown, Microsoft’s lead program manager for Windows Containers.
“I’ve got three million lines of code, and I’ve got a business running on them. What containers are enabling dev teams to do is make changes to that application without as much fear of breaking it, so they can start to peel out the layers. We’re calling this the ‘lift/shift/modernize paradigm.’”
It’s a three-stage agenda, the first stage of which is moving existing Windows monoliths onto a platform capable of also supporting something else. In the second stage, the monolith is disassembled in a manner more in tune with continuous integration and deployment.
In the third stage, systems are weaned from their reliance upon the old, UI-dependent Windows Server. It’s a delicate operation: a plan to break applications from the very chain of dependencies upon which Microsoft’s entire marketing scheme was once predicated. And it’s Microsoft, most ironically of all, that’s behind its execution.
“From a container orchestration perspective, one of the things I think a lot about is decoupling the problem. And that makes these problems more tractable,” stated Brendan Burns, who presently leads the Azure Container Service development team at Microsoft.
“If you can decouple the operations experience from the details of how a particular piece is implemented — even what operating system a particular piece happens to be implemented on,” Burns continued, “that makes people’s lives dramatically easier. I would hope that we could reach a place where the person who’s rolling out a complete application that’s developed by a bunch of different teams, may not even know the operating system that that particular image was built with.”
The first problem containerization solves for Windows is the same as the first one it solved for Linux: scalable, manageable deployment.
Microsoft’s Taylor Brown described for us the typical deployment scenario for a .NET application in the days before Microsoft’s Windows Server Core project was officially launched: “In my history, I worked on a deployment team where we had to deploy a three-tier application. Landing that on a machine, I needed to know if that machine was properly configured. Did it have the right credentials on it? Did it have the right network infrastructure? How do I talk to a database? What’s the database’s name? We’d have huge XML files, where in this environment, the SQL Server was named that, and the web front-end was this.
“And more often than not, we’d have a deployment failure. Someone would mess up a name in one environment or another, and that would result in rollbacks — and rollbacks were even more painful than deployments.”
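The per-environment XML files Brown describes are precisely what container tooling externalizes. As a hedged sketch of the idea — the service names, image name, and variables below are invented for illustration, not taken from Brown's project — the same environment-specific settings become values supplied at deployment time rather than configuration baked into the machine:

```yaml
# docker-compose.yml — hypothetical tier of a three-tier app;
# service name, image, and variable names are illustrative only
version: "3"
services:
  web:
    image: contoso/web-frontend:1.0        # invented image name
    environment:
      SQL_SERVER_NAME: sql-prod-01         # was buried in a per-environment XML file
      DB_CONNECTION: "Server=sql-prod-01;Database=Orders"
    ports:
      - "80:80"
```

Under this model, a rollback becomes redeploying the previous image with the previous variables, rather than unwinding a misconfigured machine by hand.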
So Brown leads the team charged with simplifying Windows application deployment. There’s just one catch: In its modernized form, a new Windows application will no longer be reliant upon the old libraries and foundation classes. These aren’t just dependencies; they’re the basis of how these old applications were designed.
“A lot of these companies have a lot of intellectual property in the software they have built over the years and years,” said Burns, “and they want to preserve that. They don’t want to rewrite that. Maybe there are unique capabilities: [maybe] they’re using C++ libraries that were only built for Windows. Or their development environments — they really like Visual Studio and C#. So this enables them to take those tools that they know, and bring them into a microservices world.”
But to accomplish this, Microsoft introduces the Windows world to a strange and foreign object: orchestration.
“Systems like Kubernetes and containers provide a lot of automation for free,” said Burns. “Deployment workflows, health checking, automated restart, load balancing — a lot of the pieces that allow you to build a reliable, distributed system, regardless of what operating system your application happens to be running in.”
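The features Burns lists map almost one-to-one onto fields of a Kubernetes manifest. A minimal sketch — the app name, image, and health endpoint here are invented for illustration:

```yaml
# deployment.yaml — hypothetical manifest; names and image are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-web
spec:
  replicas: 3                        # deployment workflow: roll out three copies
  selector:
    matchLabels: { app: orders-web }
  template:
    metadata:
      labels: { app: orders-web }
    spec:
      containers:
      - name: web
        image: contoso/orders-web:1.0
        livenessProbe:               # health checking; a failing probe
          httpGet:                   # triggers an automated restart
            path: /healthz
            port: 80
---
apiVersion: v1
kind: Service                        # load balancing across the replicas
metadata:
  name: orders-web
spec:
  selector: { app: orders-web }
  ports:
  - port: 80
```

Nothing in this manifest says what operating system the container image was built against — which is exactly the decoupling Burns is describing.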
Until June 2016, Burns was Google’s lead engineer for the Kubernetes container orchestration engine. And now, with the help of partner firm and commercial Kubernetes vendor Apprenda, Microsoft is actively pursuing research into a once-unthinkable prospect: enabling Kubernetes to orchestrate Windows and Linux applications simultaneously.
“We want Windows developers to start getting into the same frame of mind where Linux developers are,” declared Michael Michael, Apprenda’s senior director of product management, speaking with The New Stack. “Distributed apps, microservices, applications that are thin, agile, elastic; that understand the fact that they can be scalable and that could have many interconnected components; that understand that they can fail and need to handle that failure gracefully and remediate themselves — we want Windows developers to start developing apps the same way.”
It’s about as plainly stated, though as imponderable, a request as a petition to the citizens of the state of Texas to drop their meat dependencies and become vegetarian.
“Part of the reason why some of these Windows apps are, for lack of a better word, heavy — in terms of functionality, size, requirements, and so forth — is because of legacy infrastructure requirements, or other legacy applications that they need to connect to,” explained Michael. “The other part is based on what the thinking about the architecture of those applications was, five or ten years ago.”
For their agenda to succeed, Apprenda and Microsoft are betting on one factor to remain the same as it always has been: the pedantic, deliberate, over-measured pace with which enterprises have adopted new versions of Windows Server. The newest one today — Windows Server 2016 with support for Windows Containers and Hyper-V Containers — may take at least two years before it attains parity with all the other Windows Server versions in the field.
In the sustaining interval, Apprenda’s Michael believes there’s an opportunity to re-educate Windows developers on how to produce real microservices, using container-ready foundations such as .NET Core, and implementing hybrid cloud platforms such as Azure Stack that will presumably, hopefully, imperatively, have already been released by Microsoft.
“A traditional IT staff does take a while to migrate to the latest and greatest Windows Server,” observed Apprenda’s Michael (whose signature is “M2”). “However, with advancements and the amount of servers being deployed on public cloud infrastructures like AWS, Google Compute Engine, and Microsoft Azure, the IT staff, and the environments and workloads being deployed on those servers, don’t carry the same traditional delays that you see in enterprise IT.”
At the Cloud Native Computing Foundation’s KubeCon conference last November, Michael gave one of the first public demonstrations of an implementation of Kubernetes on Windows Server, staging and managing Windows Containers (one of two classes of containers from Microsoft, this one based on Docker and not requiring the Hyper-V hypervisor). Despite some visible resistance from the Demo Gods, Michael did show a .NET Core-based Web app — a guest book function downloaded from GitHub — not only running but scaling up.
The greater goal, however, is for Kubernetes to stage both Windows and Linux containers simultaneously, regardless of the OS that’s hosting the orchestrator.
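In a mixed cluster, that co-scheduling is expressed through node labels: each pod declares which operating system it requires, and the scheduler places it accordingly. A hedged sketch (the label key has changed across Kubernetes releases, and the image name is invented):

```yaml
# Pod spec fragment — steers a Windows container onto a Windows node;
# image name is illustrative
spec:
  nodeSelector:
    kubernetes.io/os: windows        # earlier releases used beta.kubernetes.io/os
  containers:
  - name: dotnet-app
    image: contoso/dotnet-app:1.0
```

A Linux-targeted pod in the same cluster would simply carry `kubernetes.io/os: linux`; the operator’s tooling — `kubectl`, dashboards, rollout commands — is identical for both.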
“The transition that’s going to happen here,” Michael told us, “is not that much different than the one that started from physical servers to virtualization. Certainly, there are types of skills that are going to be required by some folks who operate this hybrid environment, that are going to change a little bit. They’ll have to learn some new technologies — Kubernetes, Docker, Azure Container Service, and technologies similar to that.
“Our sector of the IT landscape — and I’m using ‘our’ here fairly freely,” he continued, “is transitioning every few years to new technologies that require folks to pick up new skills. And the responsibility lies here on the Kubernetes community to make sure they have adequate training and knowledge available to these folks, to pick up the new technology, understand it, and be able to use it.”
Accomplishing this, Michael went on, requires a certain amount of leverage of existing tools and environments to carry developers into this new way of working. From the very beginning, leverage has been Microsoft’s clear specialty. But even with the tools and human power at its disposal today, this transition may be the tallest order it has ever undertaken.
The Cloud Native Computing Foundation is a sponsor of The New Stack.
Feature image: A block diagram of a typical enterprise network as Microsoft perceived it, from a demonstration presented at Microsoft’s PDC 2005 conference in Los Angeles, September 2005.