There has been very little discussion of Microsoft Windows in The New Stack, for reasons that are obvious to its regular readers. Over at least the last five years, perhaps longer, Windows has failed to evolve in most of the relevant categories, and has instead been forced to concentrate on irrelevant and superfluous matters. But as my three or four long-time readers would point out, once my work began showing up here last December, it was only a matter of time before Windows became a topic.
It’s a topic now, because of a realization: To stay relevant in the modern world of work, Windows has to find a place for itself in the new stack (both the publication and the actual stack) even if it has to metamorphose into something else completely. A metamorphosis of sorts is now under way, beginning with last October’s acceptance of Docker, and continuing with last week’s confirmation of various rumors (thanks again, Mary Jo) of a forthcoming minimalized OS called Nano Server.
The forthcoming changes to Windows — both the confirmed ones, and those yet to be made for Windows to catch up with the rest of the world — are such foreign objects to certain publications that they remain bewildered as to where to begin.
They could begin here: The world of servers is in mid-transition to microservices architecture. Microsoft can either give up now or steer Windows Server through a stunning course change.
Container A and Container B
Mike Neil, the long-time Microsoft veteran now serving once again as Windows Server general manager, portrays the move to Nano Server as the next logical step in the evolution of virtualization — the culmination of a 2011 research project code-named “Drawbridge.” While Microsoft may have had many of the right ideas, Linux arguably started the whole conversation when control groups entered the mainline kernel in 2008.
In a conversation with The New Stack, Neil acknowledges that Windows Server is effectively adding a capability Linux brought to light. He goes on to explain, however, that a new version of Docker Engine will support two types of containers in Windows Server: the so-called “Windows Server Containers” (effectively Docker for Windows), and a second form called Hyper-V Containers.
Why does Windows Server need two types of containers that can both be recognized by Docker Engine?
“The Hyper-V Containers provide an additional layer of security capability,” Neil tells The New Stack. “They provide a boundary that’s supported by the underlying hypervisor.”
Applications will not need to be developed for one class of container or the other, Neil remarks. The differences are configurational, and perhaps largely aesthetic, dealing with how the containers relate to the underlying operating system.
“One of the challenges with containers today, both with Linux and Windows Server,” he tells us, “is that you’re sharing a significant portion of the operating system between the base image and the image running within the actual container. So if I upgrade that operating system, then I run into this challenge of needing to upgrade the container, because those two things are in a shared environment.”
The new second class of container, the Hyper-V Container, enables situations where different versions of base images can run simultaneously. It speaks to Microsoft’s need at this point to resolve issues peculiar to the architecture of Windows, in a way that still enables Windows applications to run without being redesigned from scratch.
“This is pretty important for enterprise customers especially, who are trying to deploy into their environments where they want to have independent lifecycle management,” he says, “for patching, updating, and compliance reasons. And also, it provides a security boundary that they know and trust today: Hyper-V Virtualization Security, which is based off Intel’s VT technology.”
How would Windows Server admins accomplish this independent lifecycle management to which Neil refers — a task with which they’re already pretty familiar — in a world newly infused with hundreds, maybe thousands, of new Nano Servers running in parallel? You can’t just expect to address each one sequentially using one of Microsoft’s famous “Wizards.”
Neil says he’s noticed how DevOps practitioners in the Linux field utilize lifecycle management to maintain multiple instances of containers, even as they’re actively being developed.
“The thing that we realize is that there are significantly more operators in the world than there are developers,” he notes.
As applications move more toward an operational role, under lifecycle management, they become less actively developed. This gives rise, in Neil’s view, to two classes of configurations, which would be maintained through a type of lifecycle manager.
The State Projector
Here is where we discover the role Microsoft perceives for the next version of Windows Server in this environment: as the platform for the lifecycle manager, especially for the new Hyper-V Container class. From here, a DevOps team would oversee the parallel containers for applications during what I’ll call the “gestation period,” and transition their configurations to a kind of maturity mode as they become ready for a production state.
This level of oversight may be necessary, among other reasons, because a Hyper-V Container will not be a VM as we have come to know it — not even like a Docker container.
“It is a very different container,” explains Mike Neil. “From an operating environment that is projected inside of that world, it is different from a virtual machine. A virtual machine, as its name implies, has its abstraction layer at the software/machine interface. It projects into that environment a virtualized CPU; virtualized memory; storage at a very low level, typically a block storage solution; and networking, typically at a packet level. Whereas Hyper-V Containers are really designed to provide the same abstraction layer that you have with Windows Server containers, and virtualize further up the stack in the network and further up the file system. So they provide a very different construct.”
Neil implies a relationship between a Hyper-V Container, a separate Hyper-V virtualization layer that a Docker container does not have, and the existing Hyper-V hypervisor of Windows Server. As you may have already surmised, this makes the new class of containers very Windows-specific, or at the very least designed in a way that restricts its portability beyond the boundaries of Windows Server.
There may be sound architectural reasons for this strange kind of “bound” container, having more to do with the design of existing Windows applications than with the evolution to microservices. Microservices are designed to be stateless, and to assume nothing about the state or context of their broader application; they are not expected to depend upon a shared, system-wide configuration database.
Such is not the case at all with the typical Windows application, which relies on one of the most cumbersome databases ever contrived: the System Registry. In a world where applications were rooted to their operating systems, and could expect to share the processor only amongst themselves, the Registry was designed to record the configuration data and sharable states of all applications installed in the system.
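To make the contrast concrete, here is what that shared state typically looks like: a minimal, hypothetical Registry export for an imaginary application (“ExampleApp” and its values are invented for illustration). Settings like these live in a machine-wide hive, bound to one installed operating system, rather than traveling with the application:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\ExampleApp]
"InstallDir"="C:\\Program Files\\ExampleApp"
"ServerName"="db01.internal"
"FirstRun"=dword:00000000
```

An application that reads its configuration this way implicitly assumes the machine it was installed on still exists, which is precisely the assumption a container image cannot honor.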
Only one year ago, after having seen a thorough demonstration of stateless architecture in microservices, I concluded that a containerized Windows would be impossible for this reason alone. Microsoft, in an effort to prove me wrong, has been busy devising a way to make a non-containerizable application (now, there’s a word for Norm Crosby) containerized … if only virtually.
“There’s a companion piece of technology here: our Desired State Configuration mechanism,” Microsoft’s Mike Neil informs me. “It’s out as part of PowerShell today. Instead of relying on the Registry and creating an image where I implicitly set all those values and configure it the right way and then use it, DSC allows us to explicitly call out what those things should be. Then when you fire up a container and fire up Nano Server, you can run a DSC script against that, and it will reconfigure that image to go to that desired state.”
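The declarative approach Neil describes can be sketched in DSC’s own PowerShell syntax. In the fragment below, `Registry` and `WindowsFeature` are built-in DSC resources, but the application name, key, and values are invented for illustration; treat this as a sketch of the idea, not a recipe from Microsoft’s documentation:

```
# A minimal, hypothetical DSC configuration. Resource types are real;
# the application names and values are invented for illustration.
Configuration ExampleAppState
{
    Node "localhost"
    {
        # Declare the Registry value the application expects,
        # rather than baking it into a golden image.
        Registry ExampleAppServerName
        {
            Key       = "HKEY_LOCAL_MACHINE\SOFTWARE\ExampleApp"
            ValueName = "ServerName"
            ValueData = "db01.internal"
            Ensure    = "Present"
        }

        # Ensure a required Windows feature is enabled.
        WindowsFeature WebServer
        {
            Name   = "Web-Server"
            Ensure = "Present"
        }
    }
}

# Compiling the configuration emits a MOF document, which the
# Local Configuration Manager then applies to reach the desired state.
ExampleAppState
Start-DscConfiguration -Path .\ExampleAppState -Wait
```

The point of the design is in that last step: the same script can be replayed against a freshly fired-up container or Nano Server instance, projecting the state the application expects instead of assuming it was already there.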
Microsoft has a knack for naming esoteric server technologies like euphemisms from a sci-fi novel, so Desired State Configuration sounds straight out of Philip K. Dick. It wasn’t created necessarily for containers, but if it works, it could foreseeably enable scalable services (maybe no longer “micro-” but services nonetheless) to perceive the piece of the Registry that pertains to them, as a kind of illusory projection on the part of an active agent.
If it works, it could enable existing Windows applications to be scalable in elastic cloud architectures. But as Mike Neil’s revelations made clear to us, that’s a huge “if.”
The phrase Mike Neil omits from much of this discussion is “orchestration layer.” Obviously Windows Server will find itself requiring counterparts for the tools the open source community has already been availing itself of, such as Mesos for orchestration and Aurora for scheduling.
But because these tools are open source — especially because they’re Apache projects — they’ve had the benefit of contributions from implementers who’ve successfully made adjustments for their own purposes. Windows Server has always been a closed-door project. And although Docker Engine remains open source and overseen by the community, the Windows Server client for Docker Engine will be contributed by Microsoft.
What has yet to be determined, even after the new server systems have already been announced, is how Microsoft perceives the relationships between itself and developers, and between itself and DevOps professionals, as this new Docker-centric operating system becomes a commercial reality. Professionals are more accustomed these days to contributing to the process. Microsoft likes to host conferences for professionals — it plans to host two, Build and Ignite, in just a few weeks’ time — and has hosted open source projects in the past, but never with the name “Windows.”
Microsoft could very well continue under the presumption that the solutions to this and other architectural issues can only emerge from deep within the caverns of its research labs (as was supposed to have been the case with “Drawbridge”). If that ends up being the case, then no matter what changes are made to Windows Server, businesses accustomed to open source will no longer wait patiently for their allotted updates.
They’ve already had their revolution, after all. The new stack is already here.
Feature image via Flickr Creative Commons.