Windows Server Chief Mike Neil: Microsoft’s Tipping Point for Containers
In every installation of Windows there is a local database called the system registry. Back in the era of Windows NT, this was an innovative thing: a single, centralized store that recorded the state and preferences of every installed application, that listed the hooks and handles for all the libraries handling remote procedure calls, and that provided all the actions (the “verbs”) that a program could perform on a file from outside the application.
The Registry made Windows into a switchboard handler. It made the operating system indispensable to its applications. And it has come back to bite Microsoft, because it ties apps to the devices upon which they’re installed at a fundamental level.
Raising the Drawbridge
Microsoft’s move away from localized architecture began as far back as 2005, but proceeded slowly. One of its engineers created a tool called PowerShell that made it obvious servers needed to be administered remotely via a command line. While some in the company declared the tool revolutionary, others refused to acknowledge its existence.
Eight years ago, Mike Neil led a team of engineers driving Windows Server to adopt virtualization at its core. But following a pattern Microsoft would continue into Windows 8, he was forced to ship products omitting features that weren’t ready for prime time, rather than wait until they were.
Now the revolutionaries are the ones in charge, with PowerShell creator Jeffrey Snover becoming Windows Server’s lead architect, and Mike Neil becoming the product’s general manager. They have made the U-turn, partnering with the open source community and pledging to include open source components with the next version of Windows Server. They now share their news with the community and with customers quite openly, a degree of candor that still seems to catch some of the company’s own representatives off guard.
At the company’s Ignite 2015 conference in Chicago last week, I asked Mike Neil when it first became clear to him that containerization was the move that Windows Server had to make.
“It was probably about two-and-a-half years ago,” Neil responded. “We developed that technology [Drawbridge] and used it on a couple of our internal services. I think that success that we had led us down that path.”
“Drawbridge” was a kind of virtualization-driven isolation mechanism created by Microsoft Research, originally as a test bed. Its design enabled processes that its creators dubbed “picoprocesses” (“microservices” sounds too much like the company trademark) to run atop minimal library OS kernels, communicating with the host through a narrow set of APIs. Without even realizing it at first, Microsoft was creating a containerization architecture.
“Drawbridge was designed as this sort of security layer that was deeply embedded at the kernel level,” said Neil. “VMs have a very low-level interface with the CPU and memory and disk and networking, as the abstraction layer. Drawbridge sort of moved it up, but it was still very far down into the kernel.
“What we realized was that this was still too low in the stack. So we did the container technology that’s [now] Windows Server containers, and which Hyper-V containers are based on as well, and we pushed them further up in the stack — file system layers, networking layers — within the OS. We learned from that experience to build what you see demonstrated as part of Windows Server now.”
Neil concedes that the crux of the job — recomposing the entire Windows Server ecosystem as services that can run in isolation, while the Windows 10 team devised yet another “universal” platform for client apps — remains his team’s most pressing — even “painful,” to use his word — challenge. He acknowledges the questions that remain on the table, and for now, leaves them on the table for everyone else to see as well.
“Within a Windows environment, a lot of the .NET-era applications were fairly vertically integrated. How do you break down those applications, compose them into a set of services, and use them as the building blocks? How do we take those learnings and make it to what you see nowadays: start a new project in Visual Studio and you’ve got a deployment in Azure with multiple-tiered applications? That’s the challenge for us: Take those things, learn from them, but make them approachable and deployable by the end customer … How do we make these things easily deployed, easily managed, if you don’t go through an .MSI installer, you don’t modify a lot of local state with the Registry, services that have endpoints that are easily discoverable, standards-based REST APIs for communication protocols, and things like that? All those patterns, all those pieces of functionality are things that are relatively new for us.”
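Neil’s checklist — services with easily discoverable endpoints and standards-based REST APIs, instead of .MSI installers and Registry state — can be sketched in miniature. The following Python example is a hypothetical illustration of that pattern, not Microsoft code: a tiny HTTP service whose `/discover` endpoint lets clients enumerate its routes at runtime, rather than reading installed state off the local machine.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Routes the service exposes. A client finds them at runtime via
# /discover, rather than reading installed state from a local registry.
ROUTES = {
    "/health": lambda: {"status": "ok"},
    "/version": lambda: {"version": "1.0"},
}

class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/discover":
            body = {"endpoints": sorted(ROUTES)}
        elif self.path in ROUTES:
            body = ROUTES[self.path]()
        else:
            self.send_error(404)
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), ServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

base = f"http://127.0.0.1:{server.server_port}"
discovered = json.load(urlopen(base + "/discover"))["endpoints"]
health = json.load(urlopen(base + "/health"))
server.shutdown()

print(discovered)  # ['/health', '/version']
print(health)      # {'status': 'ok'}
```

Because the client learns the service’s capabilities by asking it, nothing about the deployment depends on state written to the machine at install time — which is exactly the decoupling Neil describes.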
Friendship in Isolation
In a tightly coupled, centralized architecture such as Windows has historically been, the failure of a single program or process ran the risk of toppling the entire system with it. Let’s face it: most everything you’ve heard about security “exploits” concerns attempts to make a process fail in such a way that enough of the lower levels of the system are exposed for payloads to be delivered and unchecked code to run.
By comparison, in a system full of isolated processes, any single failing process can be quarantined and removed, and the rest of the system is shielded from the collateral damage. The “domino effect” can’t happen, because processes are not chained together. Running its test environments on Drawbridge led Microsoft to the obvious question: Why can’t the rest of Windows be structured like this?
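The “domino effect” argument is easy to demonstrate with ordinary OS processes, the same kernel primitive containers build upon. This Python sketch is an analogy of my own, not Microsoft’s Drawbridge code: three “services” run in separate processes, one crashes deliberately, and the supervising process simply records the failure and carries on.

```python
import subprocess
import sys

# Each "service" runs in its own process. One of them crashes on purpose.
workers = {
    "billing": "print('billing ok')",
    "reports": "raise RuntimeError('corrupted state')",
    "search":  "print('search ok')",
}

results = {}
for name, code in workers.items():
    # A failure stays contained in the child process; the supervisor
    # just records it and moves on to the next worker.
    proc = subprocess.run([sys.executable, "-c", code], capture_output=True)
    results[name] = "ok" if proc.returncode == 0 else "failed (isolated)"

for name, state in results.items():
    print(f"{name}: {state}")
```

The crash in `reports` never reaches `billing` or `search`, because the processes share no state. Containers apply the same principle with stronger isolation of the file system and network.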
The immediate upshot of this realization was that Microsoft could release features into production when they were reasonably ready, even though the code for those features had not yet been technically perfected, without fear of each and every bug inspiring the creation of another clandestine NSA project.
“Looking back in history, we would build Windows and release it every few years,” said Neil. “And we’d have big conferences like this, and it’d be a good opportunity to go talk to a bunch of customers. We’d go back, put our plans in place, go develop the code, test the code really well, and then release it, and it might be two years after that conversation happened with the customer. And it might be another year before a customer adopted that version of the product, and [was] out there using it.
“The difference you see in services is, you can come to me today and we can have this conversation, you can say you’d really like this feature, we can go back and work on that feature, and once it’s implemented and rolled into the production environment, you’ve got immediate access to it. Then everybody who’s using the service has immediate access to it.”
Two weeks ago, Docker Inc. CEO Ben Golub told a crowd of Windows developers how stunned his company was at Microsoft’s eagerness to adopt Docker containers. Last week, Mike Neil told us part of that eagerness was due to his team’s already having come to the conclusion, prior to even meeting Golub, that Windows Server would adopt containers in some fashion. Subsequently, Neil said, his team met with both the Kubernetes and Mesosphere development teams to discuss scheduling and orchestrating containerized processes in large environments.
“I think Microsoft’s becoming much more active in the open source community,” remarked Neil, “with contributions like the .NET Framework to open source, got that group of people thinking differently about Microsoft.”
But Neil also sees a certain evolution on the part of the creators of the technologies with which Microsoft is now catching up.
“A lot of those companies are very interested in providing their technologies to the enterprise. The startup world that bore that fruit, and built those solutions, is a good one as well,” he said. “But it’s not the place where, if you’re a business, you’re going to go and make a lot of money. You’re not going to make a lot of money selling a piece of software to Twitter.”
Feature image via Flickr Creative Commons.