Microsoft’s Lightweight OS and its Deep Linux Connection
Nine years ago, when a distinguished Microsoft engineer introduced an entirely new kind of scripting language called PowerShell to his company, many of his own superiors didn’t even know what it was or what it meant. It was the very beginning of a long, drawn-out revelation: that huge, monolithic operating systems would not scale well in modern data centers.
PowerShell triggered the creation of what was, for Microsoft, an alternative command-line-only version of Windows Server, called Server Core. The Windows Server that Microsoft had preferred we deploy throughout data centers was rooted in the same graphical services that drove Windows clients. It was too heavy a load for Windows to carry, in a world that needed nimbleness.
So now Microsoft is making its own counterpart to CoreOS, the lightweight operating system developed by the startup of the same name and modeled after Google’s Chrome OS. Like Chrome OS, CoreOS updates itself automatically, meaning the developer has an OS that does not require manual upgrades. The company describes it as an operating system for Linux server deployments on a massive scale.
Microsoft calls its technology Nano Server. It’s geared for a world of microservices and portable process containers. At its Ignite 2015 conference in Chicago, company engineers were demonstrating a cross-platform orchestration platform — Operations Management Suite — not the System Center to which Windows Server has traditionally been bound, but an overseer and a portal for an analytics engine that scans logs in real-time and produces graphical dashboards.
And it’s producing an extension to Windows Server, called Azure Stack, which, instead of hybridizing on-premise storage, compute and networking onto a public cloud, hybridizes public Azure-based resources by extending them on-premise. Azure Stack lets organizations manage on-premise services with the same tools, at the same time, as off-premise ones.
What company is this again?
Jeffrey Snover, the PowerShell creator who is now Windows Server’s lead architect, found himself at Ignite 2015 Tuesday morning explaining Nano Server to members of the press, some of whom may have been encountering the concept of microservices architecture for the very first time.
Snover explained that, for now, Nano Server will be geared toward two service profiles: one dealing with cloud OS infrastructure, such as clustered hypervisors and clustered storage, and the other dealing with cloud-native apps. Although Azure was one of the first PaaS services for deploying apps to the cloud, here Snover is referring to a new class of apps (for Microsoft) that will be both developed and deployed on Azure, within a new Azure-based development environment, outside of the conventional client-based Visual Studio.
It’s these new apps which will serve as Windows developers’ entryway to the world of containers, which is something else Snover had to explain from the beginning.
“Containers are a new way to be able to run things,” he explained, with the sweeping hand gestures of a TV meteorologist that helped make him so popular at Microsoft conventions. “Going forward, server applications will be written to two profiles: the Nano Server profile, which is cloud-optimized; and then there’s the Server profile that’s focused on maximal compatibility.”
Developers writing for the Nano Server profile will be guaranteed compatibility with pre-existing Windows Server installations, because Nano Server is effectively a subset of Windows Server. Still, until developers become more accustomed to the concept of microservices, there may be a significant adjustment period. Windows developers are accustomed to having libraries of pre-existing functionality available to their code in a global scope, and the relationships between libraries and the client code that calls them are closely coupled. With microservices, there is no global scope.
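That difference can be sketched in a few lines. The following is a minimal, hypothetical illustration — plain Python rather than anything Nano Server-specific, with all names invented for the example: the same “add” capability exposed first as a directly linked function in the caller’s scope, then behind an HTTP boundary, where the client shares no code or scope with the implementation.

```python
# Minimal sketch: an in-process library call versus the same capability
# behind a microservice-style HTTP boundary. All names are illustrative,
# not part of any Microsoft API.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse
from urllib.request import urlopen

# Library style: the function lives in the caller's global scope.
def add(a, b):
    return a + b

# Microservice style: the only contract is the network interface.
class AddHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expects a path like /add?a=2&b=3
        q = parse_qs(urlparse(self.path).query)
        result = add(int(q["a"][0]), int(q["b"][0]))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"result": result}).encode())

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), AddHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The client knows only a URL; it shares no scope with the service.
reply = json.loads(urlopen(f"http://127.0.0.1:{port}/add?a=2&b=3").read())
print(reply["result"])  # prints 5
server.shutdown()
```

In the second style, the service can be redeployed, rewritten or scaled out independently, so long as the network contract holds — which is the decoupling the microservices model trades global scope for.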
“If you write an application to Nano Server or to ‘full server,’ the question is, where do you run it? You run it in a physical host, a virtual host, or in a container,” explained Snover. There will be two types of containers for both Windows Server and Nano Server: the same Docker containers developed for Linux, and a type developed by Microsoft for its own hypervisor platform, called Hyper-V Containers.
“These provide additional isolation,” explained Snover, “and they’re really used for things like multi-tenant services, or multi-tenant platform as a service, where you’re going to be running code that might be malicious that you don’t trust.” The concept is based on a type of technology that was the subject of several Microsoft Research experiments in the last decade, called “Drawbridge” — a kind of containerization (now that we know what to call it) mainly for purposes of process isolation, sandboxing untrusted apps that could crash the system.
The Long Road to Decoupling
As originally conceived, Nano Server was designed to be managed from a remote instance of PowerShell, using the verb-noun syntax Snover dubbed “cmdlets” (command-lets). Tuesday, Snover showed off a visual portal that lets browser users monitor and manipulate a Nano Server instance directly.
I asked Snover and his Microsoft colleague, Windows Server General Manager Mike Neil, whether at some point they plan to adapt their Nano Server profile to run processes in a more microservice-oriented fashion, like CoreOS for Docker on Linux.
“The model that we have is very much like containers within the Linux world,” responded Neil, “where we’re doing operating system-level virtualization. You have a shared-kernel infrastructure for all the shared containers on that physical machine.”
With Linux, Neil said, when you upgrade the kernel, you subsequently upgrade the containers to match, which may cause compatibility issues. The same approach would create issues for processes written for Windows Server, which will certainly have underlying dependencies. While Windows Server containers (Docker containers) will run on a shared kernel infrastructure, Hyper-V containers will allow a different base image for each container.
“So we’ve actually jumped a little bit ahead of where the Linux community is on this,” remarked Neil, “by providing both of those models.” He gave a nod to Canonical’s recent efforts toward some of those same ends with its LXD system.
To which Jeffrey Snover added, “I think containers are a deeply disruptive technology. With a disruptive technology, it takes the community a while to figure out where its natural strengths and weaknesses are. I think you’ll see a lot of people, initially, try to treat them like lightweight virtual machines. And I think, over time, you’ll see more people adopt the model of microservices, where the container is more like a process and less like a VM, and you have lots of them.”
Snover went on to stress the importance of decoupling microservices from the environment in which they run — a concept which, only a few years earlier, would have been a seemingly sacrilegious topic to discuss openly at a Microsoft conference.