
Finally, Linux Containers Could Run on Windows with Docker’s LinuxKit

18 Apr 2017 12:54pm

The one thing Windows containers have not done since they were first demonstrated two years ago is run an application built for Linux, in a container built around Linux, in a Windows environment. Such a feat would require a so-called Linux subsystem. And while Microsoft did release such a thing a year ago, strangely, it did not resolve the container portability question.

April seems to be Docker’s and Microsoft’s month for making headway together. Tuesday morning at DockerCon 2017 in Austin, Docker Inc. Chief Technology Officer Solomon Hykes announced the open source launch of a set of tools for building out a Linux subsystem, jointly produced by Docker Inc. and Microsoft, under Docker’s stewardship. Called LinuxKit, it provides just enough of a Linux-based platform at a layer beneath the application for a container to run a Linux-based application on any operating system platform, including Mac OS and Windows; on any major cloud platform, including AWS; or, amazingly, on bare metal.
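That “just enough Linux” is assembled declaratively. Below is a minimal sketch, assuming the YAML format LinuxKit used at launch; the image names and tags here are illustrative placeholders, not pinned releases:

```yaml
# Hypothetical linuxkit.yml: a Linux system just large enough to boot
# a kernel and run one containerized service.
kernel:
  image: linuxkit/kernel:4.9.x    # kernel to boot (illustrative tag)
  cmdline: "console=tty0"
init:
  - linuxkit/init:latest          # minimal init process
  - linuxkit/runc:latest          # container runtime
onboot:
  - name: dhcpcd                  # one-shot task run at boot: network setup
    image: linuxkit/dhcpcd:latest
services:
  - name: nginx                   # long-running service, itself a container
    image: nginx:alpine
```

At launch, a file like this was built and booted with the accompanying `moby` command-line tool (later renamed `linuxkit`), producing an image that could target Hyper-V on Windows, HyperKit on the Mac, a cloud VM, or bare metal.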

“Linux is obviously a secure operating system,” Hykes declared. “It didn’t need our help for that. But, here’s the thing: When you make the assumption that everything is a container, then you can take security to the next level. You can make a lot of assumptions. For example, you can make specialized patches and configurations that really harden the system even further.”

It took Hykes some time to get around to substantiating his point, but the substance may yet win over the skeptics. When containers run in a self-contained universe hosted by some other system — say, another Linux OS, or a first-generation VM — the most serious potential vulnerabilities concern the transactions that take place across the boundary between host and guest.

In an environment where all the systems are containerized, no such boundaries exist. Therefore, any component that exclusively addresses the issue of container security and authenticity — for example, Docker Notary — may be leveraged as a security gateway for the underlying systems as well. Imagine role-based access to the container that has the Linux subsystem in it, and you get the idea.

“We don’t think Docker should take the responsibility for securing your Linux subsystem,” admitted Hykes. “In fact, we don’t think any single company should take sole responsibility for that. Linux is too big, it’s too important. So it’s really, really important that the security process you rely on be open and community-driven. And there’s been a lot of really good work in the Linux community over the last few years to build that. Instead of doing everything in our little corner, we’re joining and participating in these open processes from day one.”

It’s a return not just to the “open” but to the inviting spirit Docker Inc. adopted in 2015, and which it appeared to abandon last year when the company fused Docker Swarm into its enterprise-grade platform. Indeed, Hykes once again spoke of the container ecosystem at large as a community of components, to which Docker Inc. contributes only a part, and no single one of which forms the axis around which everything else revolves.

It is the company’s clear strategy to restrict Kubernetes’ — or anyone else’s — claim on the container ecosystem at large. And it is not lost on anyone that the party standing next to Docker on stage for the revelation of this strategy is Microsoft.

“As you know, Docker started out as a project targeting Linux,” stated John Gossman, representing Microsoft’s Azure core development team. “Docker combines some complex kernel features into a simple-to-use development experience that we all love. And the Windows team wanted that same developer experience for Windows developers.”

Gossman told the now-familiar story of why Microsoft developed Windows containers (now called Windows Server Containers) and Hyper-V containers as two separate formats. The former shares the Windows kernel the way a Linux-based Docker container shares the Linux kernel, while a Hyper-V container enables a guest OS to run a native application on a different host platform, through a virtual machine. For that reason, in Windows Server 2016, Microsoft created a kind of isolation for Hyper-V containers that minimized the footprint of that VM.

By incorporating LinuxKit, Gossman said, this isolation is being extended to Linux-based containers.

What Gossman’s demo proved well enough was the theoretical capability for a developer on a Windows platform to use Windows tools to develop, build, assemble, and run a Linux application within a Linux container image, inside a Hyper-V isolation wrapper, on a Windows platform. His demo involved an image of the BusyBox embedded Linux environment, and Gossman showed that its Linux kernel was active and running on his Windows laptop. It also demonstrated that any Linux — not just an enterprise-grade distribution, not just a minimized system like CoreOS — could serve as the application’s host.

Certainly, there had to have been more than a few developers in the audience contemplating the possibilities of running an SELinux-hardened kernel in that isolated role.

“Your platform is only as secure as its weakest component,” noted Docker Inc.’s Hykes, thereby turning on its head the argument that Docker simply offloads security responsibility onto Linux.

“You’ve got to worry about every single layer in there,” he continued. “And Docker has a lot of components. We’ve been worrying about the security of everything, and how it fits together.”

Earlier in his keynote session, Hykes made the case that Docker’s entire evolution has been “complaint-driven.” Developers complain, engineers assemble and make corrections, and the platform improves. Developers did complain two years ago, loudly. If history is to prove Hykes correct, Docker Inc. will need to be remembered for how it responded to this complaint.


