Running a virtual machine (VM) inside a VM might sound inefficient, but hardware-assisted virtualization on recent CPUs keeps the performance penalty moderate, and this nested virtualization unlocks a lot of options, both for traditional applications and for applications that mix and match containers and VMs.
Microsoft Azure already has the feature live (it’s part of the Windows Server 2016 feature set that Azure runs on, and it’s available for recent VM instance types in all regions). Google Compute Engine recently unveiled a beta version for KVM and Linux VMs running on instances with Haswell or newer CPUs, and one of the reasons for Amazon Web Services’ upcoming switch from Xen to KVM for its C5 instances may well be to enable nested virtualization (especially since Amazon contributed some related code to the Linux kernel earlier this year).
The idea isn’t new; when Microsoft bought Virtual PC from Connectix in 2003, the emulator could already run a copy of itself. But it was Hyper-V Containers — which combine the advantages of Docker containers with the isolation of VMs — that gave the Windows Server team the impetus to deliver a feature they’d wanted to create for a decade.
“What we found is it’s this very handy, helpful tool, especially for developers, but the use cases for nested virtualization explode very quickly,” Microsoft’s Taylor Brown told The New Stack.
Lift, Shift and Append
“Our primary stack on Azure is all virtualized,” Corey Sanders, Microsoft’s head of product for Azure Compute, pointed out. “Some customers have the need to take advantage of some virtualization technologies and capabilities and our host won’t expose those directly.”
The most obvious uses are QA and running training labs, where you want to move an existing, virtualized environment to the cloud without making any changes; those are the same examples Google gives for nested virtualization, citing services like Functionize.
But the same is true of production workloads, Brown said.
“I can move this entire environment into Azure that I couldn’t do before. I’ve got a set of VMs that are preconfigured and I’ve validated them and they work; now I don’t have to worry about changing anything, I just lift the entire image and run it in Azure and all of those VMs will keep working the way they did before. Now I can move entire applications that can’t easily be containerized or lifted and shifted; that whole piece of the application just runs in a set of VMs, nested, even if it’s one-to-one.”
Unlike using, say, Oracle’s Ravello nested virtualization to run VMware workloads on AWS without changes, there’s also an opportunity to extend the app and replace one-off “snowflake” environments by codifying them.
Nested virtualization also gives third-party software vendors a way to package up existing applications and services for the cloud. “We have a partner who had built a solution around Hyper-V Replica, which is an API that enables you to take a snapshot and do live replication of Hyper-V to another Hyper-V instance,” Sanders confirmed. “They want to enable that solution in Azure, and with nested virtualization, they can have their customers deploy a VM inside our VM and take advantage of that technology that’s built into Hyper-V.”
Some customers have been asking for nested virtualization to give them more control over their VMs on Azure. “With nested virtualization, you could create whatever instance sizes you want inside our size and even configure oversubscription as part of that to try and utilize cloud hardware in more efficient ways.” That requires what Sanders tactfully calls “a lot of sophistication” and also sacrifices the simplicity and agility of cloud IaaS.
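In practice, that “instance sizes you want inside our size” approach means installing the Hyper-V role inside an Azure VM and carving it up yourself. A rough sketch, assuming a Dv3 or Ev3 instance running Windows Server 2016 (the switch, NAT and VM names here are illustrative, not from Microsoft’s guidance):

```powershell
# Inside an Azure Dv3/Ev3 VM running Windows Server 2016 (illustrative sketch).
# 1. Install the Hyper-V role on the Azure VM (this reboots the VM).
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# 2. Nested guests can't use the Azure network directly, so create an
#    internal switch and NAT their traffic through the host VM.
New-VMSwitch -Name "NestedSwitch" -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 `
    -InterfaceAlias "vEthernet (NestedSwitch)"
New-NetNat -Name "NestedNAT" -InternalIPInterfaceAddressPrefix 192.168.0.0/24

# 3. Create nested guests of whatever size you want -- including assigning
#    more vCPUs across guests than the host VM has, i.e. oversubscription.
New-VM -Name "Nested01" -MemoryStartupBytes 2GB -Generation 2 `
    -SwitchName "NestedSwitch"
Set-VMProcessor -VMName "Nested01" -Count 2
```

The NAT step is what Sanders’ “sophistication” warning points at: you take back responsibility for networking, sizing and placement that the cloud platform normally handles for you.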
Containers with the Isolation of VMs but not the Overhead
But the biggest interest from Azure customers is in the way nested virtualization can combine the lower costs and easier servicing model of containers with the isolation and security benefits of VMs, using Hyper-V containers. “The big picture with nested virtualization is going to be containers,” Sanders predicted.
“Especially for those customers who are running some customer-facing apps, some multi-tenant solutions where their end customers have full access to containers, enabling that layer of virtualization protection and isolation is going to be important,” he said.
Not everyone needs that level of protection, but it’s what makes containers viable for those who do, Brown explained. “We’re seeing enough penetration with Hyper-V isolation that it’s clear there’s a business and regulatory need for a well-established isolation boundary.”
“Hyper-V isolation allows us to have a conversation where we say ‘the security boundaries are no different, they are exactly the same as with physical’, but containers are a different way of thinking about a problem space today. The property that VMs offer is they’re heavily stateful, which is valuable for some apps but really unnecessary for others. So, we can have a conversation about ‘why do you need a stateful tier or not a stateful tier? why does that particular tier need that state?’ We can have a conversation about things for developers like checkpoints; VMs offer checkpointing technology which captures all your memory state. Is that what you need, or do you just need the on-disk state? If all you need is the on-disk state, then would containers be a better fit for you?”
This development mirrors both the shift to containers and their continued evolution, Brown suggested. “VMs were doing fine. They solved a lot of our needs in the cloud, but containers offered a better experience for portability across clouds and the DevOps model. They were a step above that level of abstraction but they still provided much of the same value and that’s part of why they’ve been so attractive.”
As developers started finding gaps in what they could do with containers, new tools were needed. “As we find areas where the answer is ‘we need this other tool for the job’, like statefulness — which is to separate the state — [you get] things like volume drivers and the ability to persist state outside of the container in a very well understood way. So we get the better density offered by containers and the DevOps model but we can still use it for some stateful applications.” Running containers in nested virtualization fills another gap.
Real World Solutions for Specific Problems
Nested virtualization isn’t how Microsoft is providing its new VMware support though (nor the SAP HANA on Azure support it recently announced). “There’s a fair set of capabilities that we have announced on top of bare metal, like SAP large instances and Cray,” Sanders explained. “In all those cases the support on bare metal is tied to being only able to do it on bare metal for a variety of reasons, whether it be network requirements or scale requirements or actual hardware requirements as with Cray. In the VMware case a lot of it was around networking requirements that were necessary to deliver it.”
Nested virtualization doesn’t support broadcast or multicast, for example, and Brown told us the Hyper-V team wants to do more work “improving the memory access and performance.”
Hyper-V containers using nested virtualization on Azure and Windows Server initially supported just Windows containers, but Microsoft has been working on running Linux containers as well (abbreviated as LCOW, for Linux Containers on Windows). Windows Server 1709 can run Linux containers with Hyper-V isolation, although the Docker LinuxKit support is still in preview; step-by-step instructions for Ubuntu are here. Create the container in a VM size that exposes virtualization extensions, like a Dv3 or Ev3 instance, and it runs with nested virtualization.
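Once the preview Docker and LinuxKit bits are installed on such a VM, the commands look roughly like this (image names are examples, not a recommendation; the Linux run assumes the LCOW preview’s `--platform` flag):

```shell
REM Windows container with Hyper-V isolation: each container gets its own
REM lightweight utility VM, which is where nested virtualization comes in.
docker run --rm --isolation=hyperv microsoft/nanoserver:1709 cmd /c echo hello

REM Linux container on the same Windows Server 1709 host via LCOW (preview),
REM booted inside a minimal LinuxKit utility VM.
docker run --rm --platform linux ubuntu uname -a
```

The `--isolation=hyperv` switch is what trades a shared kernel for a per-container VM boundary, at the cost of the extra memory and startup time Brown alluded to.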
The next stage, which is still being merged into Docker, is support for running Linux and Windows containers side-by-side on the same Windows node at the same time (not just mixing Windows and Linux nodes in the same cluster, which you can already do). That will give better latency between the containers, but it also allows organizations to run mixed loads on a single infrastructure and makes it easier for developers to build mixed Linux and Windows applications on a single system. And it gives businesses in regulated industries that want VM-level isolation for Linux containers an option beyond traditional virtualization.
So far, AWS and GCP only support KVM, and only with Linux. Although Sanders noted that Hyper-V is “the hypervisor we fully understand and fully support,” Azure’s nested virtualization works for other hypervisors as well; Azure’s own networking team uses nested virtualization with KVM to run containerized router and switch images alongside those available as VMs in its CrystalNet network emulation tool — because some network hardware manufacturers only offer images as VMs and some only offer them as containers.
That kind of mix and match, heterogeneous system might well become more common as organizations go beyond migrating applications to the cloud and start extending and enhancing those applications. It’s rare to be able to start from scratch; often the resources an application needs won’t all be available in the same clean, neat, modern form.
“Almost every application in the world, if it was completely rewritten today, would use much different technologies but for almost every customer and application that’s just not a rational or reasonable possibility,” Sanders said.