Hyper, a Hypervisor-Agnostic Docker Engine

Hyper is a toolset, comprising a Linux kernel, an init process, and management tools, that virtualizes containers to improve their isolation and management for multi-tenant applications.
How Hyper Works: Combining VMs and Containers
Hyper does one thing: provide isolated environments (virtual machines) on which portable environments (containers) can be easily scheduled. Hyper uses both shared-kernel and dedicated-kernel environments, which it considers the right approach for deploying multi-tenant platforms.
Hyper has four components:
Guest Kernel (HyperKernel)
- The main component of Hyper. The HyperKernel is a custom Linux kernel.
- It can be run by a hypervisor (currently KVM and Xen, and the list is growing).
- HyperKernel runs HyperStart and HyperD.
Daemon (HyperD) with REST APIs
- The HyperD daemon runs directly on bare-metal servers and handles communication between remote clients and the virtual machines (VMs).
- HyperD communicates with both HyperStart and the Hyper CLI.
Guest Init Service (HyperStart)
- HyperStart is a tiny init service loaded in an init RAM file system (initramfs) and started by the HyperKernel.
- It launches Docker images.
CLI (Hyper)
- The Hyper CLI schedules containers by communicating with the HyperD daemon through its REST API.
- Hyper schedules containers directly on a virtualized Linux kernel, removing the need for a guest operating system, its configuration, and all the heavy components it brings.
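As a rough sketch of how a client might drive HyperD over its REST API: the port, the endpoint path, and the pod-spec fields below are illustrative assumptions made for this example, not HyperD's documented interface.

```python
# Hypothetical sketch of a client talking to a local HyperD daemon.
# The address, the /pod/create endpoint, and the spec fields are assumptions
# for illustration; consult HyperD's own API documentation for the real shape.
import json
from urllib import request

HYPERD = "http://127.0.0.1:1234"  # assumed local HyperD address


def make_pod_spec(pod_id: str, image: str) -> dict:
    """Build a minimal pod spec: one container from a Docker image."""
    return {"id": pod_id, "containers": [{"image": image}]}


def create_pod(spec: dict):
    """POST the pod spec to the daemon (assumed endpoint)."""
    req = request.Request(
        HYPERD + "/pod/create",
        data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)


# Example: create_pod(make_pod_spec("demo-pod", "nginx:latest"))
```

The point is the shape of the flow, not the exact endpoints: the CLI builds a pod description, sends it to HyperD, and HyperD relays it to HyperStart inside the VM.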
Hyper’s Approach
Loss of performance is usually associated with virtual machines. Here are some reasons why:
- Emulated hardware is, by definition, slower than bare metal hardware, as the hypervisor needs to “translate” the instructions from the emulated hardware to the real hardware (CPU, RAM, Hard Drive, etc.).
- A guest OS needs to be initialized when a virtual machine is booted — just think about the time it takes to start a machine.
- Background processes loaded by the guest OS consume resources.
Hyper brings a different approach:
- Hypervisors are much more powerful than they used to be, kernels are “virtualization-optimized,” and hardware virtualization is hardware-assisted (e.g., Intel VT-x gives the guest OS direct access to the CPU). The performance loss caused by hardware emulation is therefore limited, and in some cases negligible. The problem resides elsewhere: the software.
- By using the Linux kernel to schedule containers, Hyper avoids having to initialize a guest OS. The Linux kernel is quite light to load, and can boot in less than half a second.
- HyperD schedules containers directly, so only the strictly necessary processes are running, avoiding unnecessary resource consumption.
Hyper’s Lighter Way
Deploying a container-as-a-service (CaaS) platform isn’t straightforward. While container isolation hasn’t suffered from critical issues, running hundreds (or even thousands or millions) of containers on the same kernel does sound scary. In the case of multi-tenant applications, a second level of isolation is required.
A typical approach is to build a hybrid solution, with both virtual machines and containers. The workflow is as follows:
First, as a user, you have to build a cluster of virtual machines to run your containers. Then, using a scheduler (Mesos plus Marathon, Swarm, etc.), you can schedule containers within your cluster.
Hyper proposes a lighter way for the building process and workflow:
Easier Management
By removing the guest OS and keeping HyperD close to the Linux kernel, containers can be scheduled directly on the virtualized kernel. A group of containers scheduled within a single virtual machine follows the principle of Pods. Removing the guest OS also eliminates the need to build a cluster of virtual machines, and hence the need to configure a “guest OS.”
Performance
Hyper’s performance is much closer to native containers than virtual machines.
Initializing a pod can take less than half a second, and running processes are very reactive. On a standard server (Intel Xeon quad-core, 32GB RAM, 400GB SSD, Ubuntu 14.04 x64), performance has been measured as follows:
Pod Startup Time
Running a new pod takes only 336 milliseconds (ms) on average.
| – | min(ms) | max(ms) | avg(ms) |
|---|---------|---------|---------|
| startup time | 314 | 366 | 336 |
Memory Usage in Pod
When starting a pod with the minimum startup memory, about 9 MB of memory remains free in the running pod, because the HyperKernel itself only takes 11 MB.
| – | min(MB) | max(MB) | avg(MB) |
|---|---------|---------|---------|
| Total | 21 | 21 | 21 |
| Used | 11 | 11 | 11 |
| Free | 9 | 10 | 9 |
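The arithmetic behind those numbers can be checked directly from the table's values:

```python
# Sanity check of the memory accounting above, using the table's figures.
total_mb = 21    # minimum startup memory of a pod
kernel_mb = 11   # memory used by the HyperKernel
free_mb = total_mb - kernel_mb
print(free_mb)   # 10; the table reports 9-10 MB free, the gap being MB rounding
```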
CPU Performance
Allocation of resources: 2 CPUs, 2048 MB of memory.
The following table shows the results of a sysbench CPU performance test. CPU performance in Hyper is very close to that of the host OS.
| target | num-threads | cpu-max-prime | total time(sec) | resp min(ms) | resp avg(ms) | resp max(ms) |
|--------|-------------|---------------|-----------------|--------------|--------------|--------------|
| host | 1 | 10000 | 9.88 | 0.95 | 0.99 | 1.01 |
| docker | 1 | 10000 | 9.89 | 0.95 | 0.99 | 1.12 |
| hyper | 1 | 10000 | 9.92 | 0.95 | 0.99 | 1.28 |
| host | 2 | 50000 | 45.81 | 8.51 | 9.16 | 9.39 |
| docker | 2 | 50000 | 45.83 | 8.50 | 9.16 | 13.17 |
| hyper | 2 | 50000 | 45.97 | 8.95 | 9.19 | 10.22 |
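To put the total-time column in perspective, Hyper's overhead relative to the host can be computed from the figures above:

```python
# Relative overhead of Hyper vs. bare metal, from the sysbench total-time
# column in the table above.
host_1t, hyper_1t = 9.88, 9.92    # seconds, 1 thread, cpu-max-prime=10000
host_2t, hyper_2t = 45.81, 45.97  # seconds, 2 threads, cpu-max-prime=50000

overhead_1t = (hyper_1t - host_1t) / host_1t * 100
overhead_2t = (hyper_2t - host_2t) / host_2t * 100
print(f"{overhead_1t:.1f}% / {overhead_2t:.1f}%")  # roughly 0.4% / 0.3%
```

A fraction of a percent on total runtime supports the article's claim that the remaining cost of hardware-assisted virtualization is largely negligible for CPU-bound work.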
For more details, you can visit Hyper’s performance page.
Best Security
By proposing a second level of isolation, platforms built using Hyper ensure a higher level of security than bare metal container solutions.
Virtualizing Containers … But, Why?
Containers are not virtual machines. Yes, containers are isolated environments within a host operating system, sharing the same kernel and resources. But the kernel itself performs the isolation of containers. Virtual machines are also isolated environments, but they run their own operating system on virtualized hardware.
The main difference is that containers rely on the host’s kernel, while VMs rely on hypervisors, which run their own kernels.
Shared Kernel vs. Dedicated Kernels
In a Linux operating system, the kernel is the part of the system that manages hardware drivers and system resources, mediating communication between the hardware and the software.
Sharing the same kernel between multiple isolated environments is really a context switch into the guest environment, rather than proper virtualization. The Linux kernel, through features like namespaces and cgroups, virtually isolates a set of processes and libraries, and gives them direct access to the host’s hardware. The created environments, being managed directly by the kernel, have the advantage of unaltered performance.
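This namespace membership is directly observable on any Linux system: the kernel exposes each process's namespaces as symlinks under /proc/&lt;pid&gt;/ns, and two processes in the same container share the same identifiers.

```python
# List the namespaces the current process belongs to (Linux only).
# Each entry under /proc/self/ns is a symlink like "uts:[4026531838]";
# the bracketed number identifies the namespace instance.
import os

for ns in sorted(os.listdir("/proc/self/ns")):
    print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))
```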
On the other hand, dedicated-kernel environments need hypervisors to run. The hypervisor is responsible for emulating the hardware in order to run a guest operating system, which is composed of a dedicated kernel and a set of tools and libraries. Environments using dedicated kernels usually suffer from a performance drop due to hardware virtualization, but they present a big advantage: much better isolation.
The Container Format
We discussed that containers are “virtual environments” sharing the same kernel. But containers are more than that: today, containers are distributed as images, and these images are the essence of Hyper. The idea behind “virtualized containers” is to provide highly isolated environments in which to run highly portable containers. Images are a new way to distribute highly portable applications, and are already considered by some as the new package management system.
Docker is a sponsor of The New Stack.
Feature image: “Trying to Reach you … as Fast as I can” by Mohammed Al-SULTAN is licensed under CC BY 2.0.