How Firecracker Is Going to Set Modern Infrastructure on Fire

7 Dec 2018

One of the most exciting announcements from last week’s AWS re:Invent was Firecracker — an open source project that delivers the speed of containers with the security of VMs. It’s the same technology that Amazon uses for AWS Lambda and AWS Fargate, and it has the potential to disrupt the current container and serverless technologies.

As someone with a keen interest in the evolution of modern infrastructure, I am intrigued by Firecracker. As soon as I got back from re:Invent, the first thing I did was install and run the software. It was extremely satisfying to see 100+ microVMs running on my own MacBook Pro. I will walk you through the steps involved in setting up the same environment yourself, without the need to provision an i3.metal instance in EC2.

Containers vs. Firecracker

Simply put, Firecracker is a Virtual Machine Manager (VMM) exclusively designed for running transient and short-lived processes. In other words, it is optimized for running functions and serverless workloads that require faster cold start and higher density.

Why can’t we use containers? Containers can certainly be used to deliver Functions as a Service, but the real trade-off is the isolation that VMs provide. LXC and Docker are faster and lighter than full-blown virtual machines, but containers are considered less secure than VMs because of their more relaxed isolation. The size of Docker images may also negatively impact the startup time of functions.

To address the security aspect, platform companies such as Microsoft and VMware advocated an architecture of one VM per container. Microsoft’s Hyper-V Containers and VMware’s vSphere Integrated Containers are examples of this design. Intel recently merged its Clear Containers project with Hyper.sh’s runV under the OpenStack Foundation to form the Kata Containers initiative, which follows the same one-VM-per-container approach. All these are attempts to get the best of both worlds: containers and VMs.

On the public cloud, we have examples of this architecture in the form of Azure Container Instances, AWS Fargate, and Google Cloud serverless containers.

But none of these attempts has come close to the startup and execution speed of AWS Lambda. On the other hand, there are multiple serverless projects, such as Apache OpenWhisk, Kubeless, Project Fn, and Fission, that are built on container infrastructure. They suffer from the same challenges as the single-VM containers.

I personally think that containers and serverless technologies are orthogonal to each other. They are designed to solve a very different set of problems. Attempting to deliver serverless infrastructure based on containers may not be a viable option in the long term.

All the projects that are implementing serverless based on containers should embrace Firecracker wholeheartedly. It complements containers so well, and the best thing is that it can be managed by Kubernetes. We will explore this idea in the later parts of this series.

Behind the Scenes of Firecracker

Firecracker takes a radically different approach to isolation. It takes advantage of the acceleration offered by KVM, the Kernel-based Virtual Machine that has long been part of the mainline Linux kernel (Firecracker requires a host kernel version 4.14 or above). KVM works in tandem with the hardware virtualization capabilities exposed by Intel and AMD processors, effectively turning the Linux kernel into a type-1 hypervisor.
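Before trying Firecracker, it is worth confirming that the host actually exposes KVM. A minimal sketch of that check, assuming the conventional `/dev/kvm` device node:

```python
import os

def kvm_available(dev: str = "/dev/kvm") -> bool:
    """Return True if the KVM device node exists and is readable/writable,
    which indicates the kernel exposes hardware-assisted virtualization
    to userspace VMMs such as Firecracker."""
    return os.path.exists(dev) and os.access(dev, os.R_OK | os.W_OK)

print(kvm_available())  # True on a KVM-capable Linux host, False elsewhere
```

If this returns `False` on a Linux machine, the `kvm_intel` (or `kvm_amd`) module may not be loaded, or your user may lack permission on the device node.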

The reason why Firecracker deserves the attention is the middle path it took to bring the speed of containers combined with the security of VMs.

In a typical Linux-based virtualization scenario, KVM is complemented by QEMU, a userland program that emulates virtual resources such as disk, network, VGA, PCI, USB, and serial/parallel ports for the guest OS running within the VM. On its own, QEMU acts as a type-2 hypervisor capable of delivering full virtualization through binary translation. But it then has to translate every guest instruction that needs to run in privileged mode, which dramatically slows down the user experience and the overall performance of VMs.

Instead of owning the translation and emulation of privileged instructions, QEMU relies on KVM to accelerate those calls all the way down to the physical CPU, which supports hardware-assisted virtualization in the form of Intel VT-x or AMD-V. This architecture is what is commonly found in today’s hypervisors and virtualization technology.

As you can clearly see, there are three players in delivering faster virtualization to a guest OS — QEMU, KVM, and hardware extensions.

Here comes the most interesting part about Firecracker: it simply replaces QEMU with a minimalist virtual machine manager that provides only the most critical virtual resources needed by the guest. The remaining two layers, KVM and hardware-assisted virtualization, stay the same and continue to provide the acceleration. Firecracker runs in userspace while talking to KVM embedded in the kernel.

If what you just read sounds fascinating, you should explore the themes of Intel Ring architecture, the evolution of Xen hypervisor, the differences between type-1 and type-2 hypervisors, paravirtualization vs hardware-assisted virtualization, the motivation behind building KVM along with the factors that led to enabling hardware-assisted virtualization by Intel and AMD.

Unfortunately, that’s a lot of ground to cover in just one article. But when you understand the evolution thoroughly, it will make you appreciate the efforts put by the Firecracker team.

According to the official FAQ, Firecracker is a cloud-native alternative to QEMU that is purpose-built for running containers safely and efficiently, and nothing more. It provides a minimal required device model to the guest operating system while excluding non-essential functionality; there are only four emulated devices: virtio-net, virtio-block, a serial console, and a one-button keyboard controller used only to stop the microVM. This, along with a streamlined kernel-loading process, enables a startup time of less than 125 ms and a reduced memory footprint.

The microVMs launched by Firecracker are extremely transient and short-lived. You can only access them through a UART/serial console because they don’t even run SSH. Apart from the serial console, these microVMs may be connected to a virtual NIC, a block device, and a one-button keyboard; that is pretty much all you can attach to a VM. This minimalist design of the VMM makes Firecracker extremely fast. According to the official claims, Firecracker initiates user-space or application code in less than 125 ms and supports microVM creation rates of 150 microVMs per second per host.

The Firecracker process exposes a REST API via a UNIX socket, which can be used to manage the lifecycle of a microVM. The architecture is very similar to the way Docker Engine exposes its control-plane API. While there is no CLI yet, cURL can be used to send payloads to the Firecracker REST endpoint.
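To make the lifecycle concrete, here is a sketch of driving that API from Python instead of cURL. The socket path, kernel image, and rootfs file names below are illustrative assumptions (the file names come from the Firecracker getting-started demo), not fixed values:

```python
# Sketch of Firecracker's control plane: each microVM process listens on a
# dedicated UNIX socket and is driven by simple HTTP PUT requests.
import json
import socket

SOCKET_PATH = "/tmp/firecracker.socket"  # e.g. firecracker --api-sock /tmp/firecracker.socket

def build_request(resource: str, body: dict) -> str:
    """Format a raw HTTP/1.1 PUT request for the Firecracker API socket."""
    payload = json.dumps(body)
    return (
        f"PUT {resource} HTTP/1.1\r\n"
        "Host: localhost\r\n"
        "Content-Type: application/json\r\n"
        f"Content-Length: {len(payload)}\r\n"
        "\r\n"
        f"{payload}"
    )

def api_put(resource: str, body: dict, sock_path: str = SOCKET_PATH) -> str:
    """Send the request over the UNIX socket and return the raw response.
    Requires a running Firecracker process listening on sock_path."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(build_request(resource, body).encode())
        return s.recv(4096).decode()

# A minimal microVM lifecycle: point at a kernel, attach a root drive, start.
boot_sequence = [
    ("/boot-source", {"kernel_image_path": "hello-vmlinux.bin",
                      "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"}),
    ("/drives/rootfs", {"drive_id": "rootfs",
                        "path_on_host": "hello-rootfs.ext4",
                        "is_root_device": True,
                        "is_read_only": False}),
    ("/actions", {"action_type": "InstanceStart"}),
]
```

The same sequence with cURL would look like `curl --unix-socket /tmp/firecracker.socket -X PUT http://localhost/actions -d '{"action_type": "InstanceStart"}'` for the final step.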

Each microVM runs as a process within the host OS and is associated with a dedicated socket and API endpoint. The VMs also support an EC2-like metadata service at a well-known endpoint, which can be used for service discovery and for storing arbitrary data as key-value pairs.
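As a sketch of how that metadata store might be seeded, the host can push an arbitrary JSON document to the microVM through the same API socket; the key names and values below are made-up examples for illustration:

```python
import json

# Arbitrary key-value metadata for the guest, including nested structures.
metadata = {
    "instance-id": "i-abc123",
    "service": {"name": "thumbnailer",
                "queue-url": "https://example.com/jobs"},
}

# On the host, this JSON document is PUT to the metadata resource on the
# microVM's API socket; inside the guest, it is then readable over HTTP at
# a link-local address, much like EC2's instance metadata service.
mmds_payload = json.dumps(metadata)
print(mmds_payload)
```

This gives functions running inside the microVM a place to discover configuration without baking it into the root filesystem.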

AWS has also included a Jailer that secures microVMs by adding security boundaries through cgroups, namespaces, and seccomp isolation.

Written in Rust, Firecracker currently runs only on Intel processors, with support for AMD and ARM in the pipeline. When it gets ported to ARM, I can see how this technology could change the face of IoT deployments. Hobbyist devices like the Raspberry Pi and industrial-grade devices running ARM Cortex processors will be able to run microVMs containing code to acquire data from sensors or to control actuators. That would fundamentally change the way the Internet of Things and edge computing are handled today.

In the next installment, I will walk you through the steps to set up and configure Firecracker, along with an overview of the roadmap.

Feature image via Pixabay.

Updated on 8th December with input from subject matter experts.
