Linux containers have been around since the early 2000s and were architected into the Linux kernel in 2007. Because of their small footprint and portability, the same hardware can support far more containers than VMs, dramatically reducing infrastructure costs and enabling applications to be deployed faster. But due to usability issues, containers didn't attract widespread interest until Docker arrived in 2013.
Unlike hypervisor virtualization (e.g., Xen, Hyper-V), where virtual machines run on physical hardware via an intermediation layer (the hypervisor), containers run userspace processes directly on top of the host operating system's kernel. That makes them very lightweight and fast.
Containers have also sparked an interest in microservice architecture, a design pattern for developing applications in which complex applications are broken down into smaller, composable services which work together.
Now, with the increasing adoption of containers and microservices in the enterprise, there are also risks that come along with containers. For example, if any one container breaks out, it can allow unauthorized access across containers, hosts or data centers, thus affecting all the containers hosted on the host OS.
To mitigate these risks, we are going to take a look at various approaches, and specifically Google's gVisor, a kind of sandbox that helps provide secure isolation for containers. It also integrates with the Docker and Kubernetes container platforms, making it simple and easy to run sandboxed containers in production environments.
With this context, now let’s check out various approaches to implement sandboxed containers.
Roundup of Container Isolation Mechanisms
Machine-level virtualization exposes virtualized hardware to a guest kernel via a Virtual Machine Monitor (VMM). Running containers in distinct virtual machines can provide great isolation, compatibility and performance but it often requires additional proxies and agents, and may require a larger resource footprint and slower start-up times.
KVM is one of the best-known examples of machine-level virtualization. Amazon recently launched Firecracker, a new virtualization technology built on top of KVM. AWS Lambda and AWS Fargate use Firecracker extensively to provision and run secure sandboxes that execute customer functions.
Another notable KVM-based project is Kata Containers, which leverages lightweight virtual machines that integrate seamlessly with container ecosystems such as Docker and Kubernetes.
Rule-based execution, for example via seccomp filters, allows the specification of a fine-grained security policy for an application or container. In practice, however, it can be extremely difficult to reliably define a policy for an application, making this approach hard to apply in all scenarios.
To use this in Docker, Docker must be built with seccomp support and the kernel must be configured with CONFIG_SECCOMP enabled. To check whether your kernel supports seccomp and has it configured, run:
grep CONFIG_SECCOMP= /boot/config-$(uname -r)
Docker runs with a default seccomp profile; to override it, use the --security-opt option with the docker run command. For example, the following explicitly specifies a policy:
$ docker run --rm \
    -it \
    --security-opt seccomp=/usr/local/profile.json \
    hello-world
The default seccomp profile disables around 44 system calls out of 300+ for running containers. It is moderately protective while providing wide application compatibility. The default Docker profile can be found in the Moby project repository.
The profile.json file whitelists specific system calls and denies access to all others.
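As a sketch, a profile of that shape might look like the following (the syscall list here is purely illustrative, not a recommended policy):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

The defaultAction denies any system call not explicitly listed, while the entries under syscalls are allowed through; a real profile would need a far longer whitelist for typical applications to run.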
In the next section, we will look at gVisor (Google’s) approach to container isolation mechanisms.
gVisor is a lightweight user-space kernel, written in Go, that implements a substantial portion of the Linux system surface, thereby providing isolation between the host and the application. It also includes an Open Container Initiative (OCI) runtime called runsc, which maintains the isolation boundary between the application and the host kernel.
It intercepts all application system calls and acts as the guest kernel, without the need for translation through virtualized hardware. Also, gVisor does not simply redirect application system calls through to the host kernel. Instead, gVisor implements most kernel primitives (like signals, file systems, futexes, pipes, mm, etc.) and has complete system call handlers built on top of these primitives.
Unlike the above mechanisms, gVisor provides a strong isolation boundary by intercepting application system calls and acting as the guest kernel, all while running in user-space. And unlike a VM, which requires a fixed set of resources at creation, gVisor can accommodate changing resources over time, as normal Linux processes do.
Although gVisor implements a large portion of the Linux surface and is broadly compatible, there are unimplemented features and bugs. If you run into issues, please file a bug on the gVisor GitHub issue tracker.
How to Implement Sandboxed Containers Using gVisor (for Docker Applications)
First, download the runsc binary, verify its checksum and install it:

wget https://storage.googleapis.com/gvisor/releases/nightly/latest/runsc
wget https://storage.googleapis.com/gvisor/releases/nightly/latest/runsc.sha512
sha512sum -c runsc.sha512
chmod a+x runsc
sudo mv runsc /usr/local/bin
The next step is to configure Docker to use runsc by adding a runtime entry to the Docker daemon configuration (/etc/docker/daemon.json).
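A minimal daemon.json carrying that runtime entry would look something like this, assuming runsc was installed to /usr/local/bin as above:

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

If your daemon.json already has other settings, add the runtimes key alongside them rather than replacing the file.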
Restart the Docker daemon after making the changes (for example, with sudo systemctl restart docker).
Now that the gVisor configuration is complete, we can test it by running the hello-world container:

docker run --runtime=runsc hello-world
Let us try to run an httpd server on gVisor; here, test-apache-app will use the httpd image with the gVisor runtime.
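A sketch of that command, using the test-apache-app name from above (the host port mapping of 8080 is an assumption for illustration):

```shell
# Run the httpd image under the runsc (gVisor) runtime in the background;
# the container name comes from the article, the port mapping is an assumption
docker run -d --runtime=runsc --name test-apache-app -p 8080:80 httpd
```

Once the container is up, a request to http://localhost:8080 should return the default Apache page, with every httpd system call being handled by gVisor rather than the host kernel directly.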
The runsc runtime can also run sandboxed pods in a Kubernetes cluster through the use of either the cri-o or cri-containerd projects, which convert messages from the Kubelet into OCI runtime commands.
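Once the node's CRI runtime is configured with a runsc handler, a Kubernetes RuntimeClass can be used to opt individual pods into the sandbox. A minimal sketch (the gvisor name is illustrative; the handler must match the name configured in your CRI runtime):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor        # illustrative name
handler: runsc        # must match the handler configured in cri-o/containerd
---
apiVersion: v1
kind: Pod
metadata:
  name: httpd-sandboxed
spec:
  runtimeClassName: gvisor
  containers:
  - name: httpd
    image: httpd
```

Pods without a runtimeClassName continue to use the default runtime, so sandboxed and regular workloads can coexist in the same cluster.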
Congrats! We have learned how to implement sandboxed containers using gVisor.
Feature image via Pixabay.