Container Security and the Importance of Secure Runtimes
Containers have revolutionized how we develop and deploy applications, providing lightweight and portable environments that encapsulate an application and its dependencies. But how do we keep them secure?
One critical aspect to address is the container runtime — the software responsible for launching and managing containers.
While container runtimes like Docker and containerd are widely used, their tight coupling with the host operating system can pose risks. In this article, we will delve into how container runtimes work, why tightly coupled runtimes can lead to host takeover if an attacker escapes a container, and the significance of using secure container runtimes like gVisor and Kata Containers.
Understanding Container Runtimes
Container runtimes orchestrate containers, manage their life cycle, and isolate them from the host and other containers. By leveraging Linux kernel features like namespaces and cgroups, runtimes create an isolation boundary around containers.
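You can see these kernel-level boundaries directly on a Linux host: every process's namespace memberships are exposed under `/proc`, and a container is simply a process placed into its own set of them.

```shell
# Each entry here is a namespace the current process belongs to (Linux only).
# A containerized process gets its own pid, mnt, net, etc. namespaces,
# which is what separates it from the host and from other containers.
ls -l /proc/self/ns/
```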
However, traditional runtimes are closely tied to the host’s kernel, which presents a potential security vulnerability. If an attacker manages to escape a container, they can gain unauthorized access to the underlying host operating system, compromising the entire system’s security.
Tightly coupled container runtimes inherit the security posture and attack surface of the host operating system. Any vulnerability or exploit in the runtime or host kernel becomes a potential entry point for attackers.
This risk is especially critical in multitenant environments or when running untrusted workloads. To mitigate this threat, the use of secure container runtimes, such as gVisor and Kata Containers, is crucial.
Such secure container runtimes provide an additional layer of isolation and security. They employ innovative techniques to enhance the security of containerized workloads.
For instance, gVisor uses a user-space kernel implementation, while Kata Containers leverage lightweight virtual machines. These secure runtimes isolate containers from the host operating system, preventing attackers from gaining unauthorized access to the underlying infrastructure, and mitigating the risk of host takeover.
An Introduction to Popular Container Runtimes
Container runtimes provide the necessary tools and libraries to create, deploy and execute containers.
These container runtimes handle tasks such as creating and managing container images, starting and stopping containers, resource isolation, networking, and security. They form the foundation for containerization technologies and are crucial for running applications consistently across different computing environments.
Here are a few of the most popular runtimes.
Docker is a widely used container runtime that provides a complete ecosystem for building, packaging and running containers. It includes the Docker Engine, which manages the lifecycle of containers, and the Docker CLI, which offers a command-line interface for interacting with containers.
Underneath, Docker uses runC as the default low-level container runtime. runC is responsible for spawning and managing the containers based on the Open Container Initiative (OCI) runtime specifications.
Containerd is an open source container runtime originally developed at Docker and later donated to the Cloud Native Computing Foundation (CNCF). It focuses on providing a robust runtime with an emphasis on stability, performance and portability. containerd is designed to be used as a core component in container orchestration systems and can be integrated with higher-level orchestration platforms like Kubernetes.
Similar to Docker, containerd uses runC as the default low-level container runtime to create and manage containers.
Originally developed at Docker and donated to the Open Container Initiative (OCI) as its reference implementation, runC is a lightweight, low-level runtime that adheres to the OCI runtime specification. It provides a basic container execution environment by launching containers in isolated sandboxes. Both Docker and containerd leverage runC’s functionality to handle container life cycle management, process isolation, filesystem mounts and other low-level container operations.
CRI-O is a lightweight container runtime specifically designed for Kubernetes. It implements the Kubernetes Container Runtime Interface (CRI) and provides an interface for Kubernetes to interact with containers. CRI-O uses OCI-compliant low-level runtimes such as runC under the hood.
Secure Container Runtimes: gVisor and Kata Containers
GVisor is an open source container runtime developed by Google. It uses a lightweight, user-space application kernel called Sentry to provide a secure execution environment for containers.
Instead of directly running containers on the host kernel, gVisor runs them in isolated sandboxes, adding an additional layer of security and isolation. The sandbox intercepts system calls from the container and applies its own kernel-like implementation, providing a defense mechanism against kernel-level vulnerabilities.
Kata Containers is an open source project that combines lightweight virtual machines (VMs) with container runtimes. It uses hardware virtualization technology to launch a separate VM for each container, providing strong isolation between containers.
Each VM runs a minimal, lightweight guest operating system, such as a stripped-down Linux kernel. Kata Containers aims to offer the performance benefits of containers along with the security and workload isolation of VMs.
Both gVisor and Kata Containers address certain security concerns associated with traditional container runtimes. They help mitigate the risk of container escape attacks, where an attacker gains unauthorized access to the host system by exploiting vulnerabilities in the container runtime or kernel. By adding an extra layer of isolation and security controls, these runtimes provide enhanced protection for containerized workloads.
GVisor and Kata Containers are not mutually exclusive; in fact, it’s possible to use them together, with Kata leveraging gVisor as its runtime. This combination further strengthens security and isolation by combining VM-level isolation with the additional security measures offered by gVisor.
These secure container runtimes are particularly useful in scenarios where running untrusted or potentially vulnerable workloads is a concern, such as in multi-tenant environments or when dealing with untrusted third-party code.
Running Containers in a Secure Runtime
Using secure runtimes like gVisor and Kata Containers can significantly enhance the protection of your host systems. You can benefit from the following security features:
- Enhanced Isolation. gVisor and Kata Containers provide an additional layer of isolation between containers and the host system. This isolation helps prevent container escape attacks and limits the impact of security breaches within a container.
- Kernel-level protection. gVisor and Kata Containers both protect against kernel-level vulnerabilities. gVisor implements its own kernel-like interface, intercepting system calls from containers and enforcing security policies. Kata Containers leverage hardware virtualization to run containers in separate VMs with their own kernel instances, isolating them from the host kernel.
- Defense-in-depth. By combining the security mechanisms of these runtimes with other security best practices, such as strong access controls, network segmentation and image scanning, you can create a more robust security posture for your container deployments.
- Compatibility and interoperability. Both gVisor and Kata Containers work with container orchestration platforms like Kubernetes, allowing you to leverage their security benefits without significant changes to your existing containerized applications or deployment processes.
Note that while gVisor and Kata Containers provide improved security, they may introduce some performance overhead due to the additional layers of isolation. Therefore, evaluate your specific use cases and performance requirements to determine whether the security benefits outweigh any potential performance impact.
Running Microservices in Secure Container Runtimes
Microservices architecture often involves multiple independent services running on the same infrastructure. By running each microservice in a secure container runtime, you can ensure that they are isolated from each other.
This helps prevent container escapes, privilege escalations and kernel-level vulnerabilities. It can also help limit the blast radius if a security breach or failure does occur.
Container runtimes also allow you to allocate specific resources (such as CPU, memory and storage) to each microservice, ensuring fair resource distribution. This prevents resource contention issues, which could otherwise be exploited by malicious actors to degrade the performance or stability of other microservices.
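In Kubernetes, this per-service resource allocation is expressed through requests and limits in the pod spec. The sketch below is illustrative only; the service name, image and runtime class are assumptions, and the values should be tuned to your workload.

```yaml
# Illustrative only: per-container resource requests and limits
# combined with a sandboxed runtime (assumes a "gvisor" RuntimeClass exists).
apiVersion: v1
kind: Pod
metadata:
  name: orders-service            # hypothetical microservice name
spec:
  runtimeClassName: gvisor
  containers:
  - name: orders
    image: example.com/orders:1.0 # placeholder image
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
```

Setting limits on every container prevents one compromised or misbehaving microservice from starving its neighbors.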
To run microservices in secure container runtimes, take the following steps.
1. Choose a Secure Container Runtime.
Evaluate different secure container runtimes, such as gVisor and Kata Containers, and select the one that best fits your requirements. Consider factors such as security features, performance impact, compatibility with your existing infrastructure, and community support.
2. Build Container Images Securely.
Use trusted base images, regularly update dependencies, and scan images for vulnerabilities. Implement secure image registries and enforce image signing to verify image authenticity.
3. Apply Secure Configurations.
Configure your container runtime with appropriate security settings. This may include enabling isolation features, applying resource limits, setting container network policies and controlling access to host system resources. Follow the security guidelines in your container runtime’s documentation.
4. Implement Strong Access Controls.
Implement strong access controls for your containerized microservices. This includes restricting container privileges, employing role-based access controls (RBAC) for container orchestration platforms, and securing container runtime APIs.
5. Continuously Monitor and Log.
Implement monitoring and logging solutions to track the behavior of your containerized microservices. Monitor for suspicious activities, anomalous behavior, and potential security incidents. Centralized logging and analysis can help detect and respond to security events effectively.
6. Regularly Update and Patch.
Keep your container runtimes up to date by applying security patches and updates. This ensures that you have the latest security improvements and bug fixes.
7. Run Security Tests.
Conduct regular security assessments and penetration tests on your containerized microservices. This helps identify vulnerabilities and potential weaknesses in your container runtime configurations and application code.
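As a concrete illustration of steps 3 and 4, the pod spec below sketches a restrictive securityContext. The service name and image are hypothetical placeholders; the securityContext fields are standard Kubernetes settings, but the right values depend on what your workload actually needs.

```yaml
# Illustrative sketch of steps 3–4: restrict privileges at the pod level.
apiVersion: v1
kind: Pod
metadata:
  name: payments-service              # hypothetical microservice
spec:
  runtimeClassName: gvisor            # assumes a gvisor RuntimeClass (step 1)
  containers:
  - name: payments
    image: example.com/payments:1.4   # placeholder image (step 2)
    securityContext:
      runAsNonRoot: true              # refuse to run as UID 0
      allowPrivilegeEscalation: false # block setuid/privilege gain
      readOnlyRootFilesystem: true    # immutable root filesystem
      capabilities:
        drop: ["ALL"]                 # drop all Linux capabilities
```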
GVisor’s Architecture
GVisor consists of two main components: Sentry and Gofer.
Sentry — not to be confused with the monitoring platform also called Sentry — is responsible for intercepting and servicing system calls on behalf of the containerized application. It acts as a kernel-like interface but does not forward the calls directly to the host kernel.
Instead, Sentry services these requests within its own isolated environment. It provides a layer of isolation between the running microservice and the host machine. Sentry makes its own limited system calls, which are closely tied to seccomp rules for security enforcement.
Gofer is the component of gVisor responsible for mediating file system operations. When a containerized application requires access to the host file system, Sentry forwards those requests to Gofer.
Gofer then uses the host machine to perform the necessary file system operations on behalf of the application. This introduces an extra layer of isolation by preventing direct access to the host file system from within the container.
GVisor uses a low-level container runtime called runsc (short for “run sandboxed container”) instead of runC. Runsc is specifically designed for gVisor and serves as the interface between the container runtime and the gVisor components (Sentry and Gofer).
It handles container life cycle management, process isolation and other low-level container operations. Runsc interacts with Sentry and Gofer to provide a secure execution environment for containerized applications within gVisor.
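On a Kubernetes node using containerd, runsc is typically registered as a CRI runtime handler. A common configuration fragment, following the gVisor documentation and assuming containerd 1.x’s CRI plugin, looks like this:

```toml
# /etc/containerd/config.toml (fragment)
# Registers the runsc shim so Kubernetes can request it via a RuntimeClass.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
```

After restarting containerd, a RuntimeClass whose handler is `runsc` will route pods to gVisor.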
Kata Containers’ Architecture
Kata Containers’ approach of encapsulating each container or pod in its own dedicated VM provides an extra layer of protection. Each VM has its own kernel that contains only the services necessary for the container workload, reducing the potential attack surface.
In addition to the enhanced security, Kata Containers prioritizes performance and resource efficiency: its guest kernels and images are kept deliberately small. This minimal footprint makes Kata Containers an attractive choice for organizations looking to balance security requirements with efficient resource utilization.
Kata Containers are designed to be compatible with existing containerized applications and deployment infrastructure, enabling organizations to adopt secure runtime features without significant modifications.
By considering Kata Containers as a secure runtime for your cluster, you can benefit from its excellent isolation, minimal resource footprint and enhanced security, making it a compelling choice for deploying sensitive or untrusted workloads.
Configuring gVisor for Container Security
Here are code snippets for creating a RuntimeClass object and a pod manifest file that utilize gVisor as the container runtime.
gvisor.yaml (RuntimeClass object)
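A minimal version of this manifest, assuming the runsc handler is configured in your cluster’s container runtime, is:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # must match the handler name registered with your CRI runtime
```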
In this example, a RuntimeClass named gvisor is created. It specifies the container runtime handler as “runsc,” which is the command used to interact with gVisor.
gvisor-pod.yaml (Pod manifest)
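A minimal pod manifest matching this description might look like the following:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gvisor-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: my-container
    image: your-image   # replace with the actual image name
```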
In the gvisor-pod.yaml file, a Pod named gvisor-pod is defined. The runtimeClassName field specifies that the Pod should use the “gvisor” RuntimeClass, which corresponds to the gVisor container runtime.
The containers section allows you to define your container configuration, including the container name and image you want to use (replace “your-image” with the actual image name).
Once you have the gvisor.yaml and gvisor-pod.yaml files ready, you can create the RuntimeClass and deploy the pod using the following commands.
kubectl apply -f gvisor.yaml
kubectl apply -f gvisor-pod.yaml
These commands will create the RuntimeClass object for gVisor and deploy the pod using gVisor as the container runtime.
Please note that you need to ensure that gVisor is properly installed and configured on your Kubernetes cluster for these configurations to work correctly.
Here are code snippets for creating a RuntimeClass object and a pod manifest file that leverage Kata Containers as the container runtime.
kata.yaml (RuntimeClass object)
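A minimal version of this manifest, assuming a handler named kata-runtime is registered with your CRI implementation, is:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata-runtime   # must match the handler configured in containerd/CRI-O
```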
In the above example, a RuntimeClass named kata is defined. It specifies the container runtime handler as “kata-runtime,” which corresponds to the Kata Containers runtime.
kata-pod.yaml (Pod manifest)
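A minimal pod manifest matching this description might look like the following:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-pod
spec:
  runtimeClassName: kata
  containers:
  - name: my-container
    image: your-image   # replace with the actual image name
```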
In the kata-pod.yaml file, a Pod named kata-pod is defined. The runtimeClassName field specifies that the Pod should use the kata RuntimeClass, which corresponds to the Kata Containers runtime.
The containers section allows you to define the container configuration, including the container name and the container image you want to use (replace your-image with the actual image name).
After preparing the kata.yaml and kata-pod.yaml files, you can create the RuntimeClass and deploy the pod using the following commands:
kubectl apply -f kata.yaml
kubectl apply -f kata-pod.yaml
These commands will create the RuntimeClass object for Kata Containers and deploy the pod, utilizing Kata Containers as the container runtime.
Please ensure that Kata Containers is properly installed and configured on your Kubernetes cluster for these configurations to work as expected.
The Benefits of Kubernetes’ RuntimeClass Feature
The RuntimeClass feature in Kubernetes provides significant flexibility and allows you to choose different container runtimes based on your specific needs and security policies. It gives you the ability to define and select the appropriate runtime for different workloads within your cluster.
Here are a few key benefits and use cases of using RuntimeClass.
Workload isolation. Different workloads may have varying security requirements. With RuntimeClass, you can choose the most suitable runtime that provides the desired level of isolation and security for each workload.
For example, you can use a sandboxed runtime like gVisor or Kata Containers for security-sensitive workloads, while using a standard runtime like containerd with runC for workloads that don’t need the extra isolation.
Custom runtimes. RuntimeClass enables you to integrate and use custom container runtimes within your Kubernetes environment. If you have developed or adopted a specific runtime tailored to your needs, you can define a RuntimeClass for it and leverage it for running specific workloads.
Performance optimization. Different container runtimes offer varying levels of performance. By using RuntimeClass, you can select the most appropriate runtime for each workload. For example, you can keep latency-sensitive workloads on a standard runtime like runC, which offers better resource efficiency and faster startup times, and reserve sandboxed runtimes for the workloads that justify their overhead.
Compliance and security policies. Organizations often have specific security policies or compliance requirements that dictate the runtime to be used for certain workloads. RuntimeClass allows you to enforce these policies by configuring the appropriate runtime for workloads that need to adhere to specific security guidelines.
Dynamic runtime switching. RuntimeClass also makes it straightforward to move a workload between runtimes: changing a pod’s runtimeClassName and redeploying is all it takes. This flexibility allows you to adapt to changing workload requirements or respond to security incidents effectively.
Best Practices for Deploying Secure Runtimes
Understanding when and how to use secure container runtimes is essential for planning a secured Kubernetes environment. Here are some options and considerations for deploying secure runtimes based on your specific needs.
Use secure runtimes for every pod within the cluster. One approach is to use secure container runtimes, such as gVisor or Kata Containers, as the default runtime for all pods within your cluster. This ensures consistent and strong isolation for all workloads running in the cluster, regardless of their trust level.
By defaulting to a secure runtime, you provide an extra layer of protection for your entire environment.
Run untrusted or third-party applications within secure containers. Secure runtimes are particularly valuable when running untrusted or third-party applications. By deploying these applications within secure containers using runtimes like gVisor or Kata Containers, you can mitigate potential risks and isolate them from the underlying host system.
This approach helps protect the host and other workloads from potential vulnerabilities or malicious activities that may arise from untrusted code.
Deploy home-grown applications within the default runC runtime. If you have internally developed applications that are trusted and have undergone rigorous security reviews, you may choose to run them within the default runC runtime. This approach acknowledges that trusted applications may not require the additional isolation provided by secure runtimes.
However, it’s crucial to ensure that proper security practices, such as container hardening and vulnerability scanning, are implemented for these applications.
Consider your specific needs and environment. The decision to deploy secure container runtimes should be based on your specific needs, security requirements and risk assessments. Evaluate factors such as the sensitivity of data, regulatory compliance, threat landscape and the overall security posture of your environment.
Additionally, consider the performance overhead and resource use implications of using secure runtimes, as they may introduce some additional overhead compared to standard runtimes.