Containers are transforming software development. As the new foundation for CI/CD, containers give you a fast, flexible way to deploy apps, APIs, and microservices with the scalability and performance digital success depends on. But containers and container orchestration tools such as Kubernetes are also popular targets for hackers — and if they’re not protected effectively, they can put your whole environment at risk. In this article, we’ll talk about security best practices for every layer of the container stack.
It’s important to understand the security implications of containers. As an application-layer construct relying on a shared kernel, a container can boot up much faster than a full VM. At the same time, containers can be configured far more flexibly than a VM, doing everything from mounting volumes and directories to disabling security features. In a “container breakout” scenario, an attacker bypasses the container’s isolation mechanisms, gains additional privileges on the host, and can even end up running as root there — and then you’re in real trouble.
Here are a few things you can do to keep the bad guys out of your containers.
Layer 0 – The Kernel
Kubernetes is an open source platform built to automate the deployment, scaling, and orchestration of containers, and configuring it properly can help you strengthen security. At the kernel level, you can:
- Review allowed system calls and remove any that are unnecessary or unwanted
- Use a container sandbox such as gVisor or Kata Containers to further restrict system calls
- Verify that your kernel is patched and free of known vulnerabilities
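The two kernel-level controls above can be expressed directly in a pod spec: a seccomp profile to filter system calls, and a RuntimeClass to schedule the pod onto a sandboxed runtime. A minimal sketch, assuming gVisor is installed on your nodes under the handler name `runsc` (the pod name and image are placeholders):

```yaml
# RuntimeClass that maps pods onto the gVisor sandbox.
# The handler name must match the runtime configured on your
# nodes -- treat these names as illustrative.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app               # hypothetical pod name
spec:
  runtimeClassName: gvisor          # run inside the gVisor sandbox
  securityContext:
    seccompProfile:
      type: RuntimeDefault          # filter syscalls with the runtime's default seccomp profile
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
```

`RuntimeDefault` applies the container runtime’s built-in seccomp allowlist; for tighter control you can point `seccompProfile` at a custom `Localhost` profile that permits only the syscalls your application actually makes.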
Layer 1 – The Container
Container security at rest focuses on the Docker image you’ll use to build your running container. First, reduce the container’s attack surface by removing unnecessary components, packages, and network utilities — the more stripped-down, the better. Consider using distroless images containing only your application and its runtime dependencies.
Next, make sure to pull your images only from known-good sources, and scan them for vulnerabilities and misconfigurations. Check their integrity throughout your CI/CD pipeline and build process, and verify and approve them before running to make sure hackers haven’t installed any backdoors.
Once your image is running, it’s time for runtime monitoring and debugging. Ephemeral containers (for example, via kubectl debug) let you debug running containers interactively, including distroless or other lightweight images that lack their own debugging utilities. Watch for anomalies and suspicious system-level events that might be indicators of compromise, such as an unexpected child process being spawned, a shell running inside a container, or a sensitive file being read unexpectedly. The Cloud Native Computing Foundation’s Falco, an open source runtime security tool, and the many community-maintained Falco rules files are hugely useful for this.
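As an illustration of what such detection looks like, here is a simplified Falco rule that flags an interactive shell spawned inside a container. It is modeled on the stock “Terminal shell in container” rule that ships with Falco, but trimmed for readability; the upstream rule’s condition is more thorough:

```yaml
# Simplified sketch of a Falco rule: alert when a shell with an
# attached terminal starts inside a container. Not the exact
# upstream rule -- a condensed illustration of the syntax.
- rule: Terminal shell in container
  desc: A shell was spawned with an attached terminal inside a container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
    and proc.tty != 0
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name shell=%proc.name)
  priority: WARNING
  tags: [container, shell]
```

Rules like this fire on live syscall events, so a hacker who pops a shell in a “shell-less” distroless container still trips the alert.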
Layer 2 – The Workload (Pod)
A pod, the unit of deployment inside Kubernetes, is a collection of containers that can share common security definitions and security-sensitive configurations. Pod Security Context specifies the privilege and access control settings for a given pod, such as:
- Privileged containers inside the pod
- Group and User IDs for processes and volumes
- Granular Linux capabilities to add or drop, such as SYS_TIME
- Sandboxing and Mandatory Access Controls (seccomp, AppArmor, SELinux)
- Filesystem permissions
- Whether a process can gain more privileges than its parent (privilege escalation)
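The settings listed above map onto the pod-level and container-level securityContext fields. A minimal hardened sketch, with the name, UID, and image as illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod                # hypothetical name
spec:
  securityContext:                  # pod-level settings
    runAsNonRoot: true
    runAsUser: 10001                # illustrative non-root UID
    fsGroup: 10001                  # group applied to mounted volumes
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:              # container-level settings
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]             # drop everything, then add back only what's needed
```

Dropping all capabilities and adding back only the handful a workload needs is usually safer than starting from the runtime’s defaults.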
To strengthen basic defense at the pod level, you can enforce a strict pod security standard to keep dangerous workloads out of the cluster — via Pod Security Policy in older clusters, or the built-in Pod Security Admission controller that replaced it in Kubernetes 1.25. For more flexibility and granular control over pod security, consider implementing Open Policy Agent (OPA) via the OPA Gatekeeper project.
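As one hedged example of the Gatekeeper approach: assuming the K8sPSPPrivilegedContainer ConstraintTemplate from the Gatekeeper policy library is already installed in your cluster, a short Constraint is enough to block privileged containers everywhere:

```yaml
# Gatekeeper Constraint rejecting privileged containers cluster-wide.
# Assumes the K8sPSPPrivilegedContainer ConstraintTemplate from the
# Gatekeeper policy library has been installed first.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: deny-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```

Because Gatekeeper policies are written in Rego, you can go far beyond this — enforcing image registries, label conventions, or resource limits with the same mechanism.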
Layer 3 – Networking
By default, all pods can talk to all the other pods in a cluster without restriction, which makes things very interesting from an attacker’s perspective. If a workload is compromised, the attacker will likely try to probe the network and see what else they might be able to access. The Kubernetes API is also reachable from inside the pod, offering another rich target. And if you see traffic originating from a container in a cluster reaching out to a foreign IP it has never contacted before, it’s not a good sign.
Strict network controls are a critical part of container hardening — pod to pod, cluster to cluster, outside-in, and inside-out. Use built-in Network Policies to isolate workload communication and build granular rulesets. Consider implementing a service mesh to control traffic between workloads as well as ingress/egress, such as by defining namespace-to-namespace traffic.
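A common pattern with the built-in NetworkPolicy resource is to deny everything in a namespace by default, then open only the paths each workload actually needs. A sketch, with namespace and labels as illustrative placeholders:

```yaml
# Default-deny: selects every pod in the namespace and allows no traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app                    # hypothetical namespace
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
# Then allow only what's needed, e.g. frontend pods reaching the API
# pods on port 8080 (labels are illustrative).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicy is enforced by your CNI plugin, so these rules only take effect if the cluster’s network plugin supports them.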
Application Layer (L7) Attacks – Server-Side Request Forgery (SSRF)
We’ve been hearing a lot about SSRF attacks lately, and no wonder. In cloud native environments, where APIs talk to other APIs, SSRF can be especially hard to stop; customer-supplied webhooks are especially notorious. Once a target has been found, SSRF can be used to escalate privileges, scan the local Kubernetes network and components, hit the cloud metadata endpoint, and dump data from the Kubernetes metrics endpoint to learn valuable information about the environment — potentially making it possible to take the environment over completely.
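One network-level mitigation for the metadata-endpoint angle is an egress policy that allows outbound traffic but carves out the cloud metadata address (169.254.169.254 on the major clouds). A sketch, with the namespace and labels as illustrative placeholders:

```yaml
# Egress allowlist that excludes the cloud metadata endpoint,
# a favorite SSRF target. Namespace and labels are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-endpoint
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: webhook-worker           # pods that fetch customer-supplied URLs
  policyTypes: ["Egress"]
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32  # block the metadata service
```

This doesn’t stop SSRF against in-cluster targets, so pair it with the default-deny policies described above and with input validation in the application itself.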
Application Layer (L7) Attacks – Remote Code Execution (RCE)
RCE is also extremely dangerous in cloud native environments, making it possible to run system-level commands inside a container to grab files, access the Kubernetes API, run image manipulation tools, and compromise the entire machine.
Application Layer (L7) Defenses
The first rule of protection is to adhere to secure coding and architecture practices — that can mitigate the majority of your risk. Beyond that, you can layer on network defenses along both axes: north-south, to monitor and block malicious external traffic to your applications and APIs; and east-west, to monitor traffic from container to container, cluster to cluster, and cloud to cloud to make sure you’re not being victimized by a compromised pod.
Layer 4 – Nodes
Node-level security isn’t quite as exciting as networking, but it’s just as important. To prevent container breakout on a VM or other node, limit external administrative access to nodes as well as the control plane, and watch out for open ports and services. Keep your base operating systems minimal, and harden them using CIS benchmarks. Finally, make sure to scan and patch your nodes just like any other VM.
Layer 5 – Cluster Components
There are all kinds of things going on in a Kubernetes cluster, and there’s no all-in-one tool or strategy to secure it. At a high level, you should focus on:
- API Server – check your mechanisms for access control and authentication, and perform additional security checks of your dynamic webhooks, Pod Security Policy, and public network access to the Kubernetes API
- Access control – use role-based access control (RBAC) to enforce the principle of least privilege for your API server and Kubernetes secrets
- Service account tokens – to prevent unauthorized access, limit permissions to service accounts as well as to any secrets where service account tokens are stored
- Audit logging – make sure this is enabled
- Third-party components – be careful about what you’re bringing into your cluster so you know what’s running there and why
- Kubernetes versions – Kubernetes can have vulnerabilities just like any other system, and has to be updated and patched promptly
- Kubelet misconfiguration – responsible for managing containers on each node and interacting with the container runtime, the kubelet can be abused and attacked in an attempt to elevate privileges
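The access-control and service-account bullets above can be sketched concretely with RBAC. The example below grants one service account read-only access to pods in a single namespace and disables automatic token mounting; all names are hypothetical:

```yaml
# Least-privilege Role: read-only access to pods in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: app                    # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: app
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: app
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
# Don't mount an API token at all unless the workload needs one.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: app
automountServiceAccountToken: false
```

Using a namespaced Role rather than a ClusterRole, and narrow verbs rather than wildcards, keeps the blast radius small if the service account’s token is ever stolen.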
Kubernetes security can seem daunting, but by working through best practices for each layer of your stack, you can bring your containers to the same high level of protection as the rest of your environment — so you can enjoy the benefits of fast, agile development without putting your environment or your business at risk.
For an in-depth discussion of Kubernetes security, view this webinar.