
Open Policy Agent: The Top 5 Kubernetes Admission Control Policies

22 Dec 2020 7:00am, by Torin Sandall

Styra sponsored this post.

Torin is a co-founder of the Open Policy Agent (OPA) project. Torin has spent over 10 years as a software engineer working on large-scale distributed systems projects. Torin is a frequent speaker at events like KubeCon, DockerCon, Velocity, and more. Prior to working on OPA, Torin was a Senior Software Engineer at Cyan (acquired by Ciena), where he designed and developed core components of their SDN/NFV platform.

Kubernetes developers and platform engineers are typically under a metric ton of pressure to keep app deployments humming at a brisk pace. With the scale and power of Kubernetes, this can feel daunting. Maybe you’re a retailer launching a new e-commerce feature for a huge sale. Maybe you’re a bank that’s scaling a finance app worldwide. In either case, compromises always get made in the interest of speed and schedules. Platform teams are increasingly held responsible for ensuring that those compromises — in how Ingress is managed, for instance — don’t result in consequences like customer data being exposed to the entire internet.

Without the right policies in place, the extensive power of Kubernetes can result in consequences that are as grand as the designs. Fortunately, Kubernetes provides the ability to set policies that can limit those consequences, by catching — and preventing — deployment mistakes before they ever reach production. To ensure that your teams’ apps aren’t more consequence than confidence, here are the top five Kubernetes admission control policies that you should have running in your cluster right now.

Note: Each of the sample policies below can be implemented via Open Policy Agent (OPA), the open source, de facto policy engine for cloud native environments. Even simpler, all of these OPA policies can be implemented across clusters in minutes with Styra Declarative Authorization Service (DAS) Free.

1. Trusted Repo

This policy is simple, but powerful: only allow container images that are pulled from trusted repositories and, optionally, pull only those that match a list of approved repo image paths.

Of course, pulling unknown images from the internet (or anywhere besides trusted repos) comes with risks — such as malware. But there are other good reasons to maintain a single source of truth, such as enabling supportability in the enterprise. By ensuring that images only come from trusted repos, you can closely control your image inventory, mitigate the risks of software entropy and sprawl, and increase the overall security of your cluster.

Related policies: 

  • Prohibit all images with the “latest” image tag
  • Only allow signed images, or images that match a specific hash/SHA

Sample policy: 
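A minimal sketch of such a rule in Rego, assuming OPA receives Kubernetes AdmissionReview objects as `input` via a validating webhook; the registry hostname `trusted-registry.example.com` is a placeholder for your own:

```rego
package kubernetes.admission

# Deny any Pod whose containers pull images from outside the trusted
# registry. Substitute your own registry for the placeholder below.
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not startswith(container.image, "trusted-registry.example.com/")
    msg := sprintf("image %q does not come from the trusted registry", [container.image])
}
```

A production version would typically also check `initContainers` and match against a list of approved repo paths rather than a single prefix.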

2. Label Safety

This policy requires all Kubernetes resources to include a specified label and do so with the appropriate format. Since labels determine the groupings of Kubernetes objects and policies, including where workloads can run — front end, back end, data tier — and which resources can send traffic, getting labeling wrong leads to untold deployment and supportability issues in production. Moreover, without access controls over how labels are applied, you lack fundamental security over your clusters. Finally, the danger with manual label entry is that errors creep in, especially because labels are both extremely flexible and extremely powerful in Kubernetes. Apply this policy and ensure that your labels are configured correctly and consistently.

Related policies: 

  • Ensure that every workload requires specific annotations
  • Specify taints and tolerations to restrict where images can be deployed

Sample policy:
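One way to sketch this in Rego — the required label keys `app` and `owner` here are illustrative choices, not a standard:

```rego
package kubernetes.admission

# Label keys that every Pod must carry. These two are example choices;
# replace them with the labels your organization standardizes on.
required_labels := {"app", "owner"}

# Deny Pods that are missing any required label key.
deny[msg] {
    input.request.kind.kind == "Pod"
    provided := {label | input.request.object.metadata.labels[label]}
    missing := required_labels - provided
    count(missing) > 0
    msg := sprintf("missing required labels: %v", [missing])
}
```

The same pattern extends to format checks: add a rule that matches label values against a regex with `regex.match` to enforce naming conventions, not just presence.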

3. Prohibit (or Specify) Privileged Mode

This policy ensures that, by default, containers cannot run in privileged mode — unless you carve out specific circumstances (typically rare) when it is allowed.

Generally, of course, you want to avoid running containers in privileged mode, because it provides access to the host’s resources and kernel capabilities — including the ability to disable host-level protections. While containers are isolated to some extent, they ultimately share the same kernel. This means that if a privileged container is compromised, it can become a jumping-off point to compromise an entire system. Still, there are legitimate reasons to run in privileged mode — just ensure that these times are the exception, not the rule.

Related policies:

  • Prohibit insecure capabilities
  • Prohibit containers from running as root (run as non-root)
  • Set userID

Sample policy:
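A minimal Rego rule along these lines might look like the following; carve-outs for legitimately privileged workloads (e.g. by namespace or service account) would be added as exceptions on top:

```rego
package kubernetes.admission

# Deny any container that requests privileged mode.
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    container.securityContext.privileged == true
    msg := sprintf("container %q must not run in privileged mode", [container.name])
}
```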

4. Define and Control Ingress 

Ingress policy allows you to expose specific services (allow Ingress) as needed, or to expose none at all. In Kubernetes, it is all too easy to accidentally spin up a service that talks to the public internet (there are many examples of this on Kubernetes Failure Stories). At the same time, overly permissive Ingresses can cause you to spin up unnecessary external LoadBalancers, which can also become very expensive (as in monthly budget spend) very fast! Furthermore, when two services try to share the same Ingress, it can just plain break your application.

The policy example below prevents Ingress objects in different namespaces from sharing the same hostname. When hostnames collide, new workloads “steal” internet traffic from existing workloads, with negative consequences ranging from service outages to data exposure and beyond.

Related policies:

  • Require TLS
  • Prohibit/Allow specific ports

Sample policy:
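A sketch of the hostname-conflict rule in Rego. It assumes existing Ingress objects are replicated into OPA under `data.kubernetes.ingresses` (for example by the kube-mgmt sidecar), so the rule can compare the incoming Ingress against Ingresses in other namespaces:

```rego
package kubernetes.admission

# Deny an Ingress whose host collides with an Ingress in another
# namespace. Assumes cluster Ingresses are cached in
# data.kubernetes.ingresses[namespace][name].
deny[msg] {
    input.request.kind.kind == "Ingress"
    host := input.request.object.spec.rules[_].host
    other := data.kubernetes.ingresses[namespace][name]
    namespace != input.request.namespace
    other.spec.rules[_].host == host
    msg := sprintf("ingress host %q conflicts with ingress %v/%v", [host, namespace, name])
}
```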

5. Define and Control Egress 

Every app needs guardrails to control how egress traffic can flow, and this policy lets you control both intra- and extra-cluster communication. As with Ingress, it is easy to accidentally “allow Egress” to every IP in the entire world by default. Sometimes that’s not even an accident — a blanket allow can often be a last-ditch effort to make sure that a newly deployed app can be accessed, even if it is too permissive or introduces risk. There is also the potential, at an intra-cluster level, of unintentionally sending data to services that shouldn’t have it. Both of these situations carry the risk of data exfiltration and theft, if your services are ever compromised. Being overly restrictive with Egress, on the other hand, can sometimes cause misconfigurations that break your application. Achieving the best of both worlds means using this policy to be selective and specific about when Egress is allowed to happen, and to which services.

Related policies:

  • See Ingress policies above

Sample policy: 
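One illustrative approach in Rego is to reject NetworkPolicy objects that open egress to the entire internet:

```rego
package kubernetes.admission

# Deny NetworkPolicies whose egress rules allow traffic to any IP
# on the internet (the 0.0.0.0/0 CIDR).
deny[msg] {
    input.request.kind.kind == "NetworkPolicy"
    egress := input.request.object.spec.egress[_]
    egress.to[_].ipBlock.cidr == "0.0.0.0/0"
    msg := "egress to 0.0.0.0/0 (the entire internet) is not allowed"
}
```

Tightening further — for example, requiring that every allowed CIDR appear on an approved list — follows the same pattern as the trusted-repo policy above.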

With these policies in place, you can focus on building a world-class platform — one that prevents your app devs from accidentally bringing the whole thing down, exposing data to would-be thieves, or generally invoking the specter of manual remediation for you and your team. And of course, if you want to add more essential policies for Kubernetes, check out openpolicyagent.org or explore the library of plug-and-play policies that comes with the free tier of Styra DAS.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Velocity.
