The dynamic nature of cloud native platforms and the simplicity of deployment that containers bring aren’t always an advantage if they let developers create systems that aren’t secure or break company policy. And while what you deploy with a containerized application is the same every time, it doesn’t always stay the same if someone ends up adding extra tools or permissions to a cluster to fix a problem in production. Those manual interventions don’t scale, and neither does having policy be something your devops team has to implement by hand.
Whether policy is about meeting security, governance and compliance rules or just codifying what you’ve learned from past incidents and mistakes to make sure they don’t get repeated, it has to be applied automatically rather than manually to keep up with the speed and scale of cloud native technologies.
The admission controller webhooks introduced in beta in Kubernetes 1.9 (the same mechanism that lets Istio inject Envoy sidecars and enables automated provisioning of persistent volumes) are also an excellent way of applying policy without recompiling the Kubernetes API server, whether that's validating an image repository before an object is deployed or enforcing unique ingress hostnames. If one team is using a specific ingress hostname, you can block other teams from using it so there aren't conflicts.
Admission webhooks also enable whitelisting or blacklisting container registries, so you could restrict developers to a corporate registry.
These webhooks are executed whenever a resource is created, updated or deleted; they intercept requests to the Kubernetes API server after the request has been authenticated and authorized, but before the requested object is persisted to etcd. They can be validating, mutating, or both.
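Webhooks are registered with the API server through a configuration object. A minimal sketch of a validating webhook registration, using the beta API from Kubernetes 1.9 (the names, namespace and service here are hypothetical placeholders), looks something like this:

```yaml
# Hypothetical validating webhook registration (Kubernetes 1.9 beta API).
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-validation
webhooks:
  - name: validate.policy.example.com   # hypothetical webhook name
    rules:
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE", "DELETE"]
        resources: ["*"]
    clientConfig:
      service:
        namespace: policy-system        # hypothetical namespace
        name: policy-webhook            # hypothetical in-cluster service
      caBundle: ""  # base64-encoded CA certificate for the webhook's TLS cert
    failurePolicy: Fail                 # reject requests if the webhook is unreachable
```

The `failurePolicy` setting is the trade-off to think about: `Fail` enforces policy even if the webhook is down, at the cost of blocking deployments, while `Ignore` lets requests through.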
Validating admission webhooks intercept requests and reject any that don’t comply with policy; they don’t make any changes to objects so they can run in parallel. That lets you restrict resource creation to match policy, like setting a team limit on the number of replicas a service can run with, blocking deployment of code that’s tagged as not ready for production or ensuring that all resources are labelled.
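A replica cap like that could be written as a deny rule for a policy engine such as Open Policy Agent (covered below). This is a hypothetical sketch; the limit and package name are made up for illustration:

```rego
package admission

# Hypothetical rule: reject Deployments that ask for more
# than 10 replicas.
deny[msg] {
    input.request.kind.kind == "Deployment"
    replicas := input.request.object.spec.replicas
    replicas > 10
    msg := sprintf("%v replicas requested; the team limit is 10", [replicas])
}
```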
Mutating admission webhooks have to run serially rather than in parallel, because each one can change the object; the API server sends the request to a webhook server (which can be an HTTP server running in the cluster or a serverless function elsewhere) that returns the modified request. Adding the Envoy proxy as a sidecar, for example, mutates the object that's deployed. Instead of simply rejecting requests, mutating admission webhooks can change them so they comply with policy and are allowed to complete; for example, adding required tags and labels to objects so they're easy to audit by project or team, or changing the load balancer requested so it's an internal load balancer.
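The mutation itself is typically expressed as a JSONPatch, returned (base64-encoded) in the webhook's AdmissionReview response. A hypothetical patch that adds a team label, assuming the object already has a labels map, could look like:

```json
[
  { "op": "add", "path": "/metadata/labels/team", "value": "payments" }
]
```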
Creating admission webhooks can be complex, and one option is to use Open Policy Agent, a CNCF-hosted sandbox project (which means it's experimental and not necessarily ready for production). This is a general-purpose policy engine that validates JSON against policies, so you can use the same tool to apply policy to, say, Kubernetes, Terraform, access to REST APIs and remote connections over SSH.
Policies are written as rules or queries in Rego, OPA’s declarative policy language, with deny rules specifying policy violations. The inputs and outputs are both JSON so you can update the policies without recompiling (and the JSON output can be the modified request that meets policy).
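As an example, a deny rule enforcing the registry restriction mentioned earlier might look like this sketch, where the package name and registry hostname are placeholders:

```rego
package admission

# Hypothetical rule: deny pods whose containers pull images
# from outside the corporate registry.
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not startswith(container.image, "registry.corp.example.com/")
    msg := sprintf("image %v is not from the corporate registry", [container.image])
}
```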
The new open source Kubernetes Policy Controller project from Microsoft's Azure containers team is a validating and mutating admission webhook that uses OPA; introducing the project at KubeCon this week, Microsoft open source architect Dave Strebel said the Kubernetes Policy Controller would be moving into the OPA project soon.
The Kubernetes Policy Controller also extends the standard Kubernetes role-based access control (RBAC) authorization module by adding a blacklist in front of it. Any authorized request can be blocked, including the execution of kubectl commands on a pod. Or it can be used for auditing, to see whether any policies are being violated on a specific cluster.
There are some sample policies in the repo already, including validating that ingress hostnames are unique across all namespaces and restricting all create, update or delete requests to resources to a named set of users. The project will also host sample policies contributed by the community, to give devops teams a library of policies to use; there’s a Slack channel for collaborating on policies.
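The unique-hostname check needs the existing ingresses from all namespaces to be available to the policy engine as data. A sketch of such a rule (the data layout and names here are assumptions for illustration, not the project's actual sample) might be:

```rego
package admission

import data.kubernetes.ingresses

# Hypothetical rule: deny an ingress whose host is already
# claimed by an ingress in a different namespace. Assumes
# cluster ingresses are replicated into OPA under
# data.kubernetes.ingresses[namespace][name].
deny[msg] {
    input.request.kind.kind == "Ingress"
    host := input.request.object.spec.rules[_].host
    other := ingresses[namespace][_]
    namespace != input.request.object.metadata.namespace
    other.spec.rules[_].host == host
    msg := sprintf("ingress host %v is already used in namespace %v", [host, namespace])
}
```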
The advantage of OPA and the Kubernetes Policy Controller is that you can decouple policy from applications, Strebel pointed out; policy can be written once and applied to multiple applications across the stack.
Using policies can add a little latency, although for most applications he suggested it would be negligible. The deployment for the Kubernetes Policy Controller is three containers with policy running in memory; that adds a little overhead but makes it suitable for applications that are very sensitive to latency.
Rego will be a new language for many developers, and because the impact of applying policy can mean that requested objects and resources aren't available, it's important to get the policy rules right. It's also important to be careful when mutating objects, Strebel noted; because the object gets changed, it isn't what the developer expected to get back, and that can cause unexpected behavior or different outcomes. But these are relatively minor drawbacks compared to the advantage of automatically enforcing policy on Kubernetes clusters and being able to audit that it's being enforced.
Expect these tools to develop quickly, because policy is going to become increasingly important as Kubernetes deployments are used for more enterprise applications where compliance and governance are critical.
The Cloud Native Computing Foundation, and KubeCon+CloudNativeCon are sponsors of The New Stack.
Feature image: CNCF Co-chair Liz Rice, at KubeCon 2018, demonstrates how admission controller webhooks could block malicious YAML code.