4 Best Practice Steps for Kubernetes Policy Enforcement

As organizations’ adoption of Kubernetes matures, there is an increasing need for control. Whether compliance or security is driving the requirement, control translates into creating and enforcing policies. Due to the dynamic and ephemeral nature of Kubernetes and containers, this can be a huge challenge. First, how do you create the right policies? And second, how do you enforce them?
Inconsistencies across environments with multiple Kubernetes clusters and multiple users can cause security incidents and downtime, and can put you in breach of compliance requirements. We all know that to be compliant with industry regulations and organizational guidelines, we cannot simply write the policy and trust individuals to follow it. Policy enforcement is already the norm for security and network access. It must become the norm for Kubernetes as well.
Consistency Challenges in Multiuser, Multicluster and Multitenant Kubernetes Environments
Managing cluster configurations becomes unwieldy fast as multiple workloads are inconsistently or manually deployed and modified. Without guardrails, discrepancies creep into configurations across containers and clusters, and they can be challenging to identify, correct and keep consistent. Manually identifying these misconfigurations is highly error-prone and can quickly overwhelm ops teams with code review. DevOps teams then burn out responding to pages and putting out fires, with little time left over to make material improvements to the infrastructure or perform routine upgrades. Organizations waste time and money responding to issues caused by avoidable misconfigurations.
This is where enforcing policy patterns becomes essential to Kubernetes success. Policy enforcement provides a consistent set of standards so that engineering teams avoid creating security vulnerabilities, overconsuming compute resources or introducing noisy workloads. Here are four best practices for Kubernetes policy enforcement.
1. Understand Your Kubernetes Strategy
You may already have a set of policies at your organization. Some may apply to Kubernetes, some may not. You’ll want to start by asking questions about security, configuration and workloads.
For security, you’ll want to be able to answer (a short code sketch follows the list):
- Who has access to clusters?
- What actions can users take within clusters?
- What level of permissions do workloads have within clusters?
- What are the network policies between workloads within your clusters?
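Several of these questions have answers that live directly in Kubernetes objects. As a rough sketch (the namespace, group and object names below are hypothetical), a namespaced Role and RoleBinding spell out who has access to a cluster and which actions they can take, and a default-deny NetworkPolicy is a common baseline for traffic between workloads:

```yaml
# Hypothetical example: give the "app-team" group read-only access to one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-team-read-only
  namespace: payments                # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-read-only
  namespace: payments
subjects:
  - kind: Group
    name: app-team                   # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-team-read-only
  apiGroup: rbac.authorization.k8s.io
---
# Deny all ingress traffic in the namespace unless another NetworkPolicy allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```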
For configuration, you’ll want to be able to answer (a code sketch follows the list):
- Where are Kubernetes resources defined (e.g. in an infrastructure-as-code repo)?
- What changes happened and when?
- What is your code review process for changes to resources?
- What type of resources can be deployed in your clusters?
- Which namespaces are usable by which users?
- Which namespaces are workloads deployed to?
- How do you set the amount of resources available to a workload or namespace?
- What are your common standards/defaults across workloads?
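Two of these questions map directly to built-in Kubernetes objects. As a hedged sketch (the namespace and the numbers are hypothetical), a ResourceQuota caps how much a namespace can consume, and a LimitRange supplies defaults for workloads that don’t set their own requests and limits:

```yaml
# Hypothetical example: cap a namespace's total resources and set per-container defaults.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: payments                # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: payments
spec:
  limits:
    - type: Container
      default:                       # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
      defaultRequest:                # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
```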
For deployment, you’ll want to be able to answer (a pipeline sketch follows the list):
- Who can deploy workloads and services to your clusters?
- How can workloads and services be deployed to your clusters (e.g. via CI, kubectl, or Helm)?
- What is the promotion path between environments?
- Who is responsible for what aspects of your environment?
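The “how” of deployment is often easiest to pin down in the pipeline definition itself. Here is a minimal sketch assuming a Git-based workflow and Helm; the workflow, chart and namespace names are placeholders, and authentication to the cluster is omitted for brevity:

```yaml
# Hypothetical CI job (GitHub Actions syntax shown as one option) that deploys via Helm.
name: deploy-staging
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Cluster credentials (kubeconfig) would normally be configured here.
      - name: Deploy with Helm
        run: |
          helm upgrade --install my-app ./charts/my-app \
            --namespace staging --create-namespace \
            --values ./charts/my-app/values-staging.yaml
```

Keeping a definition like this in version control also helps answer the promotion-path and responsibility questions above: the pipeline becomes the documented, reviewable path from commit to cluster.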
By answering these questions, you can move on to the next step of creating policies to make Kubernetes more secure, reliable and efficient.
2. Create Kubernetes Policies
In Kubernetes, policies are best defined in code. Policy-as-code benefits from version control, auditing, testing and repeatability. You can run these policies in audit mode to monitor existing resources for misconfigurations, or in enforcement mode to prevent new misconfigurations from entering the cluster.
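As one concrete illustration (not the only way to do this), Open Policy Agent’s Gatekeeper project expresses a policy as a ConstraintTemplate plus a constraint, and the constraint’s `enforcementAction` field is what toggles between audit and enforcement. Here is a simplified sketch of a policy requiring resource limits on pods:

```yaml
# Simplified sketch of policy-as-code with OPA Gatekeeper (one option among several).
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlimits
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLimits
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlimits
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.resources.limits
          msg := sprintf("container %v has no resource limits", [container.name])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLimits
metadata:
  name: require-limits
spec:
  enforcementAction: dryrun          # audit mode: report violations without blocking
  # enforcementAction: deny          # enforcement mode: reject non-compliant resources
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```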
Policies fall into three categories:
- Standard policies: enable best practices across all organizations, teams and clusters. Examples include disallowing resources in the default namespace, requiring resource limits to be set, or preventing workloads from running as root.
- Organization-specific policies: enforce best practices that are specific to your organization. Examples include requiring particular labels on each workload, enforcing a list of allowed image registries, or policies that help with compliance and auditing requirements.
- Environment-specific policies: enforce or relax policies for particular clusters or namespaces. Examples include stricter security enforcement in prod clusters, or more permissive policies in namespaces that run core infrastructure.
But simply creating these policies isn’t enough. We’ve all worked in situations where a policy handbook sits on a shelf and gathers dust. When security, compliance and costs hang in the balance, enforcement is necessary.
3. Enforce Those Policies
Simply putting in place a best practices document for your engineering team doesn’t work — it will likely be forgotten or ignored. Kubernetes policy enforcement helps prevent common misconfigurations from being deployed into the cluster, enables IT compliance and governance, and allows teams to ship with confidence knowing that guardrails are in place.
There are three approaches you can take to Kubernetes policy enforcement.
The first is to develop your own internal tools. Engineers naturally like to build their own tooling for a problem; however, leaders need to decide whether their team can spend the time, money and resources developing and maintaining home-grown tooling rather than working on problems that are specific to their business.
The next option is to deploy open source. There are a number of open source tools that can help with security, efficiency and reliability configuration. There are open source auditing tools, like Trivy (for container scanning) and kube-hunter (for penetration testing of clusters), as well as Fairwinds’ own contributions, like Polaris (for configuration validation) and Goldilocks (for auditing resource requests and limits). Open Policy Agent (OPA) is an open source, general-purpose policy engine.
If you select the open source route, your team will spend time deploying and managing each tool. You’ll need to ask whether your team has the bandwidth for this and whether it will still let you focus on the apps or services that make you money.
A third option is to select a policy-driven configuration validation platform. These platforms combine a suite of open source tools behind a single pane of glass. Policies can be federated out to each of your environments: to your CI pipeline as it vets infrastructure-as-code, to your clusters’ admission controllers and to scans that run across every cluster in your fleet. You’ll be able to see when clusters are compliant, as well as when changes might take you out of compliance. For Kubernetes managers, a policy-driven configuration validation platform enables you to automate policy enforcement from your CI/CD pipeline into production, and to maintain a secure, efficient and reliable environment.
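As a small illustration of the CI piece, a pipeline step can run an open source checker such as Polaris against your infrastructure-as-code before anything reaches a cluster. This is a hedged sketch: the workflow syntax is GitHub Actions, the manifest path is a placeholder and the exact flags may vary between Polaris versions:

```yaml
# Hypothetical CI job that vets Kubernetes manifests against policy before deployment.
name: policy-check
on: [pull_request]
jobs:
  polaris:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate manifests with Polaris
        run: |
          # Assumes the polaris CLI is installed on the runner; flags are illustrative.
          polaris audit --audit-path ./k8s \
            --set-exit-code-on-danger \
            --set-exit-code-below-score 90
```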
4. Use Policy Enforcement to Gain Multicluster Visibility
Without visibility, ops teams cannot pinpoint errors that lead to security and compliance events, downtime and overspending on compute resources. If you implement a Kubernetes policy enforcement platform, you can also use it to gain visibility into what’s really happening in your clusters.
Policy dashboards help to bridge the divide between dev and ops teams by providing shared visibility across clusters, so they can anticipate and remediate issues before they cost time or money. Ops teams can use a platform to inspect and apply Kubernetes best practices uniformly. This visibility is a huge benefit to anyone struggling to manage Kubernetes.
By following these four steps, engineering leaders can gain peace of mind knowing that Kubernetes is done right across multiple teams and clusters.