
What Your Kubernetes Security Checklist Might Be Missing

5 Mar 2019 2:02pm, by Jim Bugwadia

Nirmata sponsored this post.

New technologies often require changes in security practices. What is remarkable about containers and Kubernetes is that they also provide the potential to enhance and improve existing security practices.

In this post, I will share a model that we use at Nirmata to help customers understand security concerns and plan Kubernetes implementations that are secure.

Kubernetes Security Model

Jim Bugwadia
Jim has more than 20 years of experience building and leading effective teams and has created software that powers communications systems. Prior to co-founding and becoming CEO of Nirmata, Jim was among the original architects and business leaders within Cisco’s cloud automation practice, where he helped grow revenues to over $250M and IDC recognized the practice as #1 in global cloud services. Prior to his work at Cisco, Jim led engineering teams at startups including Pano Logic, a desktop virtualization startup; Trapeze Networks, a wireless pioneer; and Jetstream Communications, a telecom equipment manufacturer.

At a high-level, we can segment Kubernetes security concerns into three layers and two life-cycle phases:

Layers:

  • Applications: the entire point of Kubernetes is to manage workloads i.e. your applications. Securing this layer involves managing sensitive data that your application requires, as well as securing traffic flows and data within the application. Kubernetes itself provides several abstractions to help manage application security.
  • Clusters: a Kubernetes cluster consists of several control plane components, and components that run on worker nodes. A comprehensive security policy requires understanding how to secure Kubernetes and correctly configure Kubernetes components for each cluster.
  • Infrastructure: like any other software, Kubernetes components require compute, networking, and storage. For Kubernetes, this corresponds to the nodes (virtual or physical hosts) that Kubernetes is installed on. This layer must also be secured, since a compromised host undermines any security configured at the layers above.

Phases:

  • Build: this phase involves the setup — before any workload is executed. For applications, this phase includes the build process and CI/CD pipeline concerns. For clusters and cluster add-on services, this includes the setup and configuration of Kubernetes. For infrastructure, this includes the host preparation process.
  • Operate: this phase involves the ongoing operations and management of Kubernetes components, cluster add-on services, and workloads.

With this model, we list and describe available solutions that address the major security-related concerns.

The figure below summarizes these, followed by details on each item:

Image Scanning

Phase: Build; Layer: Application

Container images are typically built using build orchestration tools, like Jenkins. An image scanning tool needs to be part of the build process to scan each layer used in a container for vulnerabilities. Clair is an open source image scanner, and CNCF-backed image registries like Harbor use Clair to automatically scan all images.

Image Provenance

Phase: Operate; Layer: Application

While image scanning ensures that images that you build are safe, image provenance ensures that images that you run are the ones that you scanned and approved! In other words, enterprises need a way to ensure that only scanned and approved images are run in their clusters. One way of doing that is to provide a list of trusted image registries and use a cluster-wide policy management tool to ensure that images from untrusted registries are not allowed.
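To make the trusted-registries idea concrete, here is a sketch of such a rule written for the open source Kyverno policy engine (a Nirmata project); the registry name is a placeholder, and the exact schema depends on the Kyverno version:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: enforce   # reject non-compliant pods at admission
  rules:
  - name: trusted-registries-only
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Images must be pulled from the approved registry."
      pattern:
        spec:
          containers:
          # Placeholder: replace with your scanned-and-approved registry
          - image: "registry.example.com/*"
```

With a policy like this in enforce mode, any pod whose container image does not match the approved registry prefix is rejected by the admission webhook before it ever runs.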

Secrets Management

Phase: Operate; Layer: Application

Secrets are sensitive data, like passwords and keys, required by your application. The best practice for managing secrets is to use “late-binding” and defer the loading of secrets from a secrets store to the application run-time — typically the initialization phase of the pod. Here is an example of how that can be achieved using HashiCorp Vault and the open source Nirmata Vault Client (see blog and demo video).
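As a baseline, Kubernetes’ built-in Secret resource can hold sensitive data and inject it into a pod at run-time; a minimal sketch (all names and values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                  # stringData avoids manual base64 encoding
  password: "not-a-real-password"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD     # injected when the pod starts, not baked into the image
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```

Note that built-in Secrets are only base64-encoded at rest unless etcd encryption is configured, which is one motivation for the external secrets-store approach described above.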

Namespaces

Phase: Operate; Layer: Application;

Kubernetes Namespaces allow logical segmentation and isolation of resources, basically allowing one physical cluster to appear as several virtual clusters. Whenever possible, applications should be isolated to their own namespaces. This is important as several other Kubernetes features, such as RBAC and Resource Quotas, can be applied at the namespace level. However, it is important to note that namespaces do not automatically provide network isolation — this requires configuration of Network Policies.
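A sketch of a namespace with a Resource Quota attached (the names and limits are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # quotas apply per namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU requests allowed in the namespace
    requests.memory: 8Gi    # total memory requests allowed
    pods: "20"              # maximum number of pods
```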

Network Policies

Phase: Operate; Layer: Application

A Kubernetes Network Policy is like a firewall rule that allows fine-grained control of ingress and egress traffic to each application component, i.e. a pod. Kubernetes network policies should be configured at a Namespace level, for defaults, and at a workload level for each component. Simply configuring Network Policies does nothing on its own — a CNI plugin that can enforce network policy rules, like Calico, is also needed.
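For example, a common namespace-level default is to deny all traffic and then allow specific flows with additional, more targeted policies; a sketch (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:               # listing a type with no allow rules denies that traffic
  - Ingress
  - Egress
```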

Role-Based Access Controls

Phase: Build; Layer: Cluster

Kubernetes provides granular role-based access control (RBAC) capabilities to manage access to resources. A Role defines a set of permission rules that specify which operations are allowed and on which entities. A RoleBinding applies a role to a user identity or service account. Both constructs, Role and RoleBinding, apply at a Namespace level. A ClusterRole and ClusterRoleBinding apply cluster-wide.
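A minimal sketch of a namespaced Role and its RoleBinding (the user name and namespace are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User                 # could also be a ServiceAccount or Group
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```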

Note that Kubernetes does not provide any options to manage users — we will address this later in the Identity and Access Management section.

While Kubernetes provides rich access controls, these need to be configured and managed across clusters. For enterprise use cases, you will require a common way to manage RBAC across clusters and on any infrastructure.

Audit Policies and Logging

Phase: Build; Layer: Cluster

A Kubernetes Audit Policy defines which events need to be recorded and controls what data should be included in the audit records. The Audit Policy can be configured for different storage backends. Starting with Kubernetes 1.13, you can also configure AuditSink objects, which enable a dynamic backend that receives events via a webhook API.

An Audit Policy and backends that record audit events must be configured for each Kubernetes cluster at the API Server level.
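A minimal audit policy sketch; note that rule order matters, since the first matching rule determines the audit level for a request:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record access to sensitive resources at the Metadata level only,
# so request bodies (i.e. secret payloads) are never written to the log
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Skip noisy read-only requests for events
- level: None
  verbs: ["get", "list", "watch"]
  resources:
  - group: ""
    resources: ["events"]
# Catch-all: record request metadata for everything else
- level: Metadata
```

The policy file is passed to the API server via the --audit-policy-file flag, together with a backend flag such as --audit-log-path.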

Certificate Management

Phase: Build; Layer: Cluster

Kubernetes components use X.509 certificates for authentication and encryption. All Kubernetes certificates must be signed by a Certificate Authority (CA), however, the CA itself can be self-signed. For an enterprise deployment, it is important to have a certificate management policy in place which ensures that Kubernetes certificates can be easily managed across clusters.

Pod Security Policies

Phase: Build; Layer: Cluster

Kubernetes Pod Security Policies manage rules for pod configuration and updates. Pod Security Policies are cluster-wide resources and need to be enabled by the PodSecurityPolicy admission controller. Simply creating a Pod Security Policy does nothing — each pod’s service account must be authorized to use it. Pod Security Policies can control the running of privileged containers, use of the host network namespace, use of the host file-system, and several other important privileges.
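A sketch of a restrictive Pod Security Policy (the four rule sections are required by the PSP schema even when set to RunAsAny):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false          # no privileged containers
  hostNetwork: false         # no access to the host network namespace
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot   # containers may not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                   # allow only non-host-path volume types
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```

Remember that this policy only takes effect once the PodSecurityPolicy admission controller is enabled and the pod’s service account is granted the `use` verb on the policy via RBAC.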

Identity Management

Phase: Operate; Layer: Cluster

While Kubernetes RBAC provides granular control over the access granted to entities, Kubernetes does not provide any construct to manage user identities. This makes sense, as the best practice is to manage user identities via a central Identity Provider (IdP) such as Active Directory or other directory services. In addition, it is important for enterprises to consider Single Sign-On (SSO) so that development and operations teams have a good user experience when managing multiple clusters across different infrastructure stacks or cloud providers.
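For example, the Kubernetes API server can delegate authentication to an OIDC-compatible IdP; a sketch using a kubeadm ClusterConfiguration (the issuer URL and client ID are placeholders for your IdP’s values):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # Placeholders: point these at your enterprise identity provider
    oidc-issuer-url: "https://idp.example.com"
    oidc-client-id: "kubernetes"
    oidc-username-claim: "email"
    oidc-groups-claim: "groups"
```

Groups asserted by the IdP can then be referenced as subjects in RoleBindings, tying the identity provider back into the RBAC model described earlier.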

Kubernetes Upgrades

Phase: Operate; Layer: Cluster

Kubernetes is a fast-moving project with minor feature releases every three months, and patches and security fixes released more often. This means that enterprises need to be prepared to upgrade Kubernetes components often — on production clusters. A managed Kubernetes service or a management tool that ensures safe and timely upgrades is required to operationalize Kubernetes.

CIS Benchmarks for Kubernetes

Phase: Operate; Layer: Cluster

The Center for Internet Security (CIS) publishes a list of over a hundred recommendations and best practices for securing Kubernetes clusters. For secure operations, it’s essential to be able to audit clusters against the CIS benchmarks. There are open source tools, like kube-bench from Aqua Security, that can help automate running the scans. However, for production deployments, you will still need additional tooling to collect, report, and analyze results.

Minimal OS

Phase: Build; Layer: Infrastructure

Containers have been a game-changer across the entire infrastructure stack, including operating systems. CoreOS (acquired by Red Hat, which was then acquired by IBM) initially popularized the concept of a minimal operating system designed only for running containers, with features like atomic updates and clustering. Since then, every major OS vendor has followed with stripped-down distributions for containers. Reducing the operating system footprint results in a smaller attack surface, and hence a more secure deployment.

OS Hardening

Phase: Build; Layer: Infrastructure

Most operating systems are insecure by default, and require hardening to minimize exposure to threats and vulnerabilities. There are well-known procedures and standards for OS hardening, and these must be followed when building hosts that will run Kubernetes components.

CIS Benchmarks for Docker

Phase: Operate; Layer: Infrastructure

Kubernetes requires a container engine, like Docker CE or containerd, to operate. Container engines must also be secured and hardened. As with securing Kubernetes clusters, the Center for Internet Security (CIS) also publishes comprehensive benchmarks for securing container engines. These should be followed when building Kubernetes nodes.

Conclusion

Kubernetes is a complex system, and securing it requires thinking about several different layers of the stack, covering build- and configuration-time concerns as well as run-time concerns. In this post, I presented a security model we use at Nirmata to guide our enterprise customers with enterprise-wide Kubernetes adoption.

Kubernetes provides a number of security constructs that can be leveraged to create a highly secure environment. However, what should be fairly obvious is that enterprise-wide Kubernetes security requires a management plane that constantly validates and audits configurations and compliance across clusters, to ensure that Kubernetes is correctly configured and secured.

I am also assuming that Kubernetes clusters are being managed within a single enterprise. As container security expert Jessie Frazelle details in her blog post, hard multitenancy with Kubernetes is still an unsolved problem.

But what’s most exciting to me is that there are new innovations in the community, like the work being done on rootless Kubernetes that will make Kubernetes even more secure in the future. It’s a great time to build!

Feature image via Pixabay.
