6 Kubernetes Security Best Practices

Sure, Kubernetes gives us a good set of core software security principles to work with, but we still have to understand and implement them. With a distributed deployment such as a Kubernetes cluster, the number of attack vectors increases, and it is important to know the best practices for keeping that attack surface as small as possible.
Even when using a managed Kubernetes service, some ownership of security still falls to us end users. The cloud vendor is typically responsible for managing and securing the control plane of the Kubernetes cluster (API server, scheduler, etcd, controllers), and customers are responsible for securing the data plane (node pools, ingress, networking, service mesh, etc.).
I started working with Kubernetes about four years ago, running local clusters with minikube and Vagrant-provisioned Linux VMs, and have since become more familiar with the newer cloud services. Based on that experience, here are six Kubernetes security best practices that should be helpful whether you are running open source Kubernetes yourself or using a managed Kubernetes service from the likes of Oracle, Azure, AWS or another cloud provider.
1. Use Role-Based Access Control (RBAC)

Role-based access control (RBAC) lets you control who can access the Kubernetes API and what permissions they have. RBAC is enabled by default in current Kubernetes releases. However, if you upgraded from a very old Kubernetes release and never enabled it, check your settings to make sure RBAC is turned on.
Keep in mind that simply enabling RBAC is not enough; you also need to manage the authorization policies and use them properly. Use RBAC to limit users and groups to just the actions they need, and follow the principle of least privilege so that users and Kubernetes service accounts hold the minimal set of privileges required. Avoid granting cluster-wide permissions, and do not give anyone cluster-admin privileges unless absolutely necessary. Refer to the official Kubernetes RBAC documentation for more information.
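As a minimal sketch of least privilege, the namespaced Role and RoleBinding below grant a single user read-only access to pods in one namespace instead of cluster-wide rights; the namespace, user and object names are hypothetical.

```yaml
# Namespaced Role: read-only access to pods and their logs in "staging"
# (namespace and names are placeholders).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to one user rather than handing out cluster-admin.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods
subjects:
- kind: User
  name: jane@example.com   # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```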
For Kubernetes clusters created and managed through a cloud service, the vendor may also offer an identity and access management service that integrates with the cluster; the vendor's documentation provides more details. Multifactor authentication (MFA) is another option for strengthening authentication to the Kubernetes API when a single factor is not enough to verify a user's identity.
2. Secrets Should Be Secrets
Secrets contain sensitive data such as a password, a token or an SSH key, and Kubernetes secrets help securely initialize pods with artifacts like keys, passwords and tokens. When a pod starts up, it will generally need to access its secrets, and whenever a service account is created, a Kubernetes secret storing its authorization token is automatically generated. Kubernetes supports encryption at rest, which encrypts secret resources in etcd so that someone who gains access to etcd or to your etcd backups cannot read the contents of those secrets.
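On a self-managed cluster, encryption at rest is enabled by pointing the API server's --encryption-provider-config flag at a configuration file like the sketch below (the key name and key material are placeholders); on a managed service, the provider typically exposes this as a cluster setting instead.

```yaml
# EncryptionConfiguration passed to kube-apiserver via
# --encryption-provider-config (key name and key material are placeholders).
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts secret resources before they are written to etcd
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      # identity allows reading secrets stored before encryption was enabled
      - identity: {}
```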
Encrypting secrets at rest provides an additional layer of defense when backups are not encrypted or an attacker gains read access to etcd. Also ensure that communication between users and the API server, and from the API server to the kubelets, is protected with TLS, as explained in the Kubernetes documentation. Keeping the lifetime of a secret or credential short makes it harder for an attacker to use it, so set short lifetimes on certificates and automate their rotation.
Also be aware of third-party integrations that request access to the secrets of your Kubernetes cluster: carefully review the RBAC permissions and access being requested, or you may compromise the security profile of your cluster. If you are using Oracle Kubernetes Engine, refer to Encrypting Kubernetes Secrets at Rest in Etcd for more information.
3. Private Kubernetes API Endpoint
Kubernetes cluster administrators and operators can place the Kubernetes API endpoint of a cluster in either a private or a public subnet. In a private cluster, the API server endpoint inside the control plane has a private IP address, making the control plane inaccessible from the public internet. In addition to using private worker nodes, you should configure the Kubernetes API endpoint as a private endpoint. This is important if you need fully private clusters that don't use or expose any public IPs and allow no ingress or egress of traffic to or from the public internet. Network access to the cluster API endpoint can be controlled using security access control lists or, at a more granular level, network security settings. For example, Oracle's Kubernetes Engine gives you the option of configuring both the Kubernetes API endpoint and the worker nodes as private.
4. Secure Nodes and Pods
Nodes: A Kubernetes node is a worker machine, either a VM or a physical server, that typically runs the Linux operating system (OS). The services running on a node include the container runtime, the kubelet and kube-proxy. Hardening and securing the OS running on the nodes is important; this responsibility is shared between the cloud provider and the Kubernetes administrator.
For example, Oracle Kubernetes Engine nodes come with a hardened Linux image. Once the nodes have been provisioned, the Kubernetes administrator should regularly apply security patches to that Linux image, or use the service provider's automatic upgrade capability. Following the Center for Internet Security (CIS) Kubernetes benchmark recommendations for nodes is another good practice.
In addition to OS security, it is recommended that nodes sit on a private network and are not accessible from the internet. If needed, a gateway can be configured for access to services outside the cluster network. Access to network ports on the nodes should be controlled through network access lists, and Secure Shell (SSH) access to the nodes should be limited. The Oracle Kubernetes Engine node pool security documentation provides further guidance.
Pods: A pod is a group of one or more containers that run on nodes and can use shared or dedicated storage. By default, there are no restrictions on which pods can communicate with each other. Use network policies to define the rules of communication for pods within a cluster. Network policies are implemented by the network plugin, so using them requires a networking provider that supports them. Oracle Kubernetes Engine, for example, offers multiple options to secure communication to and from the workloads in your cluster.
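A common starting point is a default-deny ingress policy plus narrow allow rules. The sketch below assumes a hypothetical namespace, labels and port, and a network plugin that enforces policies.

```yaml
# Deny all ingress traffic to pods in the "apps" namespace by default
# (namespace and labels are placeholders).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: apps
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Then explicitly allow traffic to backend pods only from frontend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: apps
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```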
For the best network security posture, evaluate using a combination of network policies to secure pod-level network communication and security lists to secure host-level network communication. The Kubernetes pod security context defines the privilege and access-control settings for a pod or container, so review and make use of the security context settings in your pod and container manifests. Pod security policies let you control runtime properties of pods, such as the ability to run privileged containers or to use the host's file system, network and ports. By default, a pod may be scheduled on any node in the cluster, and Kubernetes offers multiple ways to control the assignment of pods to nodes, such as node selectors and affinity rules for placement, and taints and tolerations for placement and eviction. If you are using Oracle Kubernetes Engine, you can set up pod security policies for the cluster as explained in the documentation.
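To make the security context settings concrete, here is a rough sketch of a pod manifest that refuses to run as root, blocks privilege escalation and drops unneeded capabilities; the pod name, image and user/group IDs are placeholders.

```yaml
# Pod-level and container-level security context settings
# (name, image and IDs are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true          # refuse to start containers that run as root
    runAsUser: 10001
    fsGroup: 10001
  containers:
    - name: app
      image: registry.example.com/app:1.0
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]         # drop all Linux capabilities the app does not need
```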
5. Eliminate Container Security Risks
Applications are packaged as container images, commonly Docker images. Container images are stored in and pulled from a container registry, then instantiated as running containers inside pods. Security must be a design principle right from the beginning of the development process, when you are working on the source code and libraries used to build the container images for your applications.
Implement security practices in your CI/CD tool chain and throughout the build, store and deploy process for container images. These include securely storing the container images, scanning them for security vulnerabilities and managing the runtime security of the containers. As part of your DevSecOps cycle, automate vulnerability scanning of the third-party libraries you use to build your applications. If you are using Oracle Kubernetes Engine, for example, you can look at partner solutions like NeuVector, Deepfence, Aqua Security and Prisma Cloud Security, as well as the native container image scanning, signing and verification capabilities of the platform.
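As one way to wire scanning into a pipeline, the sketch below assumes GitHub Actions with the open source Trivy scanner and a placeholder image name and registry; substitute whichever CI system and scanner you actually use.

```yaml
# GitHub Actions job sketch: build an image and fail the job on
# high/critical vulnerabilities (image name and registry are placeholders).
name: image-scan
on: [push]
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/app:${{ github.sha }} .
      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: registry.example.com/app:${{ github.sha }}
          exit-code: "1"              # fail the build if findings remain
          severity: CRITICAL,HIGH
          ignore-unfixed: true
```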
When building Docker images and containers, use hardened, slim OS base images and ensure that the user running the application has the lowest level of OS privileges necessary to run the processes inside the container. Also remember to regularly apply security updates to the base image and then redeploy your workloads as updated containers. Use a private registry such as Oracle Cloud Infrastructure Registry, with proper access control and policies in place, plus governance for the management of container images. Signing container images and maintaining a system of trust for the content of containers is also recommended.
6. Auditing, Logging and Monitoring Are Essential
Auditing, logging and monitoring are important security capabilities that can improve the security posture of your cluster and should not be overlooked. Kubernetes audit logs record each call made to the Kubernetes API server in detail. They provide useful information about what is happening in a cluster and can be used for auditing, compliance and security analysis: audit records capture the complete sequence of activities and can help detect anomalous behavior and access to sensitive resources.
It is recommended to enable audit logging and save the audit logs in a secure repository for analysis in the event of a compromise. Kubernetes also provides cluster-level logging to record container activity in a central logging subsystem: the standard output and standard error of each container can be collected by an agent such as Fluentd running on every node, shipped to a tool like Elasticsearch and viewed with Kibana. Finally, monitor the containers, pods, applications, services and other components of your cluster using tools such as Prometheus and Grafana for metrics and visibility, and Jaeger for tracing.
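As a starting point for audit logging, a minimal audit policy might look like the sketch below; on a self-managed cluster it is passed to the API server with the --audit-policy-file flag, while managed services usually expose audit logging as a cluster or logging setting. The rule order shown here is one possible choice, not a prescribed configuration.

```yaml
# Minimal audit policy sketch; rules are matched in order, first match wins.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log only metadata for secrets and configmaps so sensitive values
  # never end up in the audit log.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Log full request and response bodies for changes to RBAC objects.
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
  # Log metadata for everything else.
  - level: Metadata
```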
A good resource for learning more about this topic is O'Reilly's "Kubernetes Security" book by Liz Rice and Michael Hausenblas. If you use Oracle Kubernetes Engine, as I do, you can review the OCI Security Guide and some additional recommendations for securing Oracle Kubernetes Engine. As noted above, I also take advantage of the native identity and authentication functionality in Oracle Cloud Infrastructure.
Regardless of where you deploy, I hope this post helps you get a clearer understanding of your role and your options for securing Kubernetes.