Best Practices for Network Policies on Amazon Elastic Kubernetes Service
With cloud services like Amazon Web Services, the customer shares responsibility with the service provider for managing and securing the hosted virtual infrastructure. Termed the shared responsibility model, this split of duties lays out which areas of configuration and oversight fall to the user and which reside with the provider. It is particularly important to understand that the security and compliance of cloud workloads sit squarely in the customer’s area of responsibility, especially with managed Kubernetes offerings such as Amazon Elastic Kubernetes Service (EKS). With EKS, AWS takes responsibility for securing its infrastructure, patching Kubernetes, and addressing security issues in its software. Customers must secure their own applications, correctly use the available controls within their cloud infrastructure to protect their data and workloads, and safeguard their cloud account and its resources.
By understanding the controls available for Kubernetes and Amazon EKS, and where clusters require additional hardening, customers can make the implementation and maintenance of cluster security stronger and easier. Good EKS security starts with a strong design for your AWS account and your EC2 Virtual Private Cloud (VPC). Understanding the Kubernetes networking framework and applying network protections to your cluster’s components and workloads provide another piece of the EKS security puzzle.
Below are best practices you can follow to set up Kubernetes networks effectively in Amazon EKS.
Enhance Cluster Network Controls Using Calico
The default network configuration in Kubernetes clusters allows network traffic to move freely between pods and to leave the cluster network. These same settings are standard in Amazon EKS clusters – pods can connect to all other pods. By restricting traffic to only the cluster egress and service-to-service connections your workloads actually require, you can reduce potential threats and limit the ability of bad actors or misconfigured workloads to exploit cluster resources.
By installing Calico, an open source CNI (Container Network Interface) plugin that implements the standard Kubernetes NetworkPolicy API, you can create network policies that restrict pod traffic to required connections only. Calico also offers custom extensions to the standard Kubernetes policy types that provide finer-grained controls for ingress and egress traffic. When applying network policies, test them to ensure they block all traffic deemed unnecessary while still allowing required traffic.
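As a concrete illustration, the standard Kubernetes NetworkPolicy API (which Calico enforces) can express a default-deny posture plus targeted allow rules. This is only a sketch; the namespace, pod labels, and port below are hypothetical:

```yaml
# Deny all ingress and egress for every pod in the "prod" namespace
# until another policy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Allow pods labeled app=frontend to reach pods labeled app=backend on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Applying the default-deny policy first and then adding a narrow allow rule per service makes it straightforward to verify that only intended traffic flows.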
Note that Calico and most CNI providers do not currently support Windows containers. If you need network traffic control for Windows workloads, you may need to create a dedicated VPC subnet and use VPC network access-control lists (ACLs) to limit node-to-node traffic. However, limiting pod-to-pod traffic effectively is only possible with a cluster-aware solution.
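For the Windows fallback described above, subnet-level filtering is configured with VPC network ACL rules. A sketch using the AWS CLI, with hypothetical ACL ID, port, and CIDR values:

```shell
# Allow inbound TCP 443 to the Windows node subnet from the VPC CIDR
# (hypothetical values), then deny all other inbound traffic.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress --rule-number 100 \
  --protocol tcp --port-range From=443,To=443 \
  --cidr-block 10.0.0.0/16 --rule-action allow

aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress --rule-number 200 \
  --protocol -1 \
  --cidr-block 0.0.0.0/0 --rule-action deny
```

Remember that network ACL rules are evaluated in rule-number order and apply to the whole subnet, so they cannot distinguish between individual pods on the same node.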
Close Kubernetes API Endpoints to Network Access
Amazon EKS leaves the endpoint of the Kubernetes API server, the interface to the cluster’s control plane, exposed to the Internet by default in new clusters. The API server in EKS clusters runs with the --anonymous-auth=true flag to permit unauthenticated connections, and EKS does not give the user the option to disable this setting. The cluster API endpoint must therefore be protected by limiting network access to the API service to trusted IP addresses only. Amazon EKS offers several options for protecting a cluster’s API endpoint, which can be used in combination:
- Disabling the public API endpoint and using a private endpoint in the cluster’s VPC instead, which gives the strongest protection.
- Whitelisting CIDR blocks to restrict the IP addresses that can connect to the public endpoint.
- Using network policies that block traffic from pods in the cluster to the Kubernetes API endpoint, allowing connections only from workloads that require access.
Enable the Kubernetes API Private Endpoint
As stated, Amazon EKS clusters have an endpoint available to the public Internet by default. As a byproduct of this configuration, traffic between the API server and the nodes in the cluster’s VPC leaves the customer’s private network. By enabling a private endpoint for the cluster, you can keep network traffic inside the VPC. Amazon EKS supports having both public and private endpoints in the same cluster, so even if you need the public cluster API endpoint, you can still use a private endpoint to keep cluster traffic in your private network space.
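As one way to configure this, an eksctl cluster configuration can enable both endpoints and restrict the public one to trusted CIDR blocks. This is a sketch; the cluster name, region, and CIDR are hypothetical:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-west-2
vpc:
  clusterEndpoints:
    # Keep the public endpoint (drop this to true/false as needed)
    # and add a private endpoint inside the VPC.
    publicAccess: true
    privateAccess: true
  # Restrict the public endpoint to trusted source CIDR blocks.
  publicAccessCIDRs:
    - "203.0.113.0/24"
```

The same settings can also be changed on an existing cluster with the aws eks update-cluster-config command.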
Block Kubelet Access
The kubelet service, which runs on every node to manage the lifecycle of Kubernetes pods and containers, requires strong protection because of its integration with the node’s container runtime. Amazon EKS runs kubelet with anonymous authentication disabled and requires authorization from the TokenReview API on the cluster API server, both of which provide a degree of security. However, it is a best practice to adopt additional safeguards by blocking access to the kubelet network port from the pod network. To do so, after installing the Calico CNI, create a GlobalNetworkPolicy that prevents all pods from connecting to the kubelet.
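A sketch of such a policy, using Calico’s GlobalNetworkPolicy schema (the kubelet listens on TCP port 10250; the policy name is arbitrary and the selector syntax is Calico’s):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-kubelet-port
spec:
  # Match workloads in all namespaces, not host endpoints.
  namespaceSelector: has(projectcalico.org/name)
  types:
    - Egress
  egress:
    # Deny pod egress to the kubelet port on any destination...
    - action: Deny
      protocol: TCP
      destination:
        ports: [10250]
    # ...but allow all other egress traffic.
    - action: Allow
```

A policy like this is typically applied with calicoctl apply -f (or via Calico’s Kubernetes CRDs). Because the final Allow rule passes everything else, existing workloads keep their connectivity while the kubelet port is closed off.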
Protect Service Load Balancers
By default, the creation of a Kubernetes Service of type LoadBalancer in an Amazon EKS cluster generates an Internet-facing ELB with no firewall restrictions other than those of the load balancer subnet’s network access-control list (ACL). When only sources inside the cluster’s VPC require access to a service’s endpoint, add the resource annotation service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 to the Service manifest to instruct the cluster’s cloud controller to create an internal load balancer. In situations where the load balancer must be Internet-facing but remain closed to some IP addresses, add the loadBalancerSourceRanges field with a whitelist of approved source addresses to the Service specification.
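The two variants might look like the following Service manifests (the names, labels, ports, and CIDR are hypothetical; recent versions of the AWS cloud provider also accept "true" as the annotation value):

```yaml
# Internal-only load balancer: reachable only from inside the VPC.
apiVersion: v1
kind: Service
metadata:
  name: internal-api
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-api
  ports:
    - port: 443
      targetPort: 8443
---
# Internet-facing load balancer restricted to approved source CIDRs.
apiVersion: v1
kind: Service
metadata:
  name: public-api
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - "203.0.113.0/24"
  selector:
    app: public-api
  ports:
    - port: 443
      targetPort: 8443
```

Note that loadBalancerSourceRanges is enforced via the security groups the cloud controller attaches to the load balancer, so it complements rather than replaces subnet ACLs.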
Beyond operationalizing Amazon EKS network policies for optimal security, you should consider several other best practices for running workloads on Amazon EKS to further protect the cluster and all its workloads. Overly privileged pods, container images with known vulnerabilities or exploitable tools, and misconfigured or excessively open Kubernetes RBAC all constitute additional sources of risk for Amazon EKS users. Additional technical guidance to help users lock down Amazon EKS workloads is available here.
In general, Amazon EKS leaves a lot of the security responsibility to the user, particularly when applying updates and upgrading Kubernetes versions. This responsibility is a blessing and a curse — developers have a lot of flexibility given the spectrum of needs Amazon EKS can help address, but users with multiple clusters will likely need additional automation to lighten the administrative load and to apply critical security patches quickly. Additional monitoring capabilities will also increase visibility into cluster health and assist with the detection and prevention of unauthorized activity and other security incidents. Prioritizing risk management and performing regular audits of the policies and tasks outlined here are foundational to gaining the operational benefits of Amazon EKS while minimizing risk.
Amazon Web Services is a sponsor of The New Stack.