The Amazon Elastic Container Service for Kubernetes (Amazon EKS), which was launched at AWS re:Invent in December last year, is going to become generally available soon. Amazon took about six months to ensure that the managed Kubernetes service is ready for production deployments. Since the announcement, the Amazon EKS team has been busy integrating some of the core features of AWS with Kubernetes.
AWS is already home to a sizeable number of Kubernetes clusters running production workloads. Kops, the open source deployment tool, is a big hit with the community; many customers use it to rapidly provision multi-node Kubernetes clusters in Amazon EC2. Kops does a decent job of applying deployment best practices to clusters running in production. The expectation is that Amazon EKS beats the experience and performance of clusters deployed through Kops.
One of the reasons why EKS is taking a longer time to become generally available is the integration with existing building blocks of AWS. From VPC networking to IAM, Amazon has carefully integrated the core services without breaking the expected behavior. Customers can also take advantage of standard AWS tools such as CloudWatch and CloudTrail for monitoring and logging EKS workloads. Amazon EKS passed the Cloud Native Computing Foundation conformance tests to become a certified hosted platform, which means that all the plugins and extensions that work with upstream Kubernetes will work as-is in EKS.
The Provisioning Experience
Amazon EKS is slightly different from other managed services such as RDS and EMR. Customers have additional control and better visibility into the service when compared to those offerings.
There are three high-level steps involved before you run your first containerized workload in EKS:
- Provision the masters — This step involves choosing the region, the subnets of a VPC, the ARN of an IAM role used by the nodes, and the security groups that enable communication between the masters and the nodes. Behind the scenes, EKS creates a highly available Kubernetes control plane spread across three availability zones. What’s important to note is that this control plane also runs a highly available etcd cluster in multi-AZ mode. The output from this step is the URL and ARN of the masters.
- Provision the workers — Customers have the ability to choose the worker node configuration from the available EC2 instance families. Depending on the type of workload, a general-purpose or a network/storage-optimized instance can be chosen. The nodes are deployed from an existing Amazon Linux AMI as an Auto Scaling group. Customers can also choose a different AMI that’s customized through Packer scripts. The IAM role assumed by these instances allows them to join the control plane created in the previous step.
- Configuring kubectl — Once the masters and workers are in place, we have to point kubectl, the Kubernetes command-line client, to the API server exposed by the control plane. To authenticate and authorize the user, EKS expects a slight modification to the standard configuration file: embedding an authentication extension that picks up credentials from the standard AWS CLI configuration file.
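The kubectl configuration described in the last step can be sketched as a kubeconfig file that delegates token generation to the Heptio authenticator binary. The cluster name, endpoint URL, and certificate data below are placeholders, not real values:

```yaml
# Hypothetical kubeconfig sketch for an EKS cluster; endpoint,
# certificate data, and cluster name are illustrative placeholders.
apiVersion: v1
kind: Config
clusters:
- name: my-eks-cluster
  cluster:
    server: https://EXAMPLE.us-west-2.eks.amazonaws.com
    certificate-authority-data: BASE64_ENCODED_CA_CERT
contexts:
- name: my-eks-cluster
  context:
    cluster: my-eks-cluster
    user: aws
current-context: my-eks-cluster
users:
- name: aws
  user:
    exec:
      # The exec plugin invokes the Heptio authenticator, which reads
      # credentials from the standard AWS CLI configuration.
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - token
        - -i
        - my-eks-cluster
```

With this file in place, every kubectl invocation transparently exchanges the caller's IAM credentials for a short-lived token, so no Kubernetes-specific secrets need to be distributed.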
The Amazon EKS team has retained much of the standard workflow and experience of dealing with Kubernetes. Anyone familiar with Minikube or other managed services such as AKS or GKE can seamlessly switch to EKS.
Integration with Amazon VPC
AWS has built a Container Network Interface (CNI) plugin that integrates Kubernetes’ overlay network with the AWS networking control plane. EKS runs a network topology that integrates with VPC. This extension enables customers to consider an EKS deployment as a logical extension of their AWS deployments. Network access controls, routing tables, and private/public subnet topologies extend seamlessly to applications running within Amazon EKS.
Unlike in other environments, the pods running in EKS get an IP address that belongs to the CIDR of the subnet in which the node is deployed. These IP addresses are routable within the VPC, and they comply with all the policies and access controls defined at the network level.
Each node runs a daemon set that hosts the AWS-specific CNI control plane. Every time the kubelet schedules a pod, it asks the daemon set to allocate an IP address. At this point, the CNI control plane assigns a secondary IP address to one of the elastic network interfaces (ENIs) attached to the node and hands it over to the kubelet. This is fundamentally different from typical overlay networks based on Flannel or Contiv.
The flip side of this approach is that nodes may run out of secondary IP addresses. There is a hard limit on how many ENIs can be attached to an EC2 instance, and another limit on how many secondary IP addresses can be created per ENI. These limitations force customers to plan the node configuration ahead of deployment. Of course, it is also possible to create a specific node group and attach it to an existing cluster.
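The pod-capacity ceiling implied by those ENI limits can be worked out with a little arithmetic. The sketch below assumes one secondary IP per pod, with one primary address reserved per ENI; the per-instance-type limits are examples drawn from AWS's published EC2 ENI limits and should be checked against current documentation:

```python
# Sketch of the per-node pod capacity math imposed by the VPC CNI plugin.
# ENI and IP-per-ENI limits are illustrative values for a few instance types.
ENI_LIMITS = {
    # instance type: (max ENIs, IPv4 addresses per ENI)
    "t3.medium": (3, 6),
    "m5.large": (3, 10),
    "c5.xlarge": (4, 15),
}

def max_pods(instance_type: str) -> int:
    """Upper bound on pods per node when every pod needs a secondary IP.

    One address on each ENI is the primary address (not assignable to
    pods); two extra slots account for pods that use host networking,
    such as kube-proxy and the CNI daemon set itself.
    """
    enis, ips_per_eni = ENI_LIMITS[instance_type]
    return enis * (ips_per_eni - 1) + 2

for itype in ENI_LIMITS:
    print(f"{itype}: up to {max_pods(itype)} pods")
```

Running this for an m5.large, for instance, yields 3 × (10 − 1) + 2 = 29 pods, which illustrates why instance sizing has to be decided before deployment.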
The deep integration with VPC makes many customers choose EKS over DIY deployments done through Kops.
Extending AWS IAM to Kubernetes RBAC
AWS’ identity and access management (IAM) is one of the best identity platforms in the industry. The modular design based on users, groups, policies, and roles delivers the right level of granularity for defining authentication and authorization schemes.
Kubernetes also has a well-defined role-based access control (RBAC) model in the form of service accounts, cluster roles, and role bindings.
Amazon is leveraging an open source project called Authenticator for AWS, built by Heptio. This authenticator bridges the gap between AWS IAM and Kubernetes RBAC. Based on Kubernetes’ webhook token authentication, the tool checks with IAM each time an operation is performed.
With version 1.10, pluggable authentication providers are part of the upstream Kubernetes distribution. The timing worked in favor of Amazon EKS, which exploits this feature to support standard kubectl without any hacks.
The integration of the Heptio authenticator enables EKS to seamlessly extend existing IAM identities to Kubernetes RBAC. It avoids hard-wiring credentials into each node and also provides a mechanism that makes standard kubectl work with EKS.
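The IAM-to-RBAC mapping itself lives in a ConfigMap in the cluster. The sketch below shows the general shape of that mapping; the account ID, role name, and user name are placeholders invented for illustration:

```yaml
# Hypothetical sketch of the aws-auth ConfigMap that maps IAM
# identities to Kubernetes RBAC groups. ARNs are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Lets worker nodes assuming this IAM role join the cluster.
    - rolearn: arn:aws:iam::111122223333:role/eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    # Grants an individual IAM user cluster-admin rights.
    - userarn: arn:aws:iam::111122223333:user/alice
      username: alice
      groups:
        - system:masters
```

Because the mapping is declarative, cluster operators can grant or revoke Kubernetes access by editing IAM roles and this ConfigMap, without ever distributing kubeconfig credentials by hand.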
Amazon has been late to the Kubernetes party, but the delay helped the AWS team learn from existing implementations of managed Kubernetes. It is ensuring that EKS has best-in-class integration with the rest of the AWS services, which helps enterprise customers adopt Kubernetes.
Feature image via Pixabay.