This is the third part of Janakiram MSV's four-part series examining the evolution of automated container services on Amazon Web Services. Read part one and part two; check back later this week for the final installment.
In the last part of the series, I compared Amazon Web Services' Fargate/Elastic Container Service (ECS) with Kubernetes. In this part, we will take a closer look at the way Amazon Elastic Kubernetes Service (EKS) is extended to support Fargate. I will also explain how service discovery works between Fargate and EKS.
Before we get into the details of Fargate integration with EKS, let me revisit the design of Fargate, which delivers serverless container capabilities to both ECS and EKS.
Amazon architected Fargate as an independent control plane that can be exposed via multiple interfaces. Instead of positioning Fargate as a separate service, the product team made it a launch type for ECS. That decision also let the team extend the ECS terminology of task and service to Fargate.
So far, ECS has been the only interface to deal with Fargate. But with Amazon EKS gaining momentum, the product team is now extending the Kubernetes control plane as an interface for Fargate. This integration between Fargate and EKS will enable Kubernetes users to transform a standard pod definition into a Fargate deployment.
As shown in the illustration above, Fargate becomes the core building block of AWS's serverless container platform. An ECS task or a Kubernetes pod can easily find its way to the Fargate data plane.
The Fargate team at AWS has done an impressive job in extending Kubernetes to support the Fargate control plane. Let’s understand how it works.
First, Amazon EKS gained a set of controllers that talk to the Fargate control plane. When a pod targeting a specific namespace, or annotated with custom labels, hits EKS, Kubernetes uses the custom controllers to hand off its lifecycle to Fargate.
Behind the scenes, EKS uses standard Kubernetes admission controller webhooks to handle custom scheduling of the pods. Once a pod is delegated to Fargate, the responsibility of managing its lifecycle no longer rests with Kubernetes. The Kubernetes control plane only reports the current state of the pod, passing the CRUD operations on the object to the Fargate control plane.
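The effect of this mutation is visible on a scheduled pod. Below is an illustrative sketch of what a pod matching a Fargate profile looks like after admission; the pod name and image are my placeholders, and the key detail is the scheduler assignment that routes the pod away from the default kube-scheduler:

```yaml
# Illustrative view of a pod after EKS's mutating admission webhook
# has processed it. Because the namespace matches a Fargate profile,
# the webhook assigns the pod to the Fargate scheduler instead of
# the default kube-scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app              # placeholder name
  namespace: default          # matches a Fargate profile selector
spec:
  schedulerName: fargate-scheduler   # injected by the webhook
  containers:
  - name: demo-app
    image: nginx:1.17         # placeholder image
```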
Since resource scheduling is handled by Fargate, it is possible to launch an EKS cluster with no worker nodes at all, one that acts purely as an interface to Fargate. Interestingly, each pod scheduled within a dedicated micro-VM shows up as a new node of the cluster. Each Fargate micro-VM, masquerading as a Kubernetes worker node, runs a kubelet that talks to the Kubernetes master nodes like any other worker node. This design closely resembles Microsoft's approach of integrating Azure Container Instances with Azure Kubernetes Service through the Virtual Kubelet project.
In summary, the Fargate controllers integrated with EKS translate the pod's YAML definition, submitted via kubectl, into a Fargate task.
A Look at the Fargate Profile
The key link between Kubernetes and Fargate is the Fargate profile, which can be created either during the provisioning of the cluster (through the eksctl CLI) or added at a later point.
The Fargate profile contains the essential elements that associate Kubernetes with the Fargate control plane. The profile contains the following elements:
- Pod Execution Role: Since the pod eventually turns into an EC2 micro-VM, we need to pass a role that the instance can assume to make calls to services such as ECR. Without this role, the Fargate agent/kubelet cannot talk to the AWS universe.
- Subnets: Even though the Fargate data plane runs in a hidden, private VPC, a subnet from the customer VPC is needed to route the inbound and outbound traffic. At this time, pods running on Fargate are not assigned public IP addresses, so only private subnets are allowed.
- Selectors: An entire namespace or a set of labels within Kubernetes may be associated with the Fargate control plane. Any pod that targets the designated namespace, or carries the matching labels, signals EKS to turn it into a Fargate deployment.
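These three elements can be captured declaratively in an eksctl cluster configuration file. The sketch below shows the general shape; the cluster name, region, role ARN, and subnet IDs are placeholders:

```yaml
# Sketch of an eksctl ClusterConfig with a Fargate profile.
# All identifiers below are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: fargate-demo
  region: us-west-2

fargateProfiles:
  - name: fp-default
    # Role the Fargate micro-VM assumes to reach services such as ECR
    podExecutionRoleARN: arn:aws:iam::111122223333:role/FargatePodExecutionRole
    # Private subnets from the customer VPC for routing pod traffic
    subnets:
      - subnet-0aaa1111bbb2222cc
    # Pods targeting these namespaces are handed off to Fargate
    selectors:
      - namespace: default
      - namespace: kube-system
```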
From a Kubernetes Pod to a Fargate Instance
One of the advantages of using the pod spec for Fargate is the ability to map the required resources to an EC2 instance type.
For example, when a pod spec like the one shown below is deployed to Fargate via EKS, it gets assigned to an EC2 micro-VM with a 1 vCPU and 2GB configuration.

```yaml
# Reconstructed pod spec; the resource values are chosen to match
# the 1 vCPU / 2GB configuration described above, and the image is
# a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: fg-eks-demo
spec:
  containers:
  - name: fg-eks-demo-container
    image: nginx:1.17
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
```
If a pod has multiple containers with resource requirements explicitly defined, the Fargate scheduler rounds up the aggregate CPU and memory requirements defined under the limits section of all the containers in the pod, and then chooses the right EC2 instance type to place the task.
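To make the rounding concrete, here is a small Python sketch that aggregates per-container limits and picks the smallest configuration that fits. The configuration table is a simplified subset of Fargate's published vCPU/memory combinations, and the function name is my own:

```python
# Simplified subset of Fargate vCPU/memory combinations, as
# (vCPU, memory in GB), sorted smallest-first. The real service
# offers finer-grained memory steps within each vCPU tier.
FARGATE_CONFIGS = [
    (0.25, 0.5), (0.25, 1), (0.25, 2),
    (0.5, 1), (0.5, 2), (0.5, 4),
    (1, 2), (1, 4), (1, 8),
    (2, 4), (2, 8), (2, 16),
    (4, 8), (4, 16), (4, 30),
]

def pick_config(containers):
    """Aggregate CPU/memory limits across a pod's containers and
    round up to the smallest configuration that fits them all."""
    total_cpu = sum(c["cpu"] for c in containers)
    total_mem = sum(c["memory_gb"] for c in containers)
    for cpu, mem in FARGATE_CONFIGS:
        if cpu >= total_cpu and mem >= total_mem:
            return (cpu, mem)
    raise ValueError("pod exceeds the largest supported configuration")

# A pod with two containers: 0.5 vCPU / 1GB and 0.25 vCPU / 0.5GB.
# The aggregate (0.75 vCPU, 1.5GB) rounds up to the 1 vCPU / 2GB tier.
print(pick_config([{"cpu": 0.5, "memory_gb": 1},
                   {"cpu": 0.25, "memory_gb": 0.5}]))  # -> (1, 2)
```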
Service Discovery in EKS / Fargate
Assuming that the security groups are configured properly to allow traffic between the node groups and the subnet chosen for the Fargate profile, CoreDNS can facilitate DNS and service discovery across the Kubernetes and Fargate deployments.
I launched an EKS cluster with the default Fargate profile added by the eksctl CLI. Later, I added a managed node group with two worker nodes.
The eksctl utility will automatically configure the default namespace and the kube-system namespace for Fargate. Take a look at the fp-default profile created by eksctl.
Within the kube-system namespace, CoreDNS pods are scheduled to enable name resolution and service discovery. The first two nodes, whose names start with fargate-ip-xxx, represent the Fargate instances running CoreDNS.
I then created a new namespace called demo that is not part of the Fargate profile, and deployed an Nginx pod and service within it.
So, at this point, we have a standard Nginx deployment exposed as a ClusterIP service running within the demo namespace.
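The deployment and service use nothing beyond standard Kubernetes manifests. A minimal sketch of what I deployed, with the image tag and labels being my own choices:

```yaml
# Standard Nginx deployment and ClusterIP service in the demo
# namespace, which is outside the Fargate profile and therefore
# scheduled on the managed node group.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: demo
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```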
Let’s now launch a pod in the default namespace which will get translated into a Fargate instance. This pod will run a Busybox container with network utilities such as nslookup and curl.
```shell
kubectl run curl --image=radial/busyboxplus:curl -i --tty
```
Within a few seconds, the pod moves into the running state and the node count increases, showing that Fargate has assigned the pod to a new instance.
The first node that’s been up for 51 seconds is running the Busybox pod.
Now, let's see if the Busybox pod (default namespace) running within Fargate can access the Nginx pod (demo namespace) running in Kubernetes.
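From inside the Busybox pod, the service can be reached through its cluster DNS name, since CoreDNS resolves services across namespaces. The commands below sketch the check, using the service and namespace names created above (they assume a live cluster and a shell inside the pod):

```shell
# Run from the shell of the Busybox pod in the default namespace.
nslookup nginx.demo.svc.cluster.local   # CoreDNS resolves the ClusterIP
curl -s http://nginx.demo               # fetches the Nginx welcome page
```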
It clearly shows that the Fargate instances are able to talk to the pods and services running natively in Kubernetes.
What I absolutely liked about this approach is the fact that I can use standard Kubernetes primitives such as deployments and services without ever changing the specifications. Based on the target namespace and labels, EKS will figure out where to run my workload. We must appreciate the AWS Container Services team for this elegant design.
This pattern opens up many interesting opportunities. For example, I can run an HA database cluster in EKS as a statefulset while running the web and API frontend in Fargate. In one of my previous tutorials, I demonstrated how to run a stateful database backed by Portworx on a standard GKE cluster while running the frontend on Google Cloud Run. We can easily extend that use case to EKS and Fargate.
It should also be possible to access ECS services launched in the same VPC from Kubernetes. This capability combined with App Mesh makes it possible to deploy containers on AWS without worrying about the control plane primitives exposed via ECS and EKS.
This interoperable, control plane-agnostic deployment pattern makes it cheaper and more efficient to run containerized production workloads on AWS.
In the final part of this series, I will discuss the limitations of EKS/Fargate, with a brief comparison with Google Cloud Run and Microsoft Azure Container Instances. Stay tuned!
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.
AWS is a sponsor of The New Stack.
Feature image via Pixabay.