
AWS Fargate Through the Lens of Kubernetes

13 Dec 2019 9:21am, by Janakiram MSV

This is the second part of Janakiram MSV's four-part series examining the evolution of automated container services on Amazon Web Services. Read part one here; check back in the weeks to come for future installments.

Amazon Web Services released Fargate in 2017 to simplify the workflow involved in running containerized workloads. Originally launched for Amazon Elastic Container Service (ECS), Fargate has since been extended to Amazon Elastic Kubernetes Service (EKS), enabling Kubernetes developers and users to run containers in a serverless, nodeless environment.

While AWS Fargate is an abstraction layer, the actual orchestration is done by ECS. The key difference between plain vanilla ECS and Fargate is the way the EC2 instances are exposed and managed. With Fargate, you never see the underlying EC2 instances, while ECS launches the instances in an Amazon Virtual Private Cloud (VPC) within your account.

At the core of Fargate is the RunTask API that takes the specification and schedules the task on an EC2 instance. The specification contains the image name, CPU shares, memory, environment variables, entry point, and the command-line arguments.
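Concretely, such a specification resembles the payload below. This is a minimal sketch: the family name, image URI, and resource values are illustrative placeholders, not taken from the article.

```python
# Minimal sketch of a Fargate task specification. The fields mirror the
# image, CPU, memory, environment, entry point, and command arguments
# described above; all names and values are illustrative.
task_definition = {
    "family": "web-app",                  # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                         # CPU shares (0.25 vCPU)
    "memory": "512",                      # memory in MiB
    "networkMode": "awsvpc",              # required for Fargate tasks
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "environment": [{"name": "STAGE", "value": "prod"}],
            "entryPoint": ["python"],
            "command": ["app.py", "--port", "80"],
            "portMappings": [{"containerPort": 80}],
        }
    ],
}

# With AWS credentials configured, a payload like this could be registered
# via boto3: boto3.client("ecs").register_task_definition(**task_definition)
```
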

Once the task is scheduled, the CreateService API is invoked to run and maintain the desired number of tasks. When the number of tasks running in a service drops below the desired count, the scheduler creates another copy of the task within the specified cluster.
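This reconciliation behavior can be sketched as a simple control-loop calculation. This is an illustrative model of the logic described above, not the actual ECS scheduler implementation.

```python
def tasks_to_launch(desired_count: int, running_tasks: list) -> int:
    """Return how many replacement tasks the scheduler should create so
    the service returns to its desired count. Illustrative model only."""
    return max(0, desired_count - len(running_tasks))

# A service that wants 3 tasks but has lost one gets one copy recreated:
print(tasks_to_launch(3, ["task-a", "task-b"]))  # 1
```
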

An Application Load Balancer (ALB) can be associated with the service to route the traffic to the desired port.

Just as a PaaS hands over a URL after deployment, Fargate configuration ends with a publicly accessible ALB CNAME that's used to access the workload.

Here is a high-level summary of the steps involved in deploying a container image in Fargate:

  1. Push the Docker image to Amazon Elastic Container Registry (ECR).
  2. Create a task definition based on the above image with the desired CPU, memory, and port configuration.
  3. Create a Fargate cluster associated with a VPC and subnet. Note that the cluster will not run EC2 instances but is used for routing the traffic to the workload.
  4. Launch an ALB and point the listener to the container port.
  5. Finally, create a service definition with desired task count and associate it with the ALB.
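The steps above map onto a fixed sequence of AWS API actions. The sketch below lists that sequence for orientation; the action names are the standard IAM-style identifiers, and with boto3 each entry would become a call such as `ecs.create_cluster(...)`.

```python
# The deployment workflow above, expressed as the ECS/ECR/ELB API action
# each step maps to. A sketch for orientation only.
deployment_steps = [
    ("push image",           "ecr:PutImage"),
    ("register task",        "ecs:RegisterTaskDefinition"),
    ("create cluster",       "ecs:CreateCluster"),
    ("create load balancer", "elasticloadbalancing:CreateLoadBalancer"),
    ("create service",       "ecs:CreateService"),
]

for step, api in deployment_steps:
    print(f"{step:22s}-> {api}")
```
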

At this point, it is pretty clear that a Fargate task closely resembles a Kubernetes pod, while the service maps to a replica set or a deployment in Kubernetes. Similar to a Kubernetes deployment, an existing Fargate service can be scaled out or in through the UpdateService API.

Fargate on ECS Architecture

Both Kubernetes and ECS are mature orchestration engines that deal with the lifecycle of containerized workloads. Similar to Kubernetes master nodes, ECS has a control plane that handles the orchestration. The worker nodes of Kubernetes are comparable to the data plane of ECS that runs EC2 instances.

Let’s take a closer look at the data plane which is the workhorse of Fargate.

For Fargate, AWS pre-provisions a fleet of EC2 instances within a dedicated VPC that is not accessible to customers. Since launching an EC2 instance just in time takes too long, the fleet acts as a hot standby. When a task definition hits the control plane, an EC2 instance that matches the spec is handpicked to schedule the containers. Amazon ensures that the pool is large enough to run scheduled tasks. To overcome the limit on the number of Elastic Network Interfaces that can be attached within a subnet, AWS may create additional VPCs for the data plane.

Each EC2 instance launched in the Fargate data plane runs Amazon Linux 2 with the Docker runtime and an agent that manages two-way communication with the control plane. This agent is responsible for pulling images from the registry and calling the Docker APIs to manage the lifecycle of each container defined in the task.

By now, it is clear that each EC2 instance in the data plane closely resembles a Kubernetes worker node. The agent running in an instance does exactly what the kubelet does on a worker node.

When it comes to the control plane, it is not very different from the Kubernetes architecture. Let's take a closer look at it.

The AWS Console, SDK, and CLI talk to an API endpoint to invoke the RunTask API, which is exposed by the frontend service. This endpoint is load-balanced and highly available. Every client primarily talks to this endpoint to manage the lifecycle of a workload. This component is also responsible for authenticating and authorizing clients.

The frontend service is comparable to the API server component of Kubernetes. Kubectl primarily talks to this endpoint which is also responsible for the access control of the cluster.

Once a task definition is submitted via the frontend service, it goes to the cluster manager which is responsible for managing the desired state of the cluster and tasks. The cluster manager will check with the capacity manager to ensure that an instance is available to schedule the task. Once the desired capacity is reserved and the capacity manager returns the pointer to the instance in the data plane, the workflow moves to the data plane.
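The capacity-reservation step can be modeled as a simple matching function: scan the pre-provisioned fleet for an instance with enough free resources, reserve the capacity, and hand back a pointer to it. This is an illustrative model of the behavior described above, not AWS's implementation; the instance IDs and resource figures are made up.

```python
from typing import Optional

def reserve_instance(fleet: list[dict], cpu: int, memory: int) -> Optional[str]:
    """Pick the first pre-provisioned instance with enough free CPU and
    memory for the task, reserve that capacity, and return its ID.
    Returns None when nothing fits (AWS would then grow the pool).
    Illustrative model of the capacity manager described above."""
    for instance in fleet:
        if instance["free_cpu"] >= cpu and instance["free_memory"] >= memory:
            instance["free_cpu"] -= cpu
            instance["free_memory"] -= memory
            return instance["id"]
    return None

fleet = [
    {"id": "i-0aaa", "free_cpu": 256,  "free_memory": 512},
    {"id": "i-0bbb", "free_cpu": 1024, "free_memory": 2048},
]
print(reserve_instance(fleet, cpu=512, memory=1024))  # i-0bbb
```
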

The cluster manager now talks to the Fargate agent running inside the chosen EC2 instance within the data plane to schedule the tasks and scale them based on the desired state configuration.

The state of running tasks, services, and clusters is centrally maintained in the state database which acts as the single source of truth for the control plane. Each Fargate agent periodically reports the state of the tasks and services to the cluster manager which gets updated in the state database.
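That reporting path can be sketched as agents pushing task states into a shared store. This is a toy model of the flow just described: a plain dict stands in for the state database, and the instance and task IDs are hypothetical.

```python
# Toy model of state reporting: each Fargate agent periodically reports
# its task states to the cluster manager, which persists them in the
# state database (here, a plain dict standing in for the real store).
state_db: dict[str, str] = {}

def report_state(instance_id: str, task_states: dict[str, str]) -> None:
    """Cluster manager handler: merge an agent's report into the DB."""
    for task_id, status in task_states.items():
        state_db[f"{instance_id}/{task_id}"] = status

# An agent on instance i-0aaa reports the state of its two tasks:
report_state("i-0aaa", {"task-1": "RUNNING", "task-2": "STOPPED"})
print(state_db["i-0aaa/task-1"])  # RUNNING
```
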

The cluster manager is modeled around the Kubernetes controller manager. Through deployments, daemonsets, statefulsets, and other controllers, this component ensures that the desired state is maintained by the cluster. The state database in Kubernetes is etcd, a distributed key/value store that serves as the cluster's source of truth.

Let’s switch the context to Fargate/ECS. The final component of the control plane is the capacity manager which is responsible for managing the fleet of EC2 instances. It also makes the decisions for scheduling the tasks on instances that match the task definition specification. It launches additional instances when the capacity is low and recycles them when tasks are deleted.

The capacity manager service of Fargate is comparable to the scheduler component of Kubernetes, which is responsible for watching newly created pods that have no node assigned and selecting a node for them to run on. It makes scheduling decisions based on hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.

Below is the complete mapping of Fargate/ECS terminology to Kubernetes:

  1. Task → Pod
  2. Service → Deployment / ReplicaSet
  3. Frontend service → API server
  4. Cluster manager → Controller manager
  5. Capacity manager → Scheduler
  6. State database → etcd
  7. Fargate agent → kubelet
  8. Data plane EC2 instance → Worker node

In the next part, we will explore the integration of Fargate with EKS. Stay tuned.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.

AWS is a sponsor of The New Stack.

Feature image via Pixabay.
