Comparison: AWS Fargate vs. Google Cloud Run vs. Azure Container Instances
Having taken a closer look at Amazon Web Services’ Fargate on its Elastic Kubernetes Service (EKS), let’s examine the limitations of the platform, followed by a quick comparison with other serverless container platforms.
5 Key Things to Consider with AWS Fargate on EKS
1) Use Appropriate Namespace or Labels in the Pod Definition
Remember that an EKS cluster with a Fargate profile doesn’t need to have worker nodes. You just need the managed master nodes exposing the control plane. The Fargate profile attached to the EKS cluster associates one or more namespaces and labels with the Fargate control plane.
When a pod definition submitted to EKS doesn’t match the namespace or carry the labels defined in a Fargate profile, Kubernetes never schedules it. It stays stuck in the Pending state until you attach a node group with worker nodes. To make sure that a pod makes its way to Fargate, ensure that it matches the namespace and/or the labels defined in the Fargate profile.
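As a sketch, the eksctl cluster configuration below defines a Fargate profile, and the pod that follows matches its selector so it lands on Fargate. The cluster name, namespace, and label values are illustrative, not prescribed by the platform:

```yaml
# eksctl ClusterConfig with a Fargate profile (names are illustrative)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
fargateProfiles:
  - name: fp-default
    selectors:
      # Pods must match the namespace AND all labels to be scheduled on Fargate
      - namespace: prod
        labels:
          env: fargate
---
# A pod that satisfies the selector above
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: prod
  labels:
    env: fargate
spec:
  containers:
    - name: app
      image: nginx:1.17
```

A pod submitted to any other namespace, or without the `env: fargate` label, would remain Pending unless a node group with worker nodes is attached.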
2) Configure ALB to Access Pods from the Public Internet
As of the current release, Fargate on EKS launches microVMs in a private subnet of a VPC that doesn’t have an internet gateway attached to it. The microVMs running the pods don’t get associated with a public IP, which restricts access to the pods deployed within Fargate. To access the pods from the public internet, create a ClusterIP service associated with the pods, and configure an Application Load Balancer (ALB) with listeners pointing to that service. This is currently the only mechanism to access pods deployed in Fargate.
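A minimal sketch of this wiring, assuming the AWS ALB Ingress Controller is installed on the cluster (service and ingress names are illustrative). The `target-type: ip` annotation matters on Fargate, since there are no worker nodes to expose a NodePort:

```yaml
# ClusterIP service fronting the Fargate pods
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: prod
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# Ingress handled by the AWS ALB Ingress Controller
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip  # route directly to pod IPs (required for Fargate)
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: web-svc
              servicePort: 80
```

Once the controller reconciles the ingress, the ALB’s DNS name serves as the public entry point to the pods.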
3) Leverage the Sidecar Pattern to Emulate Daemonsets
While you can run containers packaged as Kubernetes pods and deployments in Fargate, you cannot launch them as DaemonSets. If you need to run a container on each node, turn it into a sidecar within a pod. This pattern emulates the DaemonSet controller within Fargate.
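The sidecar pattern can be sketched as a pod that bundles the application with the agent that would normally run as a DaemonSet — here a log forwarder, with image and volume names chosen for illustration:

```yaml
# Application pod with a log-forwarding sidecar instead of a DaemonSet
apiVersion: v1
kind: Pod
metadata:
  name: web-with-agent
  namespace: prod
  labels:
    env: fargate
spec:
  containers:
    - name: app
      image: nginx:1.17
      ports:
        - containerPort: 80
      volumeMounts:
        - name: varlog
          mountPath: /var/log/nginx
    - name: log-agent        # sidecar emulating the per-node agent
      image: fluent/fluent-bit:1.3
      volumeMounts:
        - name: varlog
          mountPath: /var/log/nginx
  volumes:
    - name: varlog
      emptyDir: {}           # shared scratch volume between the two containers
```

Every pod scheduled onto a Fargate microVM carries its own copy of the agent, which is effectively what a DaemonSet would have guaranteed per node.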
4) Use Vertical Pod Autoscaling for Dynamic Resource Optimization
Vertical Pod Autoscaler (VPA) frees users from having to maintain up-to-date resource limits and requests for the containers in their pods. When configured, it sets the requests automatically based on observed usage, which allows proper scheduling onto nodes that meet the resource requirements.
By configuring VPA on an EKS cluster with a Fargate profile, the scheduler can match the pod with a microVM that has sufficient CPU and memory. This mechanism ensures that as the pod spec changes at runtime, the pod gets (re)scheduled onto a microVM with the right vCPU and RAM.
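A minimal VPA object might look like the following, assuming the VPA components are installed on the cluster and a deployment named `web` exists (the API version depends on the VPA release you install):

```yaml
# VPA that rewrites resource requests for the pods of the "web" deployment
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
  namespace: prod
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Auto"   # evict and recreate pods with updated requests
```

With `updateMode: "Auto"`, the recreated pods carry revised requests, prompting Fargate to place them on appropriately sized microVMs.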
5) Rely on External Services for Persistence and State Management
As of the current release, pods on Fargate don’t support persistent volumes or persistent volume claims, which means you can only run stateless services on Fargate. To manage state, use AWS services such as S3, DynamoDB, or ElastiCache. Make sure that the pod execution role of the Fargate profile has sufficient permissions to talk to these external services.
In the next section, we will compare AWS Fargate on EKS with Google Cloud Run and Azure Container Instances. Since all of them offer serverless containers and are generally available, it is fair to compare them side by side.
AWS Fargate/EKS versus Google Cloud Run
Google Cloud Run is a serverless container platform available on Google Cloud Platform. It is built on top of Knative Serving to simplify the developer experience.
Cloud Run is available as a stand-alone, fully managed service that doesn’t need an existing GKE cluster. Cloud Run for Anthos is available on GKE or GKE On-prem which has a dependency on Kubernetes. Google built Cloud Run to deliver PaaS-like experience to developers. As soon as a container image hits Cloud Run, it creates a publicly accessible URL.
AWS Fargate/EKS is comparable to Cloud Run for Anthos. Both run in the context of Kubernetes with access to the rest of the objects running within the cluster. Cloud Run doesn’t directly support a Kubernetes pod as a deployable unit, while AWS Fargate can accept a pod definition. Cloud Run supports autoscaling and scale-to-zero, which is a unique value proposition of Knative Serving. For background on Knative Serving, refer to one of my previous articles. AWS Fargate doesn’t support autoscaling and scale-to-zero out of the box, but it can be configured with the Horizontal Pod Autoscaler (HPA) or Vertical Pod Autoscaler (VPA).
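On the Fargate side, autoscaling can be wired up with a standard HPA, assuming the Kubernetes Metrics Server is installed and a deployment named `web` exists (names and thresholds are illustrative):

```yaml
# HPA scaling the "web" deployment between 1 and 10 Fargate pods on CPU usage
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  namespace: prod
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1          # note: no scale-to-zero, unlike Knative Serving
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Each replica added by the HPA lands on its own Fargate microVM, but the floor of one replica remains — scale-to-zero is where Knative-based Cloud Run differs.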
When running in the context of Anthos, Cloud Run provides the default isolation of a Kubernetes pod, whereas the managed Cloud Run service uses gVisor-based isolation.
Cloud Run schedules only one container at a time. If you are running a multi-container pod, you have to launch each container as a separate service.
Unlike AWS Fargate/EKS, Cloud Run takes just a few seconds to deploy a container and expose it on a public URL. It also supports splitting traffic across multiple revisions of the deployment using revision tags.
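Since Cloud Run for Anthos is built on Knative Serving, the revision-based traffic split can be sketched as a Knative Service manifest — service, revision, and image names here are placeholders:

```yaml
# Knative Service splitting traffic between two revisions
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-v2            # name of the revision this template creates
    spec:
      containers:
        - image: gcr.io/my-project/hello:v2
  traffic:
    - revisionName: hello-v1
      percent: 80               # keep most traffic on the stable revision
    - revisionName: hello-v2
      percent: 20
      tag: canary               # tagged revisions also get a dedicated URL
```

The `tag` field gives the canary revision its own addressable URL, so it can be tested directly before shifting the percentages.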
Cloud Run scores high in terms of developer experience. It’s one of the fastest and best serverless container platforms available in the public cloud.
AWS Fargate/EKS versus Azure Container Instances
Microsoft was the first in the industry to launch serverless containers in the public cloud through Azure Container Instances (ACI). The platform schedules the container in a highly optimized, lightweight VM that may optionally be associated with a public IP address. ACI offers an experience and workflow similar to the docker run command.
Like AWS Fargate/EKS, ACI isolates workloads at the VM level, which delivers better security. But ACI cannot accept an existing Kubernetes pod definition; it has its own specification that mimics the pod spec.
ACI has a concept of container groups where it is possible to deploy multiple containers into the same VM. This design is similar to how AWS Fargate/EKS schedules all the containers mentioned in a pod in the same microVM. The ACI container group is modeled around the pod spec that may have multiple containers within a deployment.
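An ACI container group can be described in its native YAML specification and deployed with the az CLI — the group name, region, and images below are illustrative:

```yaml
# ACI container group: two containers sharing one VM, like a pod
apiVersion: '2019-12-01'
location: eastus
name: web-group
properties:
  osType: Linux
  containers:
    - name: app
      properties:
        image: nginx:1.17
        ports:
          - port: 80
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
    - name: log-agent
      properties:
        image: fluent/fluent-bit:1.3
        resources:
          requests:
            cpu: 0.5
            memoryInGB: 0.5
  ipAddress:
    type: Public               # optional public IP for the whole group
    ports:
      - protocol: TCP
        port: 80
```

The group is deployed with `az container create --resource-group <rg> --file group.yaml`; all containers in the group share the VM’s network namespace, just as containers in a pod do.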
Thanks to Virtual Kubelet, a project that bridges the ACI control plane with Kubernetes, it is possible to list ACI as a Kubernetes node. When we run kubectl get nodes on a cluster configured with Virtual Kubelet, ACI shows up as a virtual node, and pods scheduled onto that node run as ACI container groups. Though AWS doesn’t use Virtual Kubelet for Fargate on EKS, it delivers a similar experience.
ACI can be associated with Azure File Share to expose an existing mount point within the container. This integration delivers persistence to ACI instances out of the box.
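In the ACI specification, the file share is declared as a volume and mounted into the container — the share, account, and mount path below are placeholders, and the account key would normally come from a secret store:

```yaml
# Fragment of an ACI container group mounting an Azure File Share
properties:
  containers:
    - name: app
      properties:
        image: nginx:1.17
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
        volumeMounts:
          - name: data
            mountPath: /mnt/data     # files in the share appear here
  volumes:
    - name: data
      azureFile:
        shareName: mydata
        storageAccountName: mystorageacct
        storageAccountKey: <storage-account-key>
  osType: Linux
```

Writes under the mount path survive container restarts, which is the out-of-the-box persistence story that Fargate pods currently lack.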
ACI may also use NVIDIA GPUs to perform AI acceleration which makes it an ideal candidate for running ML inferencing. Fargate doesn’t support GPUs yet.
| Capability / Feature | AWS Fargate | Google Cloud Run | Azure Container Instances |
|---|---|---|---|
| Deployment Unit | Kubernetes Pod | Single Container | ACI Native Spec |
| Persistence | No (External Services) | No | Yes (Azure File Share) |
| Kubernetes Integration | EKS / ECS | GKE (Cloud Run for Anthos) | AKS (Virtual Kubelet) |
| Public Access | ALB | Public URL | Public IP / CNAME |
| In-built Auto Scaling | Yes (EKS + HPA) | Yes (Knative Serving) | Yes (AKS + HPA) |
| Virtual Network Access | Yes (VPC) | No | Yes (VNet) |
| Logging | CloudWatch | Stackdriver | Azure Monitor Logs |
| Revisions / Versioning | No | Yes | No |
While the concept, design, and architecture of AWS Fargate on EKS are elegant, it lacks an intuitive developer experience. Since there is a dependency on an EKS cluster, it takes at least 20 minutes to deploy an existing pod definition on Fargate. When configured from the AWS Console, creating a pod execution role, a private subnet, and the namespace makes the service less productive. Creating an ALB and associating it with the private subnet running the Fargate pods is too much work just to expose a workload on the public internet. Though the latest release of eksctl takes care of the plumbing, you will still need the standard aws CLI or the AWS Console to create the ALB. Switching between eksctl, aws, and kubectl reduces developer productivity.
The promise of a serverless container platform is to deliver a developer experience similar to that of a PaaS. AWS Fargate on EKS requires DevOps to do quite a bit of heavy lifting before developers can deploy the first pod. Amazon may eventually launch a managed Fargate service that makes EKS and ECS completely invisible by exposing only the topmost layer of the stack. Imagine the ability to submit a Kubernetes pod to an environment that doesn’t force you to launch a cluster beforehand.
Both Google Cloud Run and ACI squarely focus on the developer experience by hiding the infrastructure operations. Both of them promise to hand over a URL as soon as the container image is deployed.
AWS Fargate on EKS is a step in the right direction. But it has to improve the developer experience.