Over the past few months, one particularly interesting trend we have been witnessing is the vast and growing number of cloud native applications that end up running on AWS Fargate. Fargate is a serverless computing platform focused on containers. In terms of abstraction of the computing paradigm, Fargate sits between Lambda and EC2: it is lower-level than Lambda, to which users provide pretty much pure code to run, but its focus on containers makes it much higher-level than EC2, where you manage virtual machines. For a number of reasons, often with the common theme of reducing the cost of operations, many users end up electing Fargate as their computing platform of choice.
AWS Fargate is a layer above Elastic Container Service and underneath the recently released Elastic Kubernetes Service on Fargate offering. Fargate orchestrates container-based workloads. Simply put, Fargate allows you to run containers without having to manage the hosts that run them or the container engine that powers them (that is, Docker).
Fargate on ECS allows you to schedule ECS Tasks on top of ECS infrastructure run by AWS: you get most of the functionality of ECS, while giving up control of, and responsibility for, its infrastructure. An ECS Task is a group of containers that share resources like volumes and networking, not unlike Pods in Kubernetes.
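To make the Pod analogy concrete, here is a minimal sketch of a Fargate task definition with two containers sharing a volume. All names and images are illustrative, and actually registering it would additionally require AWS credentials (for example via boto3's `register_task_definition`).

```python
# A minimal sketch of an ECS task definition for Fargate: two containers
# (hypothetical names and images) sharing a volume, much like containers
# in a Kubernetes Pod. Registering it would require AWS credentials, e.g.:
#   boto3.client("ecs").register_task_definition(**TASK_DEFINITION)

TASK_DEFINITION = {
    "family": "web-with-refresher",        # hypothetical family name
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",               # required for Fargate tasks
    "cpu": "256",
    "memory": "512",
    "volumes": [{"name": "shared-data"}],  # shared by both containers
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:stable",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "mountPoints": [
                {"sourceVolume": "shared-data",
                 "containerPath": "/usr/share/nginx/html"},
            ],
        },
        {
            "name": "content-refresher",
            "image": "alpine:3",
            "essential": False,            # task survives if this one exits
            "mountPoints": [
                {"sourceVolume": "shared-data",
                 "containerPath": "/data"},
            ],
        },
    ],
}
```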
ECS Tasks can be run as one-offs, but more often than not, they are grouped in ECS Services that provide fundamental capabilities for cloud native applications like auto-scaling, load-balancing, service discovery and throttling. That is, Fargate has a lot of fundamental functionality for cloud native applications baked right into the platform.
In terms of operations, Fargate users are responsible for the content of the containers, the grouping of containers into ECS Tasks, the definition of ECS Services and ECS Clusters, and the networking thereof. The networking model of Fargate, and the fact that it is so easy to give every container an IP address on the public internet, mean that Fargate adopters need to think much harder about network design. On the other hand, far more fine-grained security groups become possible, defined per task (each ECS Task gets its own elastic network interface) rather than per host.
The observability requirements for Fargate are similar to those of other microservice environments: one needs end-to-end visibility into how requests are served (for example, via distributed tracing), as well as actionable information on the health of your ECS Tasks and ECS Services. As the application spreads over several ECS Services, and Tasks find each other via service discovery and communicate over load balancers and VPCs, it becomes close to impossible to keep an accurate picture of the actual architecture of your system, and to troubleshoot issues, without accurate distributed tracing and the depth of insight it provides.
As far as metrics are concerned, those related to the AWS Fargate runtime are available to some extent inside the containers themselves, which are provided with, among other things, a Docker stats endpoint built into the ECS Task Metadata API. Those metrics, plus more, are also available from CloudWatch Container Insights. As for custom metrics, applications can push them to the metrics store of their choice.
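As a sketch of what that looks like from inside a task, the snippet below fetches per-container Docker stats from the v4 Task Metadata endpoint (AWS injects its base URL via the `ECS_CONTAINER_METADATA_URI_V4` environment variable) and derives a CPU percentage using the standard Docker stats delta formula. Fetching only works when run inside an ECS/Fargate task.

```python
import json
import os
import urllib.request


def task_stats():
    """Fetch Docker stats for all containers in this task from the ECS
    Task Metadata endpoint (v4). Only works inside an ECS/Fargate task,
    where AWS injects ECS_CONTAINER_METADATA_URI_V4 into the environment."""
    base = os.environ["ECS_CONTAINER_METADATA_URI_V4"]
    with urllib.request.urlopen(f"{base}/task/stats") as resp:
        return json.load(resp)


def cpu_percent(stats):
    """Compute a CPU usage percentage from one container's Docker stats,
    using the usual delta between the current and previous sample."""
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"]["cpu_usage"]["total_usage"])
    system_delta = (stats["cpu_stats"]["system_cpu_usage"]
                    - stats["precpu_stats"]["system_cpu_usage"])
    online_cpus = stats["cpu_stats"].get("online_cpus", 1)
    if system_delta <= 0:
        return 0.0
    return (cpu_delta / system_delta) * online_cpus * 100.0
```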
In terms of logging, one can push the logs to CloudWatch or through FireLens, among a few other supported log drivers, or run a sidecar (which costs additional CPU and memory) to push the logs somewhere else.
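For illustration, here are hedged sketches of the per-container `logConfiguration` block in a task definition: one using the `awslogs` driver (CloudWatch Logs) and one using `awsfirelens` to route logs elsewhere through Fluent Bit. The log group name, region and HTTP endpoint are hypothetical.

```python
# Two alternative logConfiguration blocks for a container definition.
# Names, region and endpoint below are hypothetical placeholders.

CLOUDWATCH_LOGGING = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",   # hypothetical log group
        "awslogs-region": "eu-west-1",        # hypothetical region
        "awslogs-stream-prefix": "web",
    },
}

FIRELENS_LOGGING = {
    "logDriver": "awsfirelens",
    "options": {
        # Fluent Bit HTTP output plugin settings; host is hypothetical
        "Name": "http",
        "Host": "logs.example.internal",
        "Port": "443",
        "tls": "on",
    },
}
```

Either dict would be set as the `logConfiguration` key of a container definition like the ones shown earlier; the FireLens variant additionally needs a Fluent Bit router container in the task.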
What Can Fargate Do Well?
So, when should you be using Fargate? The answer is relatively easy: when you have pretty much anything that can run in a container, unless you really want to manage the infrastructure yourself. The same, notably, cannot be said about AWS Lambda, on which people mostly deploy software that was designed to run on Lambda, due to the peculiarities and restrictions of its runtime. But with Fargate, if your software runs on Docker today, chances are that the migration to Fargate will be rather smooth.
When NOT to Use Fargate
We have witnessed cases in which IT or Security policies prevented migration to Fargate. This seems to happen mostly with security policies that have not been revised in a few years and are still assuming virtual machines as the unit of computation. (For the record, while there is theoretically a chance that an antivirus may catch something when running in a container, it has never happened to us or anybody we know.)
We also hear from some customers that they value independence from any particular cloud provider more than they value the convenience and integration provided by a vendor-specific computing platform like Fargate. In many such cases, the end user nowadays tends to go with a managed Kubernetes offering, which reduces vendor lock-in. (Side note: this seems to be precisely the reason for the existence of EKS on Fargate: to offer a Kubernetes-like API, albeit with limitations, that carries a smaller lock-in factor while still providing management of the underpinning infrastructure.)
In one case we have witnessed, the application could run in a Docker container, but only if the kernel was tweaked in very specific ways, which prevented that application from being ported onto Fargate because, well, the kernel is way off-limits. The same lack of control over the kernel also applies to the Docker engine, which prevents you from using, for example, arbitrary log forwarders that skip over CloudWatch unless you run another container (and pay for its computing resources) to push your logs somewhere else.
A noteworthy obstacle to a move to Fargate, especially from EC2, is actually less related to technology and more to people, and to the endangerment of established roles and responsibilities in organizations beginning their cloud native journey. Jaime Dobson’s “Day 2 Problems” blog post explains it better than we ever could:
“Many existing mental models don’t hold and so infighting becomes rampant. Alignment becomes difficult as the temptation of returning to the old ways of working kick in. Fear rises as those whose jobs are at risk become aware of the coming danger.”
The cloud native journey is indeed fraught with challenges.
What Motivates Moving from Lambda to Fargate
Software that can run on Lambda can, with few adjustments, also run on Fargate (the converse, on the other hand, is not true: software needs to be developed in particular ways to run on Lambda). From a technical perspective, the reasons to move off Lambda are fundamentally the following:
- Control over the runtime environment: in Fargate, you provide an entire container image, including a full OS user-space (the kernel itself remains managed by AWS). This gives you far more control over your application and the environment it runs in than Lambda can offer, at least short of writing a custom Lambda runtime. And we have yet to meet anybody who created a custom runtime to run Lambda functions, as it is quite an undertaking, comparable to writing your own buildpack on Heroku or Cloud Foundry.
- Long-running jobs: Lambda is not designed (or priced) for functions that run long. If you foresee batch jobs taking several minutes or hours to complete, you are better off with scheduled tasks on Fargate.
- Easier to avoid cold-starts: while, at the platform level, cold starts in Lambda seem to be a mostly solved problem, avoiding considerable start-up times inside the function itself still requires care and sometimes limits the kind of dependencies and frameworks you can adopt.
- Reduce lock-in: if you can run your application inside a Docker container, you can probably make it run on Fargate with little effort. Lambda, instead, is an environment specific to AWS, and porting your functions to a different serverless offering reportedly requires non-trivial work, especially because Lambda functions tend to be coupled to AWS-specific services; to be fair, the same can be said for pretty much any current serverless platform offered by a cloud provider.
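As an example of the long-running-job case above, a one-off batch task can be launched on Fargate with a single API call. The sketch below uses boto3's `run_task`; the cluster, task definition, subnet and security group IDs are all hypothetical, and the call itself requires AWS credentials, so it is kept inside a function.

```python
# A sketch of launching a one-off batch job as a Fargate task: the kind of
# work taking minutes or hours that would not fit Lambda's time limits.
# All resource names and IDs below are hypothetical placeholders.

RUN_TASK_PARAMS = {
    "cluster": "batch-cluster",            # hypothetical ECS cluster
    "launchType": "FARGATE",
    "taskDefinition": "nightly-report:3",  # hypothetical family:revision
    "count": 1,
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # hypothetical
            "securityGroups": ["sg-0123456789abcdef0"],  # hypothetical
            "assignPublicIp": "DISABLED",
        }
    },
}


def run_batch_job():
    """Launch the job; requires the AWS SDK (boto3), credentials, and
    the hypothetical resources referenced above to actually exist."""
    import boto3  # imported here so the sketch loads without the SDK
    return boto3.client("ecs").run_task(**RUN_TASK_PARAMS)
```

The same parameters can be wired into an EventBridge schedule to run the task periodically, which is the usual replacement for cron-style Lambda triggers.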
Sometimes the aspect of billing is raised when comparing Lambda and Fargate. However, the billing models of Lambda and Fargate, and the software running on the two platforms, are so drastically different that any comparison needs to be based on realistic measurements of real workloads. Nevertheless, chances are that, at a large scale (in terms of requests), you could save money on Fargate using runtimes that scale well under load, for example, Node.js or a long-time favorite of ours: reactive programming in Java.
What Motivates Moving from EC2 to Fargate
To set expectations: if you can use Fargate rather than EC2, you probably should. The labor cost of managing and updating the Virtual Machines, even when running your own AMIs tailored to your needs, is very significant when compared to the agility of containers.
Security is also a factor. Although containers on Fargate also fundamentally contain an OS, the fact that it is so easy and fast to rotate out versions of containers, as opposed to spinning up and tearing down virtual machines, means that in case of security issues, you can react faster with containers than with VMs. Also, you can make your containers immutable and minimize the attack surface with approaches like Distroless images and container-optimized OSes. We work with customers who tell us that their security teams have very long processes for allowing deployments on virtual machines, but most of those practices (like antiviruses, to name one) are not required for containers, and that is a lot of effort spared.
In other words, EC2 is something you probably should use when no other computing platform satisfies your needs. One reason for this can be the amount of control you can exert on EC2 machines. If you must run your own OS, customized just so, then EC2 is the way to go. Or maybe you are running some “exotic” databases by yourself, which is often a necessary evil of lifting-and-shifting applications to the cloud, at least in the first steps. Other cases involving databases are those that, while not being particularly exotic, have extremely high resource requirements in some configurations, needing hundreds of gigabytes or even terabytes of memory on one single host. Another common use case is when you are running your own platform-as-a-service on top of AWS, such as Cloud Foundry or VMware Tanzu.
In this article, we discussed how Fargate, with its Container-as-a-Service approach, seems to sit at a sweet-spot in terms of trade-offs for many organizations. We covered what Fargate is great at, and what could be the reasons for you to rather stay on Lambda or EC2. To summarize our position: you should use the highest-level abstraction you can afford. If your code can run in Lambda, you probably should just do that (but chances are you wrote that software to run on Lambda anyhow). And, at scale, after careful deliberation, you may want to consider moving to Fargate. Otherwise, if your applications can run in a Docker container, go Fargate. Unless you really, really must use EC2.
Amazon Web Services, Cloud Foundry and VMware are sponsors of The New Stack.
Feature image via Pixabay.