A 2020 Guide to Computing on Amazon Web Services
Thundra sponsored this post.
As cloud vendors vie for market share, Amazon Web Services seems to be winning the battle for cloud computing in terms of annual revenue.
However, choosing a vendor to move into the cloud with is just half the battle. The other half involves deciding which service best fits your specific application. Your choices range all the way from Infrastructure-as-a-Service (IaaS) to Functions-as-a-Service (FaaS). Ultimately, it comes down to whether you want to go down the route of serverless applications, or make use of containers — or even implement a hybrid architecture.
If these three services were to be put on a spectrum, where one end of the spectrum was containers and the opposite end was serverless, AWS Fargate would sit in between them. This is because AWS Fargate is a serverless container service, and can also be described as Containers-as-a-Service (CaaS).
So let us dive into these services, to understand what they have to offer.
AWS Containers in the Cloud
In 2014, following the success of Kubernetes, AWS launched its own container management service called Amazon Elastic Container Service (Amazon ECS), allowing you to orchestrate containers running on your Amazon Elastic Compute Cloud (Amazon EC2) instances. Since then, interest in running containers on EC2 has steadily grown. So what was the big deal with ECS and EC2?
Firstly, ECS is simply a container orchestration service. It organizes your containers into tasks, where a single task is one or more containers that are deployed together onto EC2 instances with Docker installed. Each of those EC2 instances runs the ECS Container Agent, which communicates with the AWS backend: the agent continuously polls the ECS API to check which containers need to be started or stopped to meet the task requirements. Several EC2 instances form a cluster, which runs within ECS auto-scaling groups, with scaling rules that you define. All of this seems alluring, but the problem is that you still have to manage each EC2 instance, and this is where the difficulties begin.
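To make the task concept concrete, the sketch below builds a minimal task definition of the kind ECS accepts. The family name, image, and resource values are illustrative, not taken from any real deployment.

```python
import json

# A minimal ECS task definition (illustrative values throughout).
# A task definition describes one or more containers that ECS schedules
# together onto EC2 instances in the cluster.
task_definition = {
    "family": "web-app",                  # hypothetical task family name
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "cpu": 256,                   # CPU units (1024 = one vCPU)
            "memory": 512,                # hard memory limit in MiB
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
            "essential": True,
        }
    ],
}

# In practice this JSON would be registered with the ECS API, e.g.:
#   aws ecs register-task-definition --cli-input-json file://task-def.json
print(json.dumps(task_definition, indent=2))
```

The ECS scheduler then places instances of this task onto whichever cluster machines have spare capacity, which is exactly where the management problems discussed next come in.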
EC2 orchestration, just like any other container orchestration, is a daunting task, and this gives AWS Lambda the upper hand in this respect. Even though ECS makes it easier to manage tasks, you still have to operate at the instance level: you have to handle scaling, monitoring, security, networking, and other operational concerns of the EC2 instances yourself. This management burden not only makes using containers an operational chore, but also leaves them more vulnerable in terms of security and less reliable in terms of performance.
For example, even though you may have specified fitting rules for the ECS auto-scaling groups, automatically increasing or decreasing the number of tasks as needed, the underlying EC2 instances may not have enough memory or CPU provisioned to run them. Additionally, there is no clear metric for scaling EC2 clusters, and no proper solution when task placement fails because the cluster lacks resources. Another problem is scaling down EC2 clusters without killing any running tasks.
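The task-placement failure described above boils down to a bin-packing check: a task can only be scheduled if some instance in the cluster has enough spare CPU and memory. A minimal Python sketch of that check, with made-up instance sizes:

```python
# Sketch of the placement problem: scaling rules can add tasks, but a task
# only runs if an instance has enough free CPU units and memory (MiB).
# Cluster and task sizes below are illustrative.

def can_place(task, instances):
    """Return True if any instance has enough spare CPU and memory."""
    return any(
        inst["free_cpu"] >= task["cpu"] and inst["free_mem"] >= task["memory"]
        for inst in instances
    )

cluster = [
    {"free_cpu": 512, "free_mem": 1024},
    {"free_cpu": 128, "free_mem": 4096},
]

print(can_place({"cpu": 256, "memory": 512}, cluster))   # fits on the first instance
print(can_place({"cpu": 1024, "memory": 512}, cluster))  # no instance has enough CPU
```

When the second case occurs, the auto-scaling group has asked for more tasks than the cluster can physically hold, and nothing in ECS alone provisions the missing instance capacity for you.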
These operational burdens are not unique to AWS EC2; they are seen across all of the container services out there. With all of this work, when would you have the actual time to concentrate on your business logic? This is why AWS introduced Fargate, bringing you salvation by abstracting away all of those container orchestration responsibilities.
AWS Fargate offers Containers-as-a-Service (CaaS), in contrast to the Infrastructure-as-a-Service (IaaS) model of EC2. That means the container environment has already been set up, including the networking, security, and most importantly the scaling. These major operational burdens are abstracted away, providing you with the ability to run containers directly on the cloud. With this service, you simply specify the resources for each task and let Fargate work its magic under the hood.
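Specifying resources per task amounts to picking a CPU/memory pairing. The sketch below encodes a small, illustrative subset of the pairings AWS documents for Fargate; consult the Fargate documentation for the authoritative table.

```python
# An illustrative subset of Fargate task-size pairings (not the full table).
# CPU is in units (1024 = one vCPU); memory is in MiB.
VALID_COMBINATIONS = {
    256: [512, 1024, 2048],
    512: [1024, 2048, 3072, 4096],
    1024: [2048, 3072, 4096, 5120, 6144, 7168, 8192],
}

def is_valid_task_size(cpu, memory):
    """Check whether a requested task size is one Fargate would accept."""
    return memory in VALID_COMBINATIONS.get(cpu, [])

print(is_valid_task_size(256, 512))    # a quarter vCPU with 0.5 GB: accepted
print(is_valid_task_size(256, 8192))   # too much memory for a quarter vCPU
```

Everything below that declaration, from instance provisioning to patching, is Fargate's problem rather than yours.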
At the end of the day, each Fargate task comes with its own dedicated Elastic Network Interface (ENI) to allow communication between tasks, while containers within the same task communicate via localhost. Moreover, the management of these tasks is again done by ECS; in fact, Fargate is defined as a compute engine for ECS, providing a different way of running tasks, and this is the defining characteristic linking Fargate to container services. However, this is only one side of Fargate; there is an entire serverless side too.
AWS Resources on Demand with Serverless
So, AWS Fargate lets you run containers directly in the cloud. But how? Well, this is where the serverless part of the service comes into play. AWS Fargate can be considered a subset of AWS’ serverless compute services. That means instead of going to the other extreme of the spectrum, you can now take advantage of serverless without having to leave the flexibility of containers.
Terming Fargate a serverless compute service also dispels one of the greatest misconceptions about the concept. Many believe that serverless equates to Functions-as-a-Service (FaaS). This misconstrued association is due to the success of AWS Lambda, whose functions became synonymous with the serverless concept.
But a service can be defined as serverless if it possesses the following three features:
- Server management is abstracted to a vendor;
- Pay-as-you-go model, where you only pay for what you use; and
- Automatically scalable and highly available.
Considering the above-mentioned properties, AWS Fargate is truly serverless. This is because, as already stated, with CaaS all of the underlying architecture up to the container level is abstracted to the vendor. Furthermore, similar to AWS Lambda, Fargate follows a pay-as-you-go model. The difference, though, is that Lambda billing is calculated per invocation and per GB-second of execution time, whereas with Fargate you are charged for the vCPU and memory your tasks consume, per second. Finally, the distinct and most crucial characteristic justifying Fargate’s serverless tag is its auto-scalability. Like AWS Lambda, Fargate is scalable and highly available; this is expected, since both services have AWS Firecracker running under the hood.
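To make the two billing models concrete, here is a rough cost sketch. The rates are placeholders for illustration, not current AWS prices; check the AWS pricing pages for real values.

```python
# Illustrative comparison of the two billing models (rates are assumed
# placeholders, NOT current AWS prices).

FARGATE_VCPU_PER_SECOND = 0.04048 / 3600   # assumed $/vCPU-second
FARGATE_GB_PER_SECOND = 0.004445 / 3600    # assumed $/GB-second

def fargate_cost(vcpus, memory_gb, seconds):
    """Fargate bills for the vCPU and memory a task holds, per second."""
    return seconds * (vcpus * FARGATE_VCPU_PER_SECOND
                      + memory_gb * FARGATE_GB_PER_SECOND)

LAMBDA_PER_REQUEST = 0.20 / 1_000_000      # assumed $/invocation
LAMBDA_GB_SECOND = 0.0000166667            # assumed $/GB-second of execution

def lambda_cost(invocations, avg_seconds, memory_gb):
    """Lambda bills per request plus GB-seconds of execution time."""
    return (invocations * LAMBDA_PER_REQUEST
            + invocations * avg_seconds * memory_gb * LAMBDA_GB_SECOND)

# One Fargate task (1 vCPU, 2 GB) running for an hour:
print(f"Fargate: ${fargate_cost(1, 2, 3600):.4f}")
# A million Lambda invocations, 200 ms each at 512 MB:
print(f"Lambda:  ${lambda_cost(1_000_000, 0.2, 0.5):.4f}")
```

The structural difference matters more than the numbers: Fargate charges for provisioned capacity while a task is running, whereas Lambda charges only for the milliseconds a function actually executes.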
Officially released at AWS re:Invent 2018, Firecracker is a powerful virtualization technology that uses the Linux Kernel-based Virtual Machine (KVM). Designed to be secure and extremely lightweight, it has allowed AWS to enhance the serverless experience for both its Lambda and Fargate services. According to AWS Chief Evangelist Jeff Barr, Firecracker is “what a virtual machine would look like if it was designed for today’s world of containers and functions.”
Hence, with the support of Firecracker, AWS has brought CaaS into the fold of serverless computing services. The myth of serverless only meaning FaaS is now rightly being challenged, ushering in a new era of solutions into the domain. However, this was long overdue as the limitations of AWS Lambda have been acting as a deterrent for many to move their architectures towards serverless. This is where Fargate has some benefits over Lambda services.
The cloud has dominated the conversation of how software systems are now being built. Vendors such as AWS, Azure and Google Cloud are producing solutions to facilitate this development in the cloud, providing a myriad of services to meet the needs of different applications. AWS alone provides three primary services across the computing spectrum of IaaS to FaaS. The question then is, which service best fits your use case and business model? This requires comparing the different services to understand their desirability as compute solutions in different use cases.
This is what we at Thundra have explored, by analyzing and comparing the different AWS services. Our team has written about the various comparisons that can be made between the three services and documented their findings in Thundra’s whitepaper on the topic, which you can download now.
Amazon Web Services is a sponsor of The New Stack.
Feature image via Pixabay.