Epsagon sponsored this post.
A decade ago, software deployments took place infrequently, mainly due to the complexity of server provisioning. When cloud providers came along with computing services such as virtual machines, remote storage, and virtual networking, an important shift occurred: IT managers realized that these new cloud services could reduce the cost of the underlying hardware and turn ownership of the infrastructure into a shared responsibility between IT infrastructure teams and developers.
This article discusses the evolution of application deployments on the cloud and presents some use cases for selecting managed services on AWS.
If you look at the software architecture of many legacy apps, you will most likely see that they were built with a monolithic approach: a big codebase with different modules, usually backed by a single database as the persistence layer. The deployment strategy for this kind of application was not so different from today's cloud deployments:
- A pipeline for provisioning the server;
- A pipeline that pushed the codebase to the target server;
- A pipeline to talk to the database engine in order to apply database schema changes;
- Both the application and the database placed behind a firewall;
- Scaling performed vertically most of the time.
In fact, this strategy is still utilized for some apps where a monolithic architecture still makes sense. The difference is that today they are mostly deployed on cloud providers, usually on virtual machines (e.g., AWS EC2) or managed compute services, such as AWS Elastic Beanstalk.
When microservice architecture came onto the scene, deployments became far more complex. Thus, the strategy for deployments was affected significantly:
- The provisioning was not only required for a single machine but multiple machines;
- Synchronization between teams was more complex;
- Monitoring and logging became complex due to the number of components;
- Having homogeneous pipelines was hard since every team/service was independent;
- There was not just one but many databases (usually one per service);
- More IT resources were needed: discovery services, load balancers, etc.
In general, more DevOps work and synchronization were required. While it takes a lot of effort to manually provision infrastructure for microservices, many companies still do this in order to meet their business requirements. However, as discussed below, managed services were a welcome arrival, as they significantly reduced a team’s workload for implementations.
Modern Cloud Deployments
Today, the agile era and the high rate of microservices adoption have motivated cloud providers to offer many new services that make implementations easier. They have done this mainly by taking the required server maintenance out of the equation and allowing teams to use these different services with minimal configuration required. These are called managed services.
A managed service is a cloud feature that you can use without having to take care of the underlying hardware's administration. For instance, in the Amazon ecosystem, you will find AWS Fargate, AWS Lambda, Amazon Aurora, Amazon DynamoDB, and Elastic Beanstalk, among others. What do all these services have in common? The service provider, not your organization, is responsible for getting deployments up and running on these platforms.
Modern deployments consist of applications running on a mixture of services and integrations. This mixture will persist, but the tendency will be toward using more of the managed tools and APIs that cloud providers offer. For example, a typical Django application can be deployed on AWS Fargate with an OAuth2 integration for Google APIs (maps, translations, etc.).
Even though managed services are considered great for scaling and ease of use, they come with a higher regular cost and less customization. Still, in comparison with the engineer-hours saved, they are definitely worth it.
The AWS Ecosystem for Managed Services
This section will present some guidelines to help you select different managed services on AWS, based on common infrastructure and app-level requirements.
There are many reasons to containerize applications: portability, isolation, distribution, etc. If it happens that you are migrating from monolith to microservices, you may want to use containers as a first step for splitting components into small independent services. Also, if you are creating an application from scratch via a microservices-based architecture, chances are, you will want to have a cluster of containers that represent different features/services.
Regardless of the reason or use case for containerizing, AWS offers a few managed services that you can use for your deployments:
If you want to go with containers without any need for orchestration and still want to have IAM and other AWS integrations, then AWS Fargate is what you need. What’s great about Fargate is that you don’t have to take care of the EC2 instances in which the containers will be run. Fargate completely frees you from any orchestration work. You create a Docker image, push it to AWS ECR, and then define a Fargate task specifying the image you want to use. By doing this, you can get hundreds of containers serving your application. Go ahead and take a look at the official guide to “Getting Started with Amazon ECS using Fargate” for more information.
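To make the Fargate workflow above more concrete, the sketch below builds a minimal task definition as a plain Python dict, in the shape that `aws ecs register-task-definition --cli-input-json` (or boto3's `register_task_definition`) expects. The family name, container name, and ECR image URI are all hypothetical placeholders.

```python
import json

# Minimal Fargate task definition. All names and the ECR image URI
# below are hypothetical; replace them with your own values.
task_definition = {
    "family": "my-web-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",  # Fargate tasks must use awsvpc networking
    "cpu": "256",             # 0.25 vCPU
    "memory": "512",          # 512 MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```

Saved to a file, this JSON can be registered with `aws ecs register-task-definition --cli-input-json file://task.json`, after which a Fargate service can run as many copies of the task as you need.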
Be aware that Fargate comes with one big disadvantage: vendor lock-in. Once you go with this approach, you are obliged to stick with it since there is basically no equivalent service on other clouds.
What if you would prefer a more standard containerized solution in order to run your application on different cloud providers? If losing some level of integration with a few AWS APIs is not an issue, then AWS EKS is what you need to check out. AWS EKS takes care of the master node for you, but you'll still need to manage the worker nodes. You will also need to define various elements such as your pods and services, which requires a lot of configuration work; this is the downside of deploying apps with AWS EKS. But given the increase in capabilities you get in return, the extra work is worth it.
Another positive of AWS EKS is that moving your cluster to another cloud provider will not be that hard since Kubernetes Deployments are standard, meaning the configuration for your nodes will still work on other clouds.
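Since that portability rests on the standard Kubernetes Deployment object, here is a minimal sketch of one, built as a plain dict. `kubectl` accepts JSON as well as YAML, so this could be applied with `kubectl apply -f deployment.json` on EKS or any other conformant cluster. The app name and image are hypothetical.

```python
import json

# Minimal Kubernetes Deployment manifest; names and image are placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "my-web-app"},
    "spec": {
        "replicas": 3,  # run three identical pods
        "selector": {"matchLabels": {"app": "my-web-app"}},
        "template": {
            "metadata": {"labels": {"app": "my-web-app"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "my-registry/my-web-app:1.0",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

Because nothing in this manifest is AWS-specific, the same file works unchanged on GKE, AKS, or a self-hosted cluster, which is exactly the portability argument made above.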
If your application (or part of it) consists of a set of small functions that can be executed in an isolated fashion and don’t need to be running all the time, AWS Lambda is the way to go. Lambda allows you to easily define small services with zero server provisioning. Other benefits of going with FaaS are:
- You will save money due to the fact that functions will not run all the time;
- It comes with AWS integrations, so, for instance, you can talk to an AWS database with little code;
- An API gateway comes by default, so you can take advantage of it for securing your endpoints;
- It auto scales by default.
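To show how little code "zero server provisioning" can mean in practice, here is a minimal handler sketch for a Lambda behind an API Gateway proxy integration. The `name` query parameter and the greeting logic are purely illustrative.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    Reads an (illustrative) 'name' query-string parameter and returns
    a JSON greeting in the proxy-integration response format.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler is just a function and can be invoked directly:
response = lambda_handler({"queryStringParameters": {"name": "Epsagon"}}, None)
print(response["body"])  # {"message": "Hello, Epsagon!"}
```

Deploying this is a matter of zipping the file and pointing a Lambda function at `lambda_handler`; there is no server, container, or cluster to provision.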
There are many use cases for AWS Lambda. Among others, you can use it for:
- Mobile backends and small web APIs;
- Tasks that react to events on other AWS services;
- Automated jobs.
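The event-driven use case above can be sketched as a handler that reacts to S3 "object created" notifications. The bucket and key names in the sample event are hypothetical; a real job would process each object (resize an image, index a document, and so on) where the comment indicates.

```python
def s3_event_handler(event, context):
    """Sketch of a Lambda triggered by S3 object-created events.

    Extracts the bucket and key of each uploaded object. Real work on
    the object (resizing, indexing, etc.) would happen in the loop.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ...process the object here...
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}

# A trimmed-down version of the notification S3 delivers on upload:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/report.pdf"}}}
    ]
}
print(s3_event_handler(sample_event, None))
```

Wiring this up requires no polling code at all: you attach the function to the bucket's event notifications, and AWS invokes it once per upload.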
Lambda is not for all cases. Even though Lambda supports many languages and allows you to attach decent resources to your functions, you still need to worry about cold starts, runtime limits, vendor lock-in, etc. Remember, functions are typically for small services; also keep in mind that they will only scale as well as the code inside them does.
If you are starting with AWS Lambda, please check this link out for some considerations.
API Services Integration
Apart from containerized solutions and Lambdas, AWS offers many other services for specific use cases, for example:
- Amazon S3: Object storage service;
- Amazon Aurora: Managed relational database engine;
- Amazon CloudFront: CDN service.
It will always be far easier to deal with managed services than to deploy your own solutions. Moreover, don’t limit yourself to AWS services only. There are many cloud companies out there with great offerings. Epsagon, for instance, has an automated solution to help you monitor and troubleshoot distributed cloud applications.
The future of the cloud will most definitely be hybrid and managed. Managed services allow teams to focus more on code and business logic than on infrastructure. And by implementing external API integrations, they can avoid having to reinvent the wheel and instead be able to react faster to market needs. Options are out there. Your task is to analyze them and see how they fit within the cost/benefit parameters your business demands.
Hopefully, this article has given you further background for selecting services in the AWS ecosystem. Take your time evaluating them, and use them wisely.
To learn more about automated monitoring for hybrid cloud applications, request a demo of Epsagon.
Feature image via Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.