Twistlock sponsored this post.
Serverless functions and containers are two of the hottest topics in the IT world today. They’re also two technologies that have a lot in common — after all, both are ways to deploy code inside isolated, discrete environments. They are by no means identical technologies, but in the abstract, they function in similar ways.
There is much confusion about how best practices and security management differ between serverless and containers. Among the issues to consider, you must decide how your architecture strategy should change when you deal with serverless functions as opposed to containers.
This article answers those questions by comparing and contrasting serverless and containers. We’ll provide an overview of what these two technologies have in common and explain how deployment, management and security strategies for serverless workloads and containerized workloads compare.
What Is Serverless, What Are Containers and What Do They Have in Common?
A detailed definition of serverless computing and containers is beyond the scope of this article. But here are quick definitions:
- Serverless computing refers to an architecture in which code is executed on-demand. Serverless workloads are typically in the cloud, but on-premises serverless platforms exist, too;
- Containers provide portable environments for hosting an application, or parts of an application. The most common container platform today is Docker, although the containerization concept dates back to the introduction of the chroot call to Unix in the late 1970s.
While serverless functions and containers are designed to meet different needs and are deployed using different tools, they have a lot in common:
- They allow you to deploy finite pieces of code and are therefore well suited for microservices architectures;
- They are easy to deploy across distributed architectures. For that reason, you commonly see them being used in the cloud;
- Serverless functions and containers start quite quickly (usually within a few seconds);
- Both rely heavily on APIs to coordinate their integration with external resources;
- Neither typically includes built-in persistent storage; instead, both rely on external resources for persistent storage needs;
- They are frequently used to build immutable infrastructure (although strictly speaking, not all serverless or containerized architectures are necessarily immutable).
The list could go on, but these are the essential traits that containers and serverless functions share.
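The statelessness point is worth making concrete. Below is a minimal Python sketch of a serverless-style handler: the function keeps no state of its own between invocations and instead persists everything through an external store. The `KeyValueStore` class here is a hypothetical stand-in for a real external service such as S3, DynamoDB or Redis.

```python
# Hypothetical stand-in for an external store (S3, DynamoDB, Redis, etc.).
# In a real deployment this would be an API client, not an in-memory dict.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def get(self, key, default=0):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value


def handler(event, store):
    """Serverless-style handler: stateless itself, persists via the store."""
    count = store.get("visits") + 1
    store.put("visits", count)
    return {"visits": count}


# Each invocation reads its state from the external store, so any copy of
# the function (or container) can serve any request.
store = KeyValueStore()
handler({}, store)
handler({}, store)
result = handler({}, store)
```

Because the handler itself holds nothing between calls, the platform is free to spin up or tear down as many copies as demand requires — which is exactly why both serverless functions and containers lean on external resources for persistence.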
Managing and Securing Serverless vs. Containers
Given the similarities described above, you might think that the strategy you use for managing and securing serverless functions can be employed for containers, too. You’d be right — to an extent.
Following are the key components of a software management and security strategy that apply to containers as well as serverless functions:
- Dynamic baselining. In both a containerized environment and a serverless one, there is no such thing as “normal.” Instead, the number of containers or serverless functions running at a given time and the level of communication between them fluctuates constantly. That is why it’s critical to leverage monitoring and security tools that support dynamic baselines — meaning they can adjust automatically to recognize anomalous behavior, even in environments that are constantly changing;
- Third-party dependency management. It’s common for both containers and serverless functions to import third-party code when they run. For that reason, managing and securing code from upstream sources is critical in both contexts. That means knowing where the code comes from and gaining early awareness of any stability or security problems associated with it so that you can fix the issues before they cause problems;
- Access control. Although serverless functions and containers run inside environments that are relatively isolated from each other and the host server, that isolation is not absolute. A serverless function or container that experiences a performance problem or security breach could affect other resources in undesirable ways. That’s why it’s critical to take advantage of access-control systems to lock down which resources your functions and containers can access. You don’t want a coding flaw or security breach inside your serverless function or containers to lead to massive consumption of cloud resources, for example, or to crash another container or server;
- API testing and security. Since APIs are so important in the context of both containers and serverless, testing and securing APIs is critical for both types of environments.
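The dynamic-baselining idea above can be sketched in a few lines of Python. This is an illustrative rolling z-score check, not a production detector; the window size and threshold are arbitrary assumptions, and real tools baseline many signals at once (container counts, call patterns, network traffic), not a single metric.

```python
from collections import deque
from statistics import mean, stdev


class DynamicBaseline:
    """Keeps a rolling window of recent observations and flags outliers.

    As the environment changes, old samples fall out of the window, so the
    notion of "normal" adjusts automatically rather than staying fixed.
    """

    def __init__(self, window=50, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Record a new sample; return True if it looks anomalous."""
        if len(self.samples) >= 2:
            mu, sigma = mean(self.samples), stdev(self.samples)
            anomalous = sigma > 0 and abs(value - mu) > self.threshold * sigma
        else:
            anomalous = False  # not enough history to judge yet
        self.samples.append(value)
        return anomalous


# Feed in a metric such as requests per second: steady values build the
# baseline, and a sudden spike stands out against the rolling window.
baseline = DynamicBaseline(window=10, threshold=3.0)
for rate in [10, 11, 9, 10, 10, 11, 9, 10]:
    baseline.observe(rate)
spike_flagged = baseline.observe(100)
```

The design point is the bounded `deque`: because old samples age out, the baseline keeps pace with an environment where the number of running functions or containers fluctuates constantly.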
In other respects, however, serverless and containers require fundamentally different management and security techniques:
- Managing and securing the host environment. With serverless, end-users don’t really need to worry about (or typically have much control over) the host server and operating system on which their functions run. (That’s why it’s called serverless, after all.) In contrast, when you use containers, it’s critical to ensure that your containers themselves, the Docker environment and the host operating system are stable and secure;
- Resource consumption. The types of workloads that are deployed using serverless functions tend to consume large amounts of resources for short spans of time. What this means from a management and security perspective is that avoiding unnecessary resource consumption or execution time for serverless functions is very important if you want to keep your computing bill manageable. Efficiency is important with containers, too, of course, but not quite as much, given that containerized applications or services are usually designed to run for longer periods of time and they may not consume resources constantly;
- Cloud frameworks. Although serverless functions can be deployed on-premises in certain cases, in most situations today, serverless workloads run in a public cloud using a service like AWS Lambda or Azure Functions. That means that the number of tools available for managing and securing those functions is somewhat limited. You are stuck with the tools offered by your cloud vendor (which are usually limited in functionality), plus third-party tools that are compatible with the cloud you’re using. Containers can pose the same challenge when you use a cloud-based Containers-as-a-Service platform, but it’s more common to see containers deployed on generic cloud infrastructure or on-premises, where toolset compatibility is less restrictive.
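As a rough illustration of why execution time matters so much for the serverless bill: cloud function pricing is typically proportional to allocated memory times billed duration (often expressed in GB-seconds). The sketch below uses an assumed price per GB-second for illustration only; real rates vary by provider, region and tier, so check your provider's current pricing.

```python
def estimate_invocation_cost(memory_mb, duration_ms,
                             price_per_gb_second=0.0000166667):
    """Rough serverless cost model: billed memory (GB) x duration (s).

    The default price is an assumption for illustration, not a quoted
    rate from any specific provider.
    """
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * price_per_gb_second


# Halving execution time halves the compute charge, which is why trimming
# unnecessary work pays off directly for serverless workloads.
fast = estimate_invocation_cost(memory_mb=512, duration_ms=100)
slow = estimate_invocation_cost(memory_mb=512, duration_ms=200)
```

Multiply a per-invocation figure like this by millions of invocations per month and the incentive becomes clear: with serverless, efficiency improvements translate into savings invocation by invocation, whereas a long-running container is billed for the infrastructure underneath it regardless of how busy it is.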
The Bottom Line
In short, containers and serverless are similar in several key respects and the strategies you use to manage them and keep them secure should be similar, too. However, there are some very important differences when it comes to managing and securing certain dimensions of a serverless or containerized workload, such as the extent of the responsibility you bear for the host environment and the tools you can use.
In a simple world, your container and serverless strategies could be identical, but in the real world, you have to factor these variations in when you make a plan for keeping your serverless functions and containers lean, mean and secure.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.