Security in Serverless: What Gets Better, What Gets Worse?
The emerging serverless computing architecture alleviates several server-oriented security risks, but it also requires new threat analysis and prevention, Snyk CEO and co-founder Guy Podjarny asserted at the Serverlessconf conference in Austin recently.
In his presentation, Podjarny broke serverless security threats into three categories: threats that are diminished (but still present) in serverless environments, threats that remain the same, and new risks that come from no longer having to manage servers.
1. Reduced Risks at the Server Level
Vulnerable Operating System Dependencies
With serverless, it is no longer necessary to patch and secure servers independently. Since the Shellshock vulnerability in 2014, some operating systems have introduced automatic patch updates, although many enterprise IT system administrators are still wary of allowing their operating systems to make those decisions themselves. That may mean that some vulnerabilities go unnoticed.
In a serverless environment, that risk is removed. “The majority of successful exploits are because patches have not been updated, which is a management problem, not a technical problem,” Podjarny pointed out.
Solution: Podjarny says to choose a serverless platform that you can trust to maintain and secure its servers and keep patches up to date.
Denial of Service
In serverless, denial-of-service attacks are less likely. “There is no longer a long-standing server to take down,” said Podjarny. While circular queries or high numbers of API calls may be removed as a denial-of-service threat, Podjarny does warn that in serverless workflows, function executions still occur, and it may be possible for an outsider to make concurrent execution calls against your workflow. That can create a “billing DoS,” where you are required to pay for every execution that has occurred in your serverless system.
Solution: See the permissions section below for how to better manage this billing DoS risk. Check the concurrent execution limits of your serverless provider and ask: if that maximum were reached, is the resulting bill a cost you can live with?
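As a back-of-envelope sketch of that question, the worst-case bill can be estimated from the concurrency limit, function duration, and memory size. All prices and workload numbers below are illustrative assumptions, not provider quotes.

```python
# Back-of-envelope estimate of a worst-case "billing DoS": an attacker keeps
# every concurrency slot busy around the clock for a month.
# All numbers below are illustrative assumptions, not provider quotes.

def worst_case_monthly_cost(concurrency_limit, avg_duration_s, memory_gb,
                            price_per_million_requests, price_per_gb_second):
    """Cost if every concurrency slot stays busy for a 30-day month."""
    seconds_per_month = 30 * 24 * 3600
    invocations = concurrency_limit * seconds_per_month / avg_duration_s
    request_cost = invocations / 1_000_000 * price_per_million_requests
    compute_cost = (concurrency_limit * seconds_per_month
                    * memory_gb * price_per_gb_second)
    return request_cost + compute_cost

# Example: 1,000 concurrent executions of a 200 ms, 128 MB function.
cost = worst_case_monthly_cost(
    concurrency_limit=1000,
    avg_duration_s=0.2,
    memory_gb=0.125,
    price_per_million_requests=0.20,  # assumed request price
    price_per_gb_second=0.0000167,    # assumed GB-second price
)
print(f"Worst-case monthly bill: ${cost:,.0f}")
```

Even modest functions at a default-sized concurrency limit can produce a bill in the thousands of dollars, which is why checking (and lowering) the limit matters.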
Long-lived Compromised Servers
Podjarny explained that in the majority of cases, a security attack is not an isolated event. What usually happens, he said, is that an attacker installs an agent on a server and leaves it there. Once it has gone unnoticed for a while, it introduces more malicious code to advance another step, and so on until the organization is compromised.
In serverless, this is less likely to happen because the architecture is stateless. After each instance of compute processing, the environment is reset, so the agent cannot take hold and keep advancing.
However, one of the latent risks in serverless is startup cost: when a process starts up, a container needs to be created, the process run, and then the container shut down again. Services like OpenWhisk are trying to reduce that overhead by reusing containers. “The first invocation of a Docker container from cold takes about 300 milliseconds,” said Jason McGee, vice president of IBM Cloud Platform. “The next time you do an invocation in OpenWhisk, we can reuse that existing, pre-warmed container.” This is the sort of situation where, Podjarny pointed out, those agents could keep advancing.
Solution: Check whether your serverless platform reuses containers and what security analysis is done on any pre-warmed containers.
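The risk of container reuse can be illustrated with a minimal sketch: in a reused container, module-level state survives across invocations, so anything an attacker plants there survives too. The handler and variable names here are illustrative, not any platform's API.

```python
# Minimal sketch of why pre-warmed container reuse matters for security:
# module-level state persists across invocations in the same container,
# so state an attacker plants there persists the same way.

invocation_count = 0   # lives as long as the container does
planted_state = []     # attacker-controlled state would persist identically

def handler(event, context=None):
    """Illustrative handler; not any specific platform's signature."""
    global invocation_count
    invocation_count += 1
    planted_state.append(event.get("payload"))
    # Only a cold start sees invocation_count == 1.
    return {"cold_start": invocation_count == 1, "seen": len(planted_state)}

first = handler({"payload": "a"})
second = handler({"payload": "b"})  # same "container": state has persisted
```

A truly stateless platform would reset `planted_state` between invocations; a pre-warmed one does not, which is why the security analysis of reused containers matters.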
2. Serverless Security Risks: Same as With Servers
So while serverless dramatically reduces some top-level threats, there are some areas where the situation is the same, with or without servers.
Best security practices in managing permissions policies, securing data at rest and assessing vulnerabilities at the application layer are the same with or without servers.
Managing Permissions Policies
“Who can invoke your functions? Who can access code for your functions? If your function was compromised, what could it do?” asked Podjarny. This is where the “billing DoS” risk is greatest. In cases where the service stays up, costs from a massive execution of functions may be passed on to the serverless architect. In cases where the service is shut down, it may be because of unpaid billing, or because the excessive executions reached the upper limit of a paid account.
For example, in AWS, it may be tempting to place a single permissions policy at the API Gateway level. Podjarny suggested instead that policies be set at the function level.
Solution: “Each policy should have small permissions that explain what can be done,” suggested Podjarny. While permission creep is fairly common, it is best to start with narrow and granular permission policies for each function, although, as Podjarny pointed out, “few people apply this level of best practice.”
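A narrow, function-level policy might look like the following, here expressed as a Python dict in AWS IAM policy form. The table name, region and account ID are hypothetical placeholders.

```python
import json

# Sketch of a narrow, function-level policy: this one function may only
# read a single DynamoDB table, rather than inheriting one broad policy
# placed at the gateway. Resource ARN is a hypothetical placeholder.
read_orders_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}

print(json.dumps(read_orders_policy, indent=2))
```

The point is the shape, not the specific service: each function's policy names only the actions and resources that function genuinely needs, never a wildcard.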
Securing Data at Rest
In a serverless architecture, state is stored outside the server. In other words, sensitive data is still stored on a machine somewhere. Again, granular permissions can be set around who can access this data and for what reasons.
Solution: Podjarny recommended encrypting all sensitive data and using separate database credentials per function. While ideally you would also monitor which functions are accessing what data, the tools available to do this are “fairly light” at present in the serverless ecosystem, said Podjarny.
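Per-function credentials can be sketched with environment variables scoped to each function, so compromising one function does not expose the credentials the others use. The variable and function names below are illustrative; in practice the platform or a key management service would inject the values.

```python
import os

# Sketch: each function receives its own database credential through its own
# environment variable. Names and values here are illustrative only; in
# production the platform or a secrets manager injects them at deploy time.
os.environ["ORDERS_FN_DB_PASSWORD"] = "example-only"
os.environ["BILLING_FN_DB_PASSWORD"] = "example-only-2"

def db_credential(function_name):
    """Look up the credential scoped to one function; fail loudly if absent."""
    var = f"{function_name.upper()}_DB_PASSWORD"
    value = os.environ.get(var)
    if value is None:
        raise RuntimeError(f"no credential configured for {function_name}")
    return value
```

With this layout, a leaked credential can be revoked for one function without rotating every function's database access at once.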
Vulnerabilities in Code and App Dependencies
Perhaps one of the biggest threats in serverless environments comes from app dependencies and from code, both the proprietary code that the serverless architect writes and the code contained in third-party services (see below).
Serverless doesn’t protect the application layer, Podjarny cautioned, so introducing security into continuous integration and deployment workflows is essential. Products like Podjarny’s Snyk offer tools that manage the risk of app dependencies. Snyk is building a database of all known vulnerabilities in dependencies, drawing on community research and its own work in surfacing unknown vulnerabilities. Its workflow is to look for vulnerabilities in packages, go through a responsible disclosure process with the vendor or code creator, and then release the details of the vulnerabilities so that its users are alerted to risks in their serverless workflows. Snyk began by addressing vulnerable dependencies during development, and much of this experience is being brought to the serverless realm.
“From a security perspective, we blindly consume open source code,” said Podjarny. “There are no tools to track what you are using and there are problems with it, you can’t understand what happens when this code is running in production.” Podjarny says that in most cases, certain practices have evolved around managing server dependencies through the operating system, but in serverless, that custodian no longer exists.
Solution: Create an inventory of all of your app dependencies and use a monitoring tool to maintain awareness of any known vulnerabilities.
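The inventory-and-monitor approach can be sketched as a simple audit: compare pinned dependency versions against advisory data. The package names, versions and advisory below are hypothetical stand-ins for the database a service such as Snyk maintains.

```python
# Sketch of a dependency inventory check: compare an app's pinned packages
# against a set of known-vulnerable releases. The advisory data here is
# hypothetical; in practice a vulnerability database supplies it.

dependencies = {           # name -> pinned version, e.g. from a lockfile
    "left-pad": "1.1.0",
    "express": "4.15.2",
}

known_vulnerable = {       # hypothetical advisory data for illustration
    ("express", "4.15.2"): "example advisory: vulnerable transitive dependency",
}

def audit(deps, advisories):
    """Return only the (name, version) pairs that match a known advisory."""
    return {name: advisories[(name, version)]
            for name, version in deps.items()
            if (name, version) in advisories}

findings = audit(dependencies, known_vulnerable)
for name, advisory in findings.items():
    print(f"{name}: {advisory}")
```

Run on every build, a check like this turns "blindly consumed" open source into an inventory you can actually act on when an advisory lands.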
3. Increased Security Risks in Serverless
Of course, Podjarny points out, if attackers have one avenue cut off they don’t just shrug their shoulders and move on. “When you eliminate an attack vector, then attackers look for another way to attack,” he said. So while serverless doesn’t necessarily create new security problems, it can elevate some areas of concern. Here are a few:
Third-Party Services
In a serverless workflow, you are more likely to be using a range of third-party services. When using third-party APIs and functions in a serverless design pattern, Podjarny suggests asking: “What data are you sharing and how well is that third-party service protecting it? Is data in transit secured?” Podjarny says that while it is common to use an API key, for example, to access a third-party service, it is less common to consider how that service authenticates to your system. “Are you validating the HTTPS certificate?” Podjarny asks.
Solution: Podjarny says you must ask whether you trust the responses received from third-party calls; you need to check whether vulnerabilities can enter your workflows via third parties as a backdoor. “Always use a key management service and not a GitHub repo to manage your secrets and your API keys,” Podjarny recommended.
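In Python, certificate validation is a matter of not turning the defaults off: the standard library's default SSL context already requires a certificate that chains to a trusted CA and matches the hostname. The URL and environment variable name below are illustrative.

```python
import os
import ssl

# Python's default TLS context already validates the third party's
# certificate; the anti-pattern to catch in review is code that relaxes
# these settings "for convenience".
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # cert must chain to a trusted CA
assert ctx.check_hostname                    # and match the host you dialed

# Pass the context explicitly when calling the third party (URL illustrative):
# urllib.request.urlopen("https://api.example.com/v1/orders", context=ctx)

# And keep the API key out of the repo: read it from the environment, where a
# key management service would inject it. Variable name is illustrative.
api_key = os.environ.get("THIRD_PARTY_API_KEY")  # None here; injected in prod
```

Grep for `check_hostname = False` or `CERT_NONE` in a codebase; either one means the HTTPS certificate is not actually being validated.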
No Perimeter
“Serverless is more granular and more flexible, but this creates more opportunities for attackers to do things you didn’t intend to allow with that additional flexibility,” said Podjarny. Basically, in a serverless architecture, there is no perimeter.
Ideally, there would be tools to monitor both individual functions and full serverless workflows, but tooling in this space hasn’t fully matured yet, said Podjarny.
Solution: Podjarny says that, as with permissions, it is necessary to test every function independently for security flaws. Use permission policies to limit access to each function. There is always permission creep introduced as systems grow and handle edge cases, but don’t expand permissions beyond what is actually needed.
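Testing a function independently can be as simple as asserting its authorization behavior directly, without deploying the whole workflow. The handler, scope names and event shape below are hypothetical.

```python
# Sketch of testing one function in isolation for a security flaw:
# the handler (hypothetical) must reject callers lacking the right scope,
# and that is asserted directly, with no deployment involved.

def delete_order(event):
    """Illustrative handler: deletes an order only for authorized callers."""
    scopes = event.get("caller_scopes", [])
    if "orders:delete" not in scopes:
        return {"status": 403, "body": "forbidden"}
    return {"status": 200, "body": f"deleted {event['order_id']}"}

allowed = delete_order({"caller_scopes": ["orders:delete"], "order_id": "42"})
denied = delete_order({"caller_scopes": ["orders:read"], "order_id": "42"})
```

Because each function is small and stateless, this kind of per-function security test is cheap to write and belongs in the same CI pipeline that deploys the function.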
Function Bloat
With the excitement of serverless, it is possible to deploy easily and at minimal cost. So why not create a lot of functions? Podjarny says this often happens in the zeal of getting started with serverless. But each function introduces risk: it is a new opportunity for an attacker to target your system. This is still a management overhead, even in the promised land of “NoOps.” Meanwhile, the tooling to manage thousands of functions has not evolved.
Solution: Consider carefully any new function before you deploy it. Create separate networks for groups of functions. Track what you have deployed. Podjarny also suggested a chaos monkey-esque approach of reducing the current permissions of functions from time to time to see whether that impacts the system; it may point to areas where permission levels have been set too broadly. Again, keeping an inventory of dependencies and monitoring them is essential in managing bloat.
Concerns that serverless is any less secure than other architectural approaches are unfounded, but the threats are real. In many cases, current best practices, especially around using permission policies, still stand, but serverless overall means the attack vector moves substantially from the server level down to the application layer. And, unfortunately, that often means a wider attack surface to manage. Last year at the first Serverlessconf, Ben Kehoe pointed out that security providers have a strong market opportunity in the serverless ecosystem. And this year, as Guy Podjarny showed, that is still very much the case.