Six Security Considerations for Serverless Environments
Many enterprises have adopted functions-as-a-service (FaaS), or serverless, as part of their cloud architectures since AWS Lambda introduced the model in 2014. Since then, other major cloud providers have announced their own serverless offerings, including Azure Functions and Google Cloud Functions.
The rapid adoption of serverless infrastructure is largely due to its ability to offload infrastructure management from application developers to cloud providers. This lets developers save the time and cost previously spent on back-end operations, resulting in more efficient infrastructure utilization. However, the main benefit of serverless, offloading operational duties from developers, also creates one of its biggest risks: a lack of ownership, visibility and security within these environments.
The traditional shared responsibility model states that cloud providers are responsible for security of the cloud, while customers are responsible for security in the cloud. Serverless computing shifts this model, putting most of the security responsibility for these services back in the hands of the cloud provider, since the provider extends the cloud infrastructure and handles the back-end operations. This shift in operational ownership, however, raises demand for dedicated solutions that provide additional security and visibility into these otherwise opaque environments.
Swim, Don’t Sink with Serverless
When developing in a serverless architecture, the change in responsibilities of the developer can be daunting and problematic for some organizations. As serverless continues to enjoy rapid growth and adoption, security needs to remain a key concern, so businesses don’t fall victim to the blind spots these new types of services introduce.
Following the best practices for serverless security (and cloud security, in general) during the implementation stage will help your team swim instead of sink when starting to work with functions. The result is operational compliance and an efficient and safe workload, leaving your teams to focus solely on the fun parts of actually writing code rather than dealing with the boring operational requirements. While the nature of serverless environments is ephemeral and very dynamic, serverless users should remember these best practices to ensure safety in these new environments.
Serverless Security Best Practices
Because the cloud provider is responsible for the compute infrastructure, serverless introduces a set of best practices that developers should be aware of in order to enjoy automated scalability and a secure environment at the same time.
Here are the best practices we recommend:
- Build Function-Level Segmentation Using IAM Policies
Whether or not in-function runtime protection is applied, continuously assessing the privileges associated with a function defines that function's blast radius, and controlling that radius is a must. This requires determining which resources a function needs to access and assigning IAM policies accordingly. These IAM policies let you segment and gate which other resources a function can access, and which operations the function can apply to those resources (such as read, write and delete).
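As a concrete illustration, a least-privilege policy can be generated per function rather than hand-edited. The sketch below (in Python, with a hypothetical table ARN and action list as placeholders) builds a policy document that scopes a single function to one resource instead of a wildcard:

```python
import json

def least_privilege_policy(resource_arn: str, actions: list[str]) -> str:
    """Build a minimal IAM policy document that grants only the named
    actions on a single resource, keeping the blast radius small."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": actions,        # e.g. ["dynamodb:GetItem"]
                "Resource": resource_arn, # scope to one resource, never "*"
            }
        ],
    }
    return json.dumps(policy)

# A read-only policy for a hypothetical orders table:
policy_json = least_privilege_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    ["dynamodb:GetItem", "dynamodb:Query"],
)
```

The generated JSON can then be attached to the function's execution role through your provider's IAM tooling or infrastructure-as-code templates.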
Fully controlling Internet egress traffic from your functions is impossible unless you run them inside a virtual private cloud (VPC). If one of your functions is compromised, there is a good chance an attacker will try to extract sensitive data from it. In serverless environments, it is therefore important to continuously monitor functions as they are deployed in order to detect unusual activity and track the flow of traffic between your networks running on serverless.
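One simple way to monitor functions as they are deployed is to audit their configurations for missing VPC attachment. This sketch assumes configuration dicts shaped like the responses of Lambda's GetFunctionConfiguration API; the function names in the usage example are hypothetical:

```python
def functions_without_vpc(function_configs: list[dict]) -> list[str]:
    """Return the names of functions that are not attached to a VPC
    and therefore have uncontrolled Internet egress."""
    exposed = []
    for cfg in function_configs:
        vpc = cfg.get("VpcConfig") or {}
        # A VPC-attached function carries at least one subnet ID.
        if not vpc.get("SubnetIds"):
            exposed.append(cfg["FunctionName"])
    return exposed

# Example: "mailer" and "cron" would be flagged for review.
configs = [
    {"FunctionName": "checkout", "VpcConfig": {"SubnetIds": ["subnet-1"]}},
    {"FunctionName": "mailer", "VpcConfig": {}},
    {"FunctionName": "cron"},
]
flagged = functions_without_vpc(configs)
```

Running a check like this on every deployment turns VPC attachment from a convention into an enforced policy.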
- Manage Credentials and Secrets Effectively and Safely
Serverless functions consume credentials to invoke other services. When those services are resources hosted by the same cloud provider, IAM roles are the go-to approach for assigning privileges to functions. However, some use cases require long-term secrets for third-party services or cross-account integrations, and maintaining permanent credentials poses a security risk in a serverless environment. To avoid these risks and stay in compliance, all of the credentials within your function code should be temporary. If for some reason your function does require a long-lived secret, encrypt it, and use the cloud provider's key management service to manage, maintain and retrieve secrets automatically.
Each serverless provider offers integrated tools for managing secrets and account access. If the types of secret management tools offered by your serverless environment are not appropriate or applicable to your specific function or task, follow these general best practices when handling secrets manually:
- Secrets should exist solely in memory;
- No secrets should be written to log files or persistent storage;
- For added security, develop code that manages your secrets for you;
- Scan code for accidental commits of secrets.
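The first two practices above, holding secrets only in memory and never writing them to logs or storage, can be sketched as a small in-memory cache. The fetcher callback and TTL below are assumptions; in practice the fetcher would wrap your provider's secrets API:

```python
import time
from typing import Callable

class InMemorySecretCache:
    """Keep secret values only in process memory and refresh them on
    expiry. Values are never logged or written to disk."""

    def __init__(self, fetch: Callable[[str], str], ttl_seconds: float = 300.0):
        self._fetch = fetch   # e.g. a wrapper around the provider's secrets API
        self._ttl = ttl_seconds
        self._cache: dict[str, tuple[str, float]] = {}

    def get(self, name: str) -> str:
        entry = self._cache.get(name)
        now = time.monotonic()
        if entry is None or now >= entry[1]:
            value = self._fetch(name)  # fetched fresh, held in memory only
            self._cache[name] = (value, now + self._ttl)
            return value
        return entry[0]
```

A short TTL keeps the window in which a stolen in-memory value stays valid small, while still avoiding a fetch on every invocation of a warm function.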
- Secure Your VPC
If your serverless environment requires access to a VPC, you should control those environments through the principle of least privilege: a common network-security best practice of assigning users only the minimal level of access essential for them to perform their intended functions and to reach the resources those functions require. Additionally, users should understand that controlling the VPC with the principle of least privilege can affect the way high-level serverless functions connect to and affect their subordinate functions.
- Automate Code Changes and Deployment
Continuous integration/continuous delivery (CI/CD) processes should be built into your serverless architecture to ensure seamless distribution of new code across every function. Automation forces each deployment through well-defined ceremonies, minimizing human error while regulating code deployment. These ceremonies should include application vulnerability scanning, secret scans, static code analysis and pre-flight tests.
- Runtime Anomaly Detection
Whether a pre-production staging environment is sufficient to profile functions and establish baselines, or dynamic profiles must be built in production, enhancing your security defenses with anomaly detection adds another layer to the practices above.
Evaluating anomaly detection engines should start with understanding which signals the engine collects, including full runtime in-function monitoring, cloud provider API access logs (such as CloudTrail) and network access logs (such as VPC flow logs).
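Once a baseline exists, the core of anomaly detection can be as simple as a z-score check on one signal. The sketch below assumes the metric is something like outbound bytes per invocation derived from VPC flow logs; real engines correlate many signals at once:

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float,
                 threshold: float = 3.0) -> bool:
    """Flag `observed` when it deviates from the baseline by more than
    `threshold` standard deviations (a simple z-score test)."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        # A perfectly flat baseline: any change at all is anomalous.
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Baseline of ~100 bytes/invocation; a sudden 500-byte burst is flagged.
baseline = [100.0, 102.0, 98.0, 101.0, 99.0]
alert = is_anomalous(baseline, 500.0)
```

The threshold trades false positives against missed detections; three standard deviations is a common, conservative starting point.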
- Incident and Response Workflow
Integrate your security tool stack with your DevOps workflows. If DevOps and Site Reliability Engineering (SRE) teams are the first response tier for a security incident, ensure that all detection and prevention findings are communicated to DevOps/SecOps channels, in addition to audit trails for compliance mandates.
This practice minimizes mean time to response and resolution by connecting the right stakeholders early in an event and arming them with high-resolution data about the incident.
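Connecting the right stakeholders early can be sketched as a severity-based router that always records an audit trail. The channel names and severity levels here are hypothetical; adapt them to your own tooling:

```python
def route_incident(incident: dict) -> list[str]:
    """Map a detection event to notification channels. Every event is
    recorded to the audit trail for compliance; higher severities also
    page the first response tier."""
    channels = ["audit-trail"]  # compliance record for everything
    severity = incident.get("severity", "low")
    if severity in ("high", "critical"):
        channels += ["secops-pager", "devops-chat"]  # wake the responders
    elif severity == "medium":
        channels.append("devops-chat")
    return channels

# A critical finding pages SecOps and notifies DevOps chat immediately.
targets = route_incident({"severity": "critical", "function": "checkout"})
```

Keeping the routing rules in code, next to the deployment pipeline, makes the escalation path reviewable and testable like any other artifact.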
Go Serverless, Fearlessly
Although it may seem on the surface that the entire responsibility for securing serverless environments rests in the hands of your cloud provider, the adoption of the new serverless shared responsibility model means that previously reasonable assumption is no longer valid. While cloud providers take on much of the security ownership in these environments, big chunks of that responsibility remain in the hands of the customer. Following serverless best practices will better protect developers and security teams and improve your overall security posture.
At Alcide, we just announced a new release of our platform that adds serverless support for AWS Lambda, extending your infrastructure and network visibility and control. Using our platform, whether your AWS Lambda functions run inside your VPC or fully on AWS-hosted infrastructure, you can take back control of your serverless environments and ensure that functions are invoked securely, hand in hand with the rest of the security controls across your cloud infrastructure.