Love Serverless, Remember Security
Serverless offers a great way to save IT resources, and thus money, by delegating the cost of server hosting and management to a third-party provider.
However, no organization wants to set and then forget security management for serverless data once it is in a service provider's hands. This is especially true when relying on multiple cloud providers for serverless platforms shared by many different sources within an organization.
“Security is a key concern for any next-generation applications, and cannot be an afterthought,” especially for serverless platforms, said Holger Mueller, an analyst for Constellation Research. “As such, security is always a challenge, and is already hard enough to achieve in a single physical system.”
Know Thy Server
Serverless, despite its name, has everything to do with servers. The difference is that a third party performs all of the associated management tasks for you. These third parties typically offer “functions as a service” (FaaS) that run on containers. “Backend as a service” (BaaS) also falls under the serverless banner.
Among the available serverless platforms, Lambda by Amazon Web Services (AWS) is one of the leading offerings. Alternatives include Cloud Functions (Google), IronWorker (Iron.io), Manta Functions (Joyent), OpenWhisk (IBM), PubNub BLOCKS (PubNub) and Serverless Docker (Docker).
Migrating serverless data to one of these or other service providers’ platforms means that someone outside of your organization will, of course, manage many of the security tasks. The third party, among other things, is responsible for downloading and installing security patches and updates. Serverless providers also typically offer logging and monitoring services as part of their packages, which Amazon provides under the umbrella of Amazon CloudWatch.
The good news is that, thanks to the containerization of the platform, organizations also benefit from the protections stateless applications offer through their ephemeral nature. Malicious code, for example, cannot run while the containers are not in use and the applications are powered down. Denial-of-service (DoS) attacks are also especially hard to pull off when the capacity the third party offers scales up automatically based on demand.
However, at the end of the day, you still must rely on a third party for a lot of security management that remains out of your organization’s control.
“Security gets trickier when it comes to cloud and serverless deployments, as organizations have to rely on third parties to provide the security needed for the next-gen applications. As long as the security capabilities of a serverless platform are a fit, this can even be a time and cost saver. But if it doesn’t fit or must be changed in the future, this all becomes a massive issue,” Mueller said. “This challenge gets even bigger when it comes to multicloud deployment of next-gen apps, as each cloud provides different level, features and capabilities to secure data and applications. Multicloud deployments can quickly become a nightmare.”
The logging services third-party serverless providers generally offer do provide some data protection, but they can be lacking in many cases.
“No organization is immune to security vulnerability, which makes logging such issues a key to preventing disaster,” Albert Qian, product marketing manager for the Cloud Academy, said. “But while software solutions do have the capability of shutting down a security threat at the source, this is beyond the scope of many serverless platforms, which are primarily focused on logging more general state changes.”
The granular access developers have to their code is also paradoxically associated with another serverless security limitation, Qian said.
“Serverless means increased granularity over code, which allows developers to state what data is allowed to be accessed. However, increased controls and user access management become a hindrance at scale,” Qian said. “Such tasks cost labor time and can take away from important functions involving innovation. They also likely increase such risks depending on the vertical, especially in areas such as healthcare and financial services.”
DevOps teams will naturally take migration to a serverless platform seriously, and the process will invariably involve ways to mitigate risk. On the security front, there are specific boxes to check. Among them, Rani Osnat, vice president of marketing for Aqua Security, recommends tight monitoring of potentially vulnerable code from the outset, as soon as the migration to a serverless platform is made.
“The best approach is to prevent vulnerable functions in the first place by scanning them for known vulnerabilities as part of the pipeline,” Osnat said. “However, it may be the case that functions that were already deemed safe and deployed become vulnerable due to a newly discovered vulnerability. It’s thus recommended to scan them on an ongoing basis, such as once a day, [in order to remain up to date].”
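As a rough sketch of such a pipeline gate, a build step could parse a dependency scanner's findings and fail when anything serious turns up. The findings format below is an assumption for illustration, not the schema of any particular scanner; real tools each emit their own JSON.

```python
# Hypothetical pipeline gate: block the build if a dependency scan
# reports any finding at or above a severity threshold.
# The findings format here is an assumption, not any real scanner's schema.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def should_block_deploy(findings, threshold="high"):
    """Return True if any finding meets or exceeds the threshold severity."""
    min_rank = SEVERITY_RANK[threshold]
    return any(
        SEVERITY_RANK.get(f.get("severity", "low"), 1) >= min_rank
        for f in findings
    )

if __name__ == "__main__":
    findings = [
        {"id": "CVE-2018-0001", "severity": "medium"},
        {"id": "CVE-2018-0002", "severity": "critical"},
    ]
    if should_block_deploy(findings):
        print("Blocking deploy: vulnerable dependency detected")
```

Re-running the same gate on a schedule, say daily as Osnat suggests, catches functions that were clean at deploy time but match a newly published vulnerability.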
It is also a good idea to implement pipeline controls that prevent the deployment of vulnerable code and data, or at the very least, to set up automated alerts sent to the DevOps team, such as via Slack or Jira, when and if vulnerabilities are detected, Osnat said.
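The alerting half of that advice might look like the following sketch, which formats a summary of findings and posts it to a Slack incoming webhook (the webhook URL is a placeholder you would generate in your own Slack workspace; the findings format is again an illustrative assumption):

```python
import json
import urllib.request

def build_alert(findings):
    """Format a Slack-style message summarizing detected vulnerabilities.
    Slack incoming webhooks accept a JSON payload with a "text" field."""
    lines = [f"{f['id']} ({f['severity']})" for f in findings]
    return {"text": "Vulnerabilities detected:\n" + "\n".join(lines)}

def send_alert(webhook_url, findings):
    """POST the alert to the webhook; returns the HTTP status code."""
    payload = json.dumps(build_alert(findings)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (placeholder URL, do not use as-is):
# send_alert("https://hooks.slack.com/services/T000/B000/XXXX", findings)
```

A Jira integration would follow the same shape, swapping the webhook POST for a call to Jira's issue-creation endpoint.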
“In order to block functions in the runtime environment, you will most likely need to inject security controls as code into your functions, since it’s not currently possible to use cloud provider application programming interfaces (APIs) for that and there’s no real standardization around FaaS runtimes,” Osnat said.
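One way to inject such a control as code, sketched here for a Python AWS Lambda-style handler, is a decorator that rejects malformed events before the business logic runs. The allowed-field whitelist and handler body are illustrative assumptions, not a standard API:

```python
import functools

# Illustrative whitelist of event fields this function expects.
ALLOWED_FIELDS = {"user_id", "action"}

def validate_event(handler):
    """Injected security control: reject events carrying unexpected
    fields before the wrapped handler ever runs."""
    @functools.wraps(handler)
    def wrapper(event, context):
        unexpected = set(event) - ALLOWED_FIELDS
        if unexpected:
            return {"statusCode": 400,
                    "body": f"rejected fields: {sorted(unexpected)}"}
        return handler(event, context)
    return wrapper

@validate_event
def handler(event, context):
    # Placeholder business logic.
    return {"statusCode": 200, "body": f"ok: {event['action']}"}
```

Because the control ships inside the function's own code, it works the same way on any FaaS runtime, which is exactly the portability Osnat points to in the absence of standardized provider APIs.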
Unfortunately, standardized tools also do not yet exist commercially that will allow for detailed behavioral monitoring except for code injection mentioned above, Osnat said. However, it is possible to “prevent certain types of abuse simply by performance monitoring your functions,” Osnat said. “This is because behaviors like extended activity of a function (which would typically run for a short time), or abnormal CPU usage, may indicate that something has gone awry.”
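A minimal version of that kind of performance check, assuming you already collect per-invocation durations from your provider's monitoring service, could flag runs that stray far from the historical baseline:

```python
import statistics

def is_anomalous(history_ms, latest_ms, sigma=3.0):
    """Crude baseline check: flag a run whose duration exceeds the
    historical mean by more than `sigma` standard deviations."""
    mean = statistics.mean(history_ms)
    stdev = statistics.stdev(history_ms)
    return latest_ms > mean + sigma * stdev

# Typical run times for a short-lived function, in milliseconds.
history = [120, 130, 115, 125, 122, 128]

print(is_anomalous(history, 124))   # a normal run
print(is_anomalous(history, 5000))  # an extended run worth investigating
```

This is deliberately simplistic; the point, per Osnat, is that an unusually long-running invocation or abnormal CPU usage is itself a signal worth alerting on, even without behavioral monitoring tools.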
The Hacker Challenge
Legions of hackers, including those who want to steal data and those who just probe platforms for testing or even for fun, are invariably seeking ways serverless data might be compromised. As mentioned above, serverless platforms still run on servers and thus can have the same kinds of vulnerabilities associated with on-premises server management. The end result is that serverless security addresses many of the same underlying concerns as newer platforms, such as Kubernetes, as well as traditional bare-metal servers managed in a data center environment.
“The way modern applications are built often includes dependencies on other libraries, which themselves depend on additional libraries,” Qian said. “Unknown and known vulnerabilities in server libraries [will also always] attract hackers, who will always seek to inflict damage and cause harm, including data breaches,” whether it is traditional servers or serverless platforms that are left exposed.
Aqua Security is a sponsor of The New Stack.