Twistlock sponsored this post.
Just over 30 years ago, virtualization was only available to those with mainframes and large minicomputers, and security concerns were purely physical. Twenty years ago, VMware was releasing its first product, and network perimeter security was in its infancy, relying on firewalls. Twelve years ago, AWS launched, and network security in the cloud became a concern. Five years ago, containers went mainstream thanks to Docker, and host security came into focus. Today, with the growth of serverless computing, application-level security has finally come under the full scrutiny that the compute and network layers have been living with for years.
With application, compute and network security all being audited, security concerns are now visible to both management and clients through reports like SOC 2 Type II. Given this increased transparency, security professionals are key to making sure the assets deployed to production have a solid security profile. The size of that profile can grow drastically depending on the type of deployment being used.
That’s why it’s important to understand the security nuances of the different emerging deployment technologies, namely containers, serverless computing and virtual machines. Below, we compare and contrast the security aspects of each.
Serverless Security

First up, let’s address serverless security, since a serverless app is typically pure code that executes a single function, hence the name function-as-a-service. The most common security problems in a serverless app occur regardless of the platform you deploy it on.
Secure coding best practices are the first line of defense: return only the data that is absolutely required to process the request, and have the app use service accounts with only the access needed to do their job. Any vulnerability that does slip through can leak data far beyond the scope of the serverless app itself, which can quickly become a publicity nightmare.
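The first of those practices, returning only the data required, can be sketched as a simple allow-list filter over whatever a data store hands back. The record and field names below are hypothetical, chosen only for illustration:

```python
# Hypothetical user record as it might come back from a data store.
RECORD = {
    "id": 42,
    "email": "dev@example.com",
    "password_hash": "n0t-f0r-cl13nts",
    "api_token": "s3cr3t",
}

# Only the fields the caller actually needs to process the request.
ALLOWED_FIELDS = {"id", "email"}

def to_response(record: dict) -> dict:
    """Strip everything except the explicitly allowed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

print(to_response(RECORD))  # {'id': 42, 'email': 'dev@example.com'}
```

An explicit allow-list fails safe: a new sensitive column added to the data store later stays out of responses until someone deliberately opts it in.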
The other main area of concern is the third-party libraries included in the app to provide extra functionality and save the development team time. These range from libraries that validate a phone number or postal code to client libraries, such as JDBC drivers, needed to connect to an external PostgreSQL database. Without a scanning tool that self-updates and routinely scans your built artifacts, staying on top of every third-party library used within an organization, and watching all the various vulnerability announcement lists, is a huge manual effort.
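At its core, that kind of scan is a lookup of pinned dependency versions against an advisory database. The sketch below uses invented package names and a hard-coded advisory table purely for illustration; a real scanner pulls its data from a self-updating vulnerability feed:

```python
# Hypothetical advisory feed: package name -> versions with known vulnerabilities.
# A real scanner would refresh this from a vulnerability database.
ADVISORIES = {
    "phone-validator": {"1.4.0", "1.4.1"},
    "postgres-driver": {"42.2.1"},
}

def vulnerable(dependencies: dict) -> list:
    """Return the names of pinned dependencies that match a known advisory."""
    return sorted(
        name
        for name, version in dependencies.items()
        if version in ADVISORIES.get(name, set())
    )

print(vulnerable({"phone-validator": "1.4.1", "postgres-driver": "42.2.5"}))
# ['phone-validator']
```

Running a check like this on every build, rather than on a human schedule, is what turns the "huge manual effort" into a routine CI gate.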
Container Security

Since a serverless application is, in essence, often running in containers behind the scenes, it makes sense that containers carry all the same concerns as serverless, plus new concerns around the additional functionality containers offer a developer.
Container-specific security concerns can be reduced to two distinct areas: the trustworthiness of the source for the container on which you are basing your deployment, and the level of access the container has to the host operating system.
When running a container on any host, whether Windows or Linux, the container should not run with root or administrator privileges. Using features like namespaces and volumes instead of raw disk access lets the container runtime share storage for persistent data among one or more containers without the containers themselves needing escalated permissions. Some projects, such as Google’s gVisor, go a step further and hide all but the exact system calls a container needs to run.
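A cheap defense-in-depth complement to the non-root rule is a guard in the application’s entrypoint that detects when it has been started as root. This is a minimal sketch under my own naming, not a substitute for configuring a non-root user in the image itself:

```python
import os
import sys
from typing import Optional

def safe_to_run(euid: Optional[int] = None) -> bool:
    """True if the process is NOT running as root (effective UID 0).

    `euid` is a parameter only so the check is easy to unit-test;
    by default the current process is inspected.
    """
    if euid is None:
        # os.geteuid is POSIX-only; treat its absence as "unknown, not root".
        euid = os.geteuid() if hasattr(os, "geteuid") else -1
    return euid != 0

if not safe_to_run():
    # A real entrypoint would sys.exit() here; this sketch only warns.
    print("warning: container process is running as root", file=sys.stderr)
```

The guard catches misconfigured deployments early, at startup, rather than leaving the escalated privileges available to an attacker who compromises the app later.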
The larger concern with containers is the trustworthiness of the layers on which a container image is built. There are multiple ways to address this: point to a specific version that you have tested and are sure of, instead of relying on the latest tag, and expand the scope of any scanning you already have in place for third-party libraries in your serverless apps so that it covers entire containers. This scanning can be performed either ahead of time in the source registry or during the build process, as the images are used as a base to build on.
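The advice to pin a specific, tested version rather than the latest tag can be enforced mechanically, for example as a CI check over the image references a team deploys. This is a rough heuristic of my own, not a full OCI reference parser:

```python
def is_pinned(image_ref: str) -> bool:
    """Heuristic: True if the image reference pins a digest or an
    explicit, non-"latest" tag. Not a full OCI reference parser."""
    if "@sha256:" in image_ref:
        return True  # digest-pinned references are immutable
    name, sep, tag = image_ref.rpartition(":")
    if not sep or "/" in tag:
        # No tag at all (or the ":" belonged to a registry port),
        # which Docker treats as an implicit "latest".
        return False
    return tag != "latest"

print(is_pinned("postgres:11.2"))                       # True
print(is_pinned("postgres"))                            # False
print(is_pinned("registry.example.com:5000/postgres"))  # False
print(is_pinned("nginx:latest"))                        # False
```

Digest pinning is the stronger of the two options, since a named tag can be re-pointed at new content in the registry while a digest cannot.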
Virtual Machine Security
Virtual machines bring yet another superset of concerns that need to be addressed. Books and best practices guides on securing an operating system go back decades, and the U.S. National Institute of Standards and Technology (NIST) maintains a series of checklists for application and OS security. These are reasonable security profiles, but they can always be improved.
One way to improve on them is to limit running services to those that are absolutely required. For example, a default HTTP server is nice for viewing logs, but is it required when your app runs on Java and there are products that can connect over SSH and consolidate logs centrally?
Another is to apply patches as soon as possible after their release. Some patches arrive on a fixed cadence, such as Microsoft’s monthly “Patch Tuesday,” while more critical patches are released the day a fix is available (these are referred to as out-of-band patches). Unlike with containers and serverless, the odds of needing to apply any given patch are much higher on a full virtual machine, since far more packages are required and installed.
Knowing what type of computing environment you and your development teams are deploying applications onto gives you the best chance of applying all the relevant security best practices. Ideally, each application in your portfolio can and will be assessed, and you’ll be encouraged to use the most appropriate and streamlined deployment option available. Moving more applications to containers, and going serverless where appropriate, enables production-like security practices to be enforced much earlier in the development cycle and will ultimately improve your overall security profile.