
Container Security and DevSecOps: The Old Rules No Longer Apply

19 Apr 2017 1:00am, by

Tsvi Korren
Tsvi Korren, CISSP, has been an enterprise IT security professional for 20 years, with a background in business process consulting in large organizations. He has held various technical and customer-facing roles at CA Technologies and Digital Equipment Corporation. Tsvi is currently the director of technical services at Aqua Security, concentrating on building bridges between DevOps and Security.

A common concern of running a container environment is how to ensure that only authorized images can run as containers. Organizations are also challenged with finding a way to implement software and configuration standards in those images, across all product groups and development teams.

In a traditional IT environment, two processes happen in parallel: One is the development of software (“dev.”) The other is preparing the infrastructure on which the software will run (“ops.”) There are established security practices for each of these processes:

  • Software development is secured with static code analysis tools that evaluate source code, check common components for weaknesses, and impose coding standards.
  • Infrastructure setup is secured with server assessment tools that test for versions of operating system and other components, scan for vulnerabilities and patch levels, and impose configuration standards.

These processes run independently and only converge when the newly developed software is installed on the prepared infrastructure. For the most part, if there are problems in the configuration of the infrastructure, they are handled between security and operations with little to no input from development.

While the first set of practices, to secure developed code, is not generally impacted by the transition to applications in containers, the latter set of controls will need to change and adapt to containerized environments.

Containers: New Rules, New Security Processes

With containers, all the operating system components, prerequisite components, and their settings are embedded in the image. Once the image is built and shipped, it should not change. No adjustment of configuration, patching or swapping of components is possible in a running container. The only way to modify the internal environment of an image is to build a new image.

This means that there is only one place where infrastructure security can be implemented: inside the build process. Once an image is deployed, it can no longer be fixed in place.
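One way to plug infrastructure security into the build is a gate step in the CI pipeline that fails the build when scan findings exceed policy. The sketch below is illustrative: the finding format and thresholds are assumptions, not the output of any particular scanner.

```python
# Hypothetical CI gate: block an image build when vulnerability scan
# findings exceed policy. The report format (a list of dicts with a
# "severity" field) and the thresholds are illustrative assumptions.

def gate_build(findings, max_high=0, max_medium=5):
    """Return True if the image may proceed down the pipeline."""
    high = sum(1 for f in findings if f["severity"] == "high")
    medium = sum(1 for f in findings if f["severity"] == "medium")
    return high <= max_high and medium <= max_medium

findings = [
    {"id": "CVE-2017-0001", "severity": "high"},
    {"id": "CVE-2017-0002", "severity": "medium"},
]
print(gate_build(findings))  # a single high-severity finding blocks the build
```

The key property is that the gate runs once, at build time, and a failing image never leaves the build stage.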

Security Needs to Find Its Place Again

Embedding infrastructure security controls in the build process is a major, radical change. It effectively turns existing security processes on their head.

Instead of waiting until the end of the development and integration process and then iterating through assessment, patching and configuration fixes, security has just one shot at implementing these controls. It must be done right the first time, when an image is built, before the image moves any further along the CI/CD pipeline. This is what "shifting security to the left" looks like in practice.

At build time, it is important to enforce all the elements of the infrastructure security policy, along with policies that are unique to container images:

  • An image is built from an approved base image (template).
  • Server software components have an acceptable level of vulnerability exposure.
  • Server software components are at the minimal required version.
  • The configuration of the image’s operating system is up to organization standards.
  • Image metadata has the required elements, user context setting and entry point settings.
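Several of the policies above can be evaluated directly against an image's configuration metadata. The sketch below checks the base image, user context and entry point; the field names and the `APPROVED_BASES` registry path are assumptions for illustration, loosely modeled on container image config data, not a real scanner API.

```python
# Illustrative policy check over an image's config metadata. The
# dict shape ("BaseImage", "User", "Entrypoint") and the approved
# base-image list are hypothetical, for illustration only.

APPROVED_BASES = {"registry.example.com/base/alpine:3.5"}

def check_image(config):
    """Return a list of policy violations (empty means compliant)."""
    problems = []
    if config.get("BaseImage") not in APPROVED_BASES:
        problems.append("not built from an approved base image")
    if config.get("User") in (None, "", "root", "0"):
        problems.append("runs as root; set a non-root user context")
    if not config.get("Entrypoint"):
        problems.append("no entry point defined")
    return problems

config = {"BaseImage": "registry.example.com/base/alpine:3.5",
          "User": "app", "Entrypoint": ["/app/run"]}
print(check_image(config))  # [] means the image passes policy
```

Checks like these are cheap to run on every build, which is exactly where the one-shot enforcement described above has to happen.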

Ideally, these policies should mirror the ones currently used on hosts in the physical, virtual or cloud environment. For example, the list of unacceptable vulnerabilities for images should be the very same one that is used to evaluate servers for compliance.
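In code, mirroring the host policy can be as simple as applying one shared blocklist to both server scans and image scans. The blocklist below names two real, widely known CVEs (Heartbleed and Shellshock) purely as examples; in practice the list would come from the same compliance system that evaluates servers.

```python
# Sketch: one shared vulnerability policy applied to hosts and
# images alike. The blocklist contents are examples; a real program
# would load them from the organization's compliance source.

UNACCEPTABLE_CVES = {
    "CVE-2014-0160",  # Heartbleed
    "CVE-2014-6271",  # Shellshock
}

def compliant(found_cves):
    """Works on a server scan result or an image scan result alike."""
    return UNACCEPTABLE_CVES.isdisjoint(found_cves)

print(compliant({"CVE-2017-0001"}))  # True: nothing on the blocklist
print(compliant({"CVE-2014-0160"}))  # False: Heartbleed is present
```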

Time for Security to Modernize

For a very long time, security organizations have been concerned with unauthorized changes. Entire practices of jump servers, privileged identity management, administrative logging, change windows and root-cause analysis are all designed to account for, detect and negate unauthorized changes to software components and their configuration. Continuous vulnerability assessment of hosts, both internal and external, is designed, in part, to chase after inevitable changes in the IT infrastructure and measure their impact.

Containerized environments are achieving the seemingly impossible. They are both dynamic and consistent. With containers, there is no need to ever touch a host, since the host carries no meaningful payload or configuration (apart from the container engine). There is also never a need to change a running container, since any such change will be overwritten when the orchestration moves or recreates the container. There is never a need to modify an image on a host. In short, there is never a need to change.

Where change does happen is in the building of new images to replace, augment or, yes, patch containers that no longer properly serve their intended purpose. If security plugs into that process, and does it effectively, it can achieve the seemingly impossible: create inherently more secure applications, faster and more efficiently than ever before.

Feature image via Pixabay.

