
Cloud-Native Security Patching with DevOps Best Practices

May 2nd, 2018 8:52am by Liz Rice
Feature image via Pixabay.

Liz Rice
Liz Rice is the technical evangelist at container security specialists Aqua Security. Prior to that she was CEO of Microscaling Systems and one of the developers of MicroBadger, the tool for managing container metadata.

In a traditional deployment, a key responsibility for the security team is making sure that the servers are up-to-date with the latest in security patches. So at first glance, a cloud-native deployment could look like a nightmare to a security professional: thousands of containers, each with their own versions of different operating system files, packages and executables. Doesn’t this multiply the patching problem by orders of magnitude?

Fortunately, some of the main tenets of today’s best practices in DevOps can really help. Let’s see how immutable artifacts and automation simplify the process — and increase the effectiveness — of keeping a cloud-native deployment properly patched.

Container Images Are Immutable

In today’s world, the basic unit of deployment is a container image. Once you build a container image, it can’t be changed; if you want to update it, you need to build a new version.

When you start a container, it’s instantiated from a container image, with its filesystem starting out as a duplicate of that image’s contents. It’s certainly possible — in theory — to treat that container as if it were a server in the old-fashioned way of doing things: you could set things up so that you can SSH into the container and apply patches to it. But it’s a much, much better idea to build a new image with the patches and restart the container, for several reasons:

  • Build once, and run as many instances as you want. You don’t need to patch each container individually; you only need to rebuild the image once, including the patched versions of any packages that need updating, and then you can re-deploy the same code to all your container instances. Kubernetes and other orchestrators make it easy to do this with rolling upgrades, as sketched just after this list.
  • It’s hard to keep a reliable record when you patch servers (or containers) manually. In contrast, your Dockerfiles give you a record of how each container image was built. You may need to combine this with container image scanning to get a precise picture of exactly which package versions were included, but again you only need to scan an image once to have a complete picture of what’s included in all your running containers. As a corollary, you can be confident that all the containers running from the same image are using precisely the same versions of every package in that image. This can be a huge help when it comes to debugging problems in the deployed code.
  • In the vast majority of cases, you don’t need the ability to access containers through tools like SSH, so you can leave those tools out of the container image in the first place and dramatically reduce the attack surface of your containers.
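As a concrete sketch of that rebuild-and-roll-out flow: the commands below rebuild a patched image, push it to a registry, and let Kubernetes perform a rolling upgrade of every running instance. The registry, image, tag and deployment names are hypothetical, and the details will vary with your setup.

    # Rebuild the image so it picks up the patched packages, and give it a new tag.
    docker build -t registry.example.com/myapp:1.2.4 .
    docker push registry.example.com/myapp:1.2.4

    # Ask Kubernetes to roll every running instance over to the rebuilt image.
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.2.4
    kubectl rollout status deployment/myapp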

It’s possible to write container code that installs additional packages into itself at runtime. This is very bad practice: you can’t know until the container starts exactly what software it will be running, and you can end up with containers that were started from the same image but are not executing the same code. From the security perspective of ensuring you’re not running with vulnerabilities, that’s a terrible position to be in.
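To make the contrast concrete, here is a minimal sketch (the base image, package and image name are purely illustrative). In the first command, the package version that ends up in the container depends on whatever the repository serves at start time, so two containers from the same image can diverge; in the second, the package is baked into the image at build time and is identical in every container.

    # Anti-pattern: installing software when the container starts. The curl version
    # depends on what the package repository serves at that moment.
    docker run --rm alpine:3.7 sh -c 'apk add --no-cache curl && curl --version'

    # Better: install the package at image build time, so the version is fixed
    # in the image. (Dockerfile supplied on stdin just to keep the sketch short.)
    printf 'FROM alpine:3.7\nRUN apk add --no-cache curl\n' | docker build -t myapp-base -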

As a consequence of not including unnecessary tools and packages in your container images, you’re likely to have fewer instances of code that needs patching when new vulnerabilities are found, simply because your images are less likely to include the affected code. But we still have to consider the question of how to identify which of your images need to be patched, and when. And before we do that, let’s take a little time to talk about how vulnerabilities are identified.

Lifecycle of a Patch

Security researchers around the world work day in, day out looking for new ways in which existing code might be abused to create unexpected side effects. For example, in the recent case of Meltdown and Spectre, it became evident that speculative execution — a feature designed to improve performance — could be exploited to access kernel memory that wasn’t supposed to be accessible. There are standard schemes for identifying these vulnerabilities, such as the CVE identifiers used in the National Vulnerability Database (Meltdown, for instance, is tracked as CVE-2017-5754).

Publishing a description of an exploit helps the community at large understand that there is a risk of being attacked through it, but it also alerts the bad guys to a new potential attack vector they can use. For that reason, if the exploit is serious, researchers and the affected vendor will typically agree to keep the details under wraps for a period of time, giving the code’s vendor (or author) the chance to release a fix before the details become public.

Security teams figure out exactly which package versions are affected; at the same time, the vendor works on a new version of the code that closes off the exploit and releases it as a patched package.

Then it becomes a race to deploy that patched package before an attacker gets to your deployment!

Image Scanning

Image scanners look at the software packages included in your container image and cross-check them against vulnerability databases (like the NVD) to determine which CVEs, if any, affect that precise set of packages. There are several scanners available, including free tools like CoreOS’s Clair and Aqua Security’s newly released MicroScanner, as well as paid, enterprise-grade solutions.

Detecting vulnerabilities sounds straightforward enough, but in practice there are complications. For example, package A might include a vulnerability that can only be exploited if package B is present, or if a particular type of network protocol is used. If those circumstances don’t apply to you, you may not want to apply the patched version of package A, particularly if the fix isn’t compatible with your application code. Also, different Linux distributions may backport fixes to older versions of a package. The NVD doesn’t track these backported fixes across all the different distributions, so relying on NVD data alone can lead to a lot of false positives. The ability to handle false positives effectively is a key differentiator between image scanners.

We saw above how using container images, and re-building a new image whenever a code change is needed, can reduce the scale of the patching problem and improve the security posture at the same time. Now let’s see how automation — in particular, CI/CD pipelines — can help even further.

Automated Vulnerability Detection

We’ve considered re-building container images to apply patches. You could do this manually, but the vast majority of cloud-native code is built using continuous integration tools like Jenkins, Travis, CodeShip or GitLab. The cloud-native approach to security scanning involves including it as a step in this continuous integration pipeline.

With image scanning included in your CI pipeline, you can check for newly introduced vulnerabilities on every code change, and automatically fail the build if, for example, a high-severity issue is detected.
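As a sketch of what that pipeline step might look like: the script below builds the image, scans it, and only pushes it if the scan passes. The scan-image command is a stand-in for whichever scanner you use (assume it exits non-zero when it finds issues at or above the given severity); the image name and CI variable are hypothetical.

    #!/bin/sh
    set -e  # abort the pipeline as soon as any step fails

    IMAGE="registry.example.com/myapp:${CI_COMMIT_SHA:-dev}"

    docker build -t "$IMAGE" .

    # Hypothetical scanner CLI: a non-zero exit code on high-severity findings
    # fails the build right here, before anything is pushed.
    scan-image --fail-on high "$IMAGE"

    # Only images that passed the scan get pushed (and later deployed).
    docker push "$IMAGE"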

Running image scanning on your deployed container images on a regular basis allows you to get alerts when a new vulnerability has been found that affects your code. And as you’re using immutable container images as the basis for your containers, there is no need to scan the containers themselves. This can save a huge amount of time and resources, as you only have to re-scan an image rather than the thousands of running containers that were spawned from that image. When you find an affected container image, you can rebuild it with the update and re-deploy all the affected running containers.
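A rough sketch of that regular re-scan, reusing the hypothetical scan-image command from the pipeline example above: list the unique images currently running in the cluster (the jsonpath query is taken from the Kubernetes documentation) and scan each image once, rather than scanning every container.

    # Collect the unique set of images running in the cluster, then re-scan each one.
    kubectl get pods --all-namespaces -o jsonpath='{..image}' |
      tr -s '[[:space:]]' '\n' | sort -u |
      while read -r image; do
        scan-image --fail-on high "$image" || echo "rebuild and redeploy needed: $image"
      done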

In traditional deployments, security patching is typically handled by a security or operations team at a stage well after the development team has finished its work. With image scanning included in the pipeline, developers become invested in using the appropriate versions of base images, packages and libraries. Security “shifts left” and becomes a shared responsibility rather than being siloed entirely into a different team.

More Effective Security

We’ve seen how using containers based on immutable images allows us to efficiently update deployments with new code, and gives us the potential to reduce the container attack surface. Vulnerability detection can easily be included in the build pipeline through image scanning, so that we can be alerted whenever an updated image is required for security purposes. Shifting security left towards developers gives them more awareness of potential issues, and stops security from being treated as an afterthought. In my view, these reasons all contribute to making cloud-native deployments more secure than their traditional equivalents.

TNS owner Insight Partners is an investor in: Pragma, Aqua Security.