
Docker Security Fundamentals and Best Practices

Nov 12th, 2019 3:00pm

Docker. You know it. You use it. You might well be confused by it. Is it Docker? Or is it docker? And why has Docker (or docker) taken a back seat to Kubernetes? There are so many questions regarding this technology. But one question that weighs on the minds of many a container administrator is security.

You might think that, given the isolated nature of containers, they'd be inherently secure. On certain levels, that is the case. If a container is deployed correctly, it cannot access the host environment it was deployed into. In theory. But, as we've all discovered in the realm of technology, where there's a will, there's a way.


Subscribe on Simplecast | Pocket Casts | Stitcher | Apple Podcasts | Overcast | Spotify | TuneIn

Although you might fully understand how to deploy a containerized application and scale it out to meet the needs of your company, are you taking the necessary steps to ensure that the application, and the hosting environment, are as secure as possible?

What can you do?

Security Begins at the Image

The biggest issue facing container security is the images those containers are based upon. According to Scott McCarty, principal product manager for containers at Red Hat, “Since developers are now in charge of starting with a container base image, downloading and using libraries, and adding their own code, they essentially have a converged supply chain that may have vulnerabilities at any part of the stack which touches the network.” That supply chain begins with the image, to which McCarty says that developers should “always start with a rock-solid base image.” Listen to the full interview with McCarty on The New Stack Makers podcast, above.

What does that mean? Simple. Only use official images from known entities. You must remember that images contain code from the internet and code created by a developer (or group of developers). If you use an official Red Hat image (from the Red Hat Container Catalog), you can be sure that image is safe, because it has been vetted and carries the backing of a known company. By contrast, an unofficial image (downloaded from an unofficial registry) could have malicious code tucked inside, waiting to be unleashed on your network or clients.

Take for instance the CentOS images found on Docker Hub. The first listed is the official CentOS image. That image is clearly tagged as official (Figure 1).

Figure 1: The official CentOS image on Docker Hub.

However, if you look below that official image, you’ll find a number of unofficial images that are ready for pulling. You have no guarantee those images do not contain malicious code. If security is at the top of your list of must-haves (and it should be), never pull anything but an official image from a registry and never use a registry that is not a known commodity.
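In practice, you can enforce this habit at the command line. The sketch below uses standard Docker CLI commands: Docker Content Trust, when enabled, makes `docker pull` refuse images that are not signed by their publisher.

```shell
# Pull only the official CentOS image from Docker Hub.
docker pull centos:8

# With Docker Content Trust enabled, pulls fail unless the
# image carries a valid publisher signature.
export DOCKER_CONTENT_TRUST=1
docker pull centos:8

# Inspect the signing data attached to an image.
docker trust inspect --pretty centos:8
```

Setting `DOCKER_CONTENT_TRUST=1` host-wide (or in CI) turns "only pull trusted images" from a policy on paper into something the tooling enforces.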

If you are a developer, working to craft an image that will serve as the base for your containers, there are best practices you can follow to ensure those images are better secured. According to McCarty, “…developers should split their applications up into code, configuration and data.” The code (binaries such as web servers, Python, and Java) should live within the container image. The configuration and data should come from the environment (such as information passed via a .env file).

“For example, development clusters should use development passwords, while production clusters should use production passwords (including cryptographic keys),” McCarty continues. “Passwords, certificates, tokens, API keys, and other secrets can be offloaded to external stores with Kubernetes.”

In other words, don’t pack everything into a single image.
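McCarty's code/configuration/data split can be sketched with standard Docker and kubectl commands. The file and image names below (`app.env`, `my-app`, `db-credentials`) are hypothetical, chosen for illustration:

```shell
# Keep configuration out of the image: pass it in at run time.
# app.env is a hypothetical file containing, e.g.:
#   DB_HOST=db.internal
#   LOG_LEVEL=info
docker run --env-file app.env my-app:latest

# Secrets belong in an external store, never baked into the image.
# With Kubernetes, create a Secret and reference it from the pod spec:
kubectl create secret generic db-credentials \
  --from-literal=username=appuser \
  --from-file=password=./db-password.txt
```

Because the image contains only code, the same image can be promoted from development to production unchanged, with only the environment (and its passwords and keys) differing between clusters.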

Role-Based Access Control

To gain as much security as possible, container administrators must understand Role-Based Access Control (RBAC). These access controls determine whether a user is allowed to perform an action within a container or project. RBAC is similar to ACLs within the Linux operating system. With regard to containers, McCarty says, RBAC “…allows developers to use local roles and bindings to control who has access to their projects.” It is important to know, however, that “…authorization is a separate step from authentication, which is more about determining the identity of who is taking the action.”

When working with containers, you give developers access to clusters to define things like routes, DNS names, component communication (such as client to server, service mesh, and microservices). When a developer has access to these features, it means they also have access to the Kubernetes APIs. Without RBAC in place, those developers have absolute control and could, if they were so inclined, wreak havoc on container security. With RBAC in place, the damage from either malicious intent or accident is limited.

By employing RBAC it becomes possible to create very granular permissions for developers. You can create teams, such as DevOps, developers, alpha, and beta — each of which has different permissions (such as view only, restricted control, full control, and admin), and then assign users to teams. When a user is assigned to a team, the associated access will apply.
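The team-based permissions described above map directly onto Kubernetes RBAC objects. A minimal sketch, using standard kubectl commands (the namespace, role, group, and user names are hypothetical):

```shell
# A Role granting view-only access to pods in the "alpha" namespace.
kubectl create role pod-viewer \
  --verb=get --verb=list --verb=watch \
  --resource=pods --namespace=alpha

# Bind the role to a team group; everyone in the group gets
# view-only access, nothing more.
kubectl create rolebinding alpha-viewers \
  --role=pod-viewer --group=alpha-team --namespace=alpha

# Verify what a given user can (and cannot) do.
kubectl auth can-i delete pods --namespace=alpha --as=jane
```

Because bindings are per-namespace, each team's permissions stop at the edge of its own project, which is exactly the containment RBAC is meant to provide.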

Anthony Woods, co-founder and CTO of Grafana Labs, starts his discussion of RBAC by saying, “Principles of least privilege is a well-established approach to security management. In line with this approach, Role-Based Access Control (RBAC) provides a mechanism to limit an individual’s access to a system to only what they need based on their role in the system.” He then describes three key areas that RBAC needs to cover to maintain a secure environment. Those areas are:

  • Resource and privilege levels for containers. Woods says, “Policies need to be in place to limit the types of container configurations that users are allowed to deploy into an environment.” Woods also warns that “applications being deployed by development teams should only have limited access to resources and should run with the least privileges needed.”
  • Specific containers users should be able to see. To this point, Woods believes that although “a single Docker host may be running containers deployed by multiple teams, controls are needed to limit users to only being able to see the containers for their team.”
  • The types of actions users can perform on containers. Woods notes that some of the actions users can perform carry real security implications. For example, “creating and deleting containers, viewing logs or being able to execute commands inside running containers. Not all users need to perform all of these actions, and so policies should be put in place to restrict the actions that users can perform.”
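Woods' first point, limiting the privileges and resources of the containers themselves, can be sketched with standard `docker run` flags (the image name `my-app` is hypothetical):

```shell
# Run a container with the least privileges it needs.
docker run \
  --user 1000:1000 \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --read-only \
  --memory 256m \
  --cpus 0.5 \
  my-app:latest
# --user:               run as an unprivileged UID, never root
# --cap-drop/--cap-add: drop all Linux capabilities, then add back
#                       only the one this workload actually needs
# --read-only:          make the root filesystem immutable
# --memory/--cpus:      bound resource consumption on the shared host
```

An orchestrator's admission policies can then reject any deployment that does not conform to these constraints, which is the "policies need to be in place" part of Woods' advice.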

Container Security Best Practices

Within every piece of technology, there are best practices to consider. Containers are not immune. In fact, when deploying containers there is a multitude of factors to consider. McCarty starts his best practices summation on a fundamental level by saying, “First and foremost they should realize that containers are just fancy files at rest (literally tar files), and fancy processes while running (simply Linux processes with extra security enabled).” McCarty then expands on this with a list of ideas that best practices should be applied to: “These two fundamental concepts will allow security administrators to apply standard best practices around confidentiality, availability, integrity, non-repudiation (signing), defense-in-depth, and tenancy.”
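McCarty's "fancy files, fancy processes" framing is easy to verify from the command line on any Docker host (these are standard Docker commands; the container name `demo` is illustrative):

```shell
# At rest: an image is just layered tar archives plus metadata.
docker save -o centos.tar centos:8
tar tf centos.tar | head

# While running: a container is an ordinary Linux process,
# confined by namespaces, cgroups, and (on Red Hat systems) SELinux.
docker run -d --name demo centos:8 sleep 300
docker top demo
```

Seeing the container's `sleep` process in `docker top` (or in plain `ps` on the host) is the point: the security tools you already use for files and processes still apply.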

The biggest concern, for McCarty, is that too many administrators want the easy way out with a checklist, “when they should be looking for ways to connect the concepts they already know to the newer model and technology we are implementing with containers and things like Kubernetes.”

Woods offers up a to-the-point bulleted list of best practices, which includes:

  • Use an orchestration platform (such as Kubernetes) that provides the tools to control workloads and how users interact with containers.
  • Follow the principle of “least privileges,” where a user’s level of access is limited to only what they require and no more.
  • Use vulnerability scanners to check for known issues in images.
  • Build containers as a part of your CI/CD pipeline and make sure to update all dependent layers.
  • Use change management processes to make sure only approved containers are deployed.
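The scanning and pipeline items in Woods' list can be sketched as CI steps. Trivy is used here as one example of an open source image scanner, and the image name is hypothetical:

```shell
# Scan the freshly built image for known CVEs; fail the CI job
# if any HIGH or CRITICAL vulnerabilities are found.
trivy image --exit-code 1 --severity HIGH,CRITICAL my-app:latest

# Rebuild with --pull so updated base-image layers (and their
# security fixes) are picked up rather than served from cache.
docker build --pull -t my-app:latest .
```

Gating the pipeline on the scanner's exit code is what turns "use vulnerability scanners" from advice into an enforced control: an image with known critical flaws simply never reaches the registry.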

Containers have become crucial to helping businesses to run efficiently, reliably, and with heretofore unknown agility. But keeping those containers secure, so your company doesn’t wind up with a nightmare on its hands, takes a bit of work. With the right amount of planning and diligent administration, your container deployments can enjoy the same level of security as the rest of your business.

Red Hat OpenShift is a sponsor of The New Stack.

Feature image via Pixabay.

TNS owner Insight Partners is an investor in: Docker.