Container Security and Docker’s Pluggable Architecture

29 Sep 2016 8:56am

In this episode of The New Stack Analysts podcast, we delve into the challenges of networking containers, how container namespaces have evolved, and the evolving state of container security today. The discussion also touches on the importance of pluggable ecosystems, and how implementing pluggable models benefits both vendors and users of Docker.

Phil Estes, a Senior Technical Staff Member for Open Cloud Technologies at IBM, was interviewed by TNS founder Alex Williams for our latest ebook: Container Networking, Security, and Storage with Docker and Containers.

#105: Bridging Open Source and Container Communities

Listen to all TNS podcasts on Simplecast.

The conversation can also be enjoyed on YouTube.

The conversation kicked off with a look into the history of containers. Estes noted that the low-level Linux kernel features underpinning containers aren’t usually well known to the general public, in contrast to platforms like VirtualBox and VMware. “Docker didn’t come out of the ether. It evolved the compute we think of as a container in Linux. For people coming to that world from something very concrete like a VM, they come back to containers and see that the isolation pieces of why we call it a container really came about over a period of time.”

Estes also voiced a positive outlook for the security of these low-level pieces over the next few years, highlighting an article contributed to The New Stack by RackN CEO Rob Hirschfeld detailing 13 ways that Docker containers are more secure than traditional VMs.

The burden of security does not rest entirely upon container execution layers, Estes explained. Rather, it is shared between the container and its application.

“My application has to do smart things about how I use data, how I pass any kind of encrypted keys, settings, or passwords. As usual, security is a shared task for application developers, and the container community is taking it very seriously,” Estes said.
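One common way an application can “do smart things” with passwords and keys is to read them from a file mounted into the container (as Docker secrets are, under `/run/secrets`) rather than from an environment variable, which is more easily leaked through `docker inspect`, process listings, or logs. A minimal sketch, with the secret name and fallback behavior as illustrative assumptions rather than anything prescribed in the discussion:

```python
import os


def load_secret(name, secrets_dir="/run/secrets"):
    """Read a secret from a file mounted into the container,
    falling back to an environment variable for local development."""
    path = os.path.join(secrets_dir, name)
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        # Fallback: env vars are convenient, but they are visible
        # to anyone who can inspect the container or its processes.
        return os.environ.get(name.upper())
```

The file-first design keeps credentials out of the image and out of the container’s environment, leaving the orchestrator in charge of delivering them at runtime.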

Pluggable architectures are a focus of the upstream Docker community, partly due to the number of options available for logging drivers and frameworks. Estes argued that the Docker engine should not be in control of these services, explaining, “That should be a pluggable component, where if I’m a log service provider I should be able to plug into that service. Because a lot of these areas are fast moving, Docker has chosen to do a pluggable model so vendors can fit those pieces in instead of having the engine trying to keep track of all those technologies.”
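In practice, the pluggable model Estes describes surfaces as a per-container choice of logging driver on the Docker CLI. A sketch of what that looks like, where the syslog endpoint and the vendor plugin name are placeholders, not real services:

```shell
# Select a logging driver per container, rather than having the
# engine hard-wire every logging integration:
docker run --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  nginx

# A log service vendor can ship its own driver as a plugin,
# which is then selected the same way:
docker run --log-driver=<vendor>/<log-plugin> nginx
```

The engine only needs to know the plugin interface; the vendor, not Docker, keeps pace with its own service.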

The variety of orchestration platforms available to vendors and users today has prompted much discussion about how these platforms differentiate themselves on security. However, Estes noted that at the container-processing level they share a great deal of commonality. “We’d be talking more about application level security. In application coordination layers, I think you’d see differentiation where maybe one would be better than others.”

Docker and IBM are sponsors of The New Stack.

Feature image via Pixabay.

This post is part of a larger story we're telling about the state of the container ecosystem.

Get the Full Story in the Ebook