Rethinking Infrastructure, an Interview With Docker’s Security Director
Docker’s emphasis on the customer’s experience is a hallmark of its stated mission to create tools of mass innovation. That philosophy is directly relevant to Docker 1.12, which integrates clustering and orchestration into the Docker Engine itself.
Nathan McCauley is director of security for Docker. Earlier this summer, McCauley spoke with Lee Calcote, one of the authors of our latest ebook, Networking, Security, and Storage with Docker and Containers.
Docker strikes an interesting pose with its views on rethinking infrastructure and what that means. Docker open sources the tools it uses itself for managing containers, and its use of its own tools offers a window into Docker's microservices approach. That approach mirrors McCauley's stated goal at Docker: to increase the security of everyone running their infrastructure.
Docker focuses on securing resources and building automation into its platform. In part to achieve this, Docker 1.12 uses Transport Layer Security (TLS) to encrypt communication between every node in a given Docker Swarm deployment. Swarm is the component that clusters and orchestrates Docker containers.
The conversation can also be heard on YouTube.
Docker 1.12 introduces cryptographic node identity. Container naming conventions have in the past frustrated developers, who often are unable to quickly locate and address problem containers when running applications at scale, because containers and components are not clearly identified. Docker's cryptographic node identities address these issues by introducing a public key infrastructure (PKI) along with a TLS identity for each node.
“Once every node has a TLS identity, you can build really interesting stuff like automatic encryption between all of the communication between the nodes and the cluster. Based on that, you can make decisions about what workloads will [or] won’t run on which hosts,” McCauley noted. This lets DevOps teams further secure their systems by creating new policies based on individual node identities.
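As a rough sketch of what this looks like in practice with the Docker 1.12 CLI (the IP address, node name, and label are illustrative, not from the interview), initializing a swarm creates a certificate authority, joining nodes receive their own TLS identities, and placement policies can then key off node attributes:

```shell
# Initialize a new swarm on this host. It becomes a manager node,
# and a root certificate authority for the cluster is created automatically.
docker swarm init --advertise-addr 192.168.99.100

# On another host, join the swarm as a worker using the token printed above.
# The joining node is issued its own TLS certificate by the swarm CA,
# and all node-to-node traffic is mutually authenticated and encrypted.
docker swarm join --token <worker-token> 192.168.99.100:2377

# Back on the manager: attach a label to a node (label name is hypothetical)...
docker node update --label-add security=high node-1

# ...then constrain a service so its tasks run only on nodes with that label.
docker service create --name web \
  --constraint 'node.labels.security == high' \
  nginx:alpine
```

This is the kind of identity-based workload decision McCauley describes: because each node's identity is cryptographically verified, a scheduling constraint on a node label is backed by more than a self-reported hostname.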
While many developer teams are making the shift toward a container-based infrastructure, they also need to understand how Docker itself approaches container security.
“What many organizations are seeing is a fundamental shift to a DevOps mentality. With that DevOps mentality, they’re realizing that they need kind of a lever to implement security controls,” said McCauley, adding that, “so many organizations are seeing it as a way to have security be part of the new process where prior to DevOps, it was more of a human-oriented process. They’re now realizing that the tooling that can come along with Docker and containers allows them to have a lot of the same kinds of controls they want and need.”
Having these security features available out of the box has prompted discussion about whether enabling them should be left to developers' choice. Drawing a comparison to the security features Apple ships enabled by default on its devices, McCauley explained: “In the cases where it is possible to just do the right thing, we kind of feel like there is no reason to have configurability there. Just build it in, build it by default.”
Ultimately, while many large enterprises can recruit large numbers of security-focused developers and operations employees, McCauley emphasized that Docker is also focused on helping smaller teams achieve their goals. “When we think about how we build things, as much as we can, we don’t want to have our products have any sharp edges where you can get cut by doing something incorrectly. We want to help folks get around that.”
Docker is a sponsor of The New Stack.
Feature image by Massimo Mancini via Unsplash.