
Docker Incorporates Secrets Management into Swarm to Strengthen Its Datacenter Platform

10 Feb 2017 7:45am

In the next step of consolidating features around its Swarm orchestrator, Docker Inc. has added a new security function called secrets management — designed to ensure the integrity of highly distributed applications — to the Swarm orchestrator in its Docker Datacenter platform.

“If you take an application that might be doing something very creative or very useful, and put it in a Docker container, it gets more secure, and it gets more able to defend itself,” asserted Nathan McCauley, Docker’s director of security, in an interview with The New Stack. “By being in the platform, you are arming the application.”

Secrets, in this context, refers to data that is confidential to an application in progress, such as its users’ credentials, its database’s access keys, or configuration details and environment variables describing the platform on which it’s currently running.  Swarm’s upcoming addition, as part of what’s being called Docker Datacenter on 1.13, promises the ability to inject such secrets directly into a containerized application, without being visible to any other process either in transit or at rest.

This type of confidential communication is critical to the integrity of highly distributed applications on multi-tenant platforms. Without some sort of secrets mechanism, it becomes theoretically possible for a malicious service to masquerade as a legitimate one. Indeed, some microservices architectures may inadvertently expose applications to this type of tampering — by enabling a running application’s components to be “updated” at will without verification.

Self-integration

Of course, secrets management is nothing new to regular readers of The New Stack, who are probably already familiar with Vault, HashiCorp’s service for enabling confidential data exchanges in distributed systems. We’ve seen indications that Vault has become popular as a secrets manager not just for use with HashiCorp’s own Nomad scheduler, but also for use with Kubernetes — for example, this Apache-licensed Kubernetes + Vault integration project on GitHub.

Docker’s contention is that any such integration of a third-party utility with a platform that lacks the capability natively carries with it, by design, the probability of vulnerability and the likelihood of exploitability.

“In order for a platform-wide solution to work — something that works across the board,” said McCauley, “you need to have a solution that materializes both in the platform and in the container. Kind of by definition, the application that’s running in the container is going to need to have an interface to get the secrets. They show up in the container as well as in the platform.”

In a company blog post Thursday, Docker security engineer Ying Li explained how Swarm users will be able to initiate a secret-sharing mechanism between platform and container, using the new command docker secret create. A certificate authority (CA) will already have been spun up by Swarm, for exclusive use within the platform. As McCauley verified for us, this CA will account for the identities of all nodes in Swarm, including the manager node with its Docker daemon, and the worker nodes with their Docker daemons. All transmission takes place over TLS encryption, Li wrote, and all data at rest is encrypted using the Salsa20Poly1305 256-bit cipher.
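Once a secret has been created and granted to a service, Swarm surfaces it inside each task as a file under /run/secrets named after the secret, so the value never has to appear in the image or the environment. A minimal Python sketch of how a containerized application might consume one — the secret name and the base_dir parameter are illustrative, the latter added only so the function can be exercised outside a container:

```python
from pathlib import Path

def read_secret(name: str, base_dir: str = "/run/secrets") -> str:
    """Read a Swarm-provided secret from its mounted file.

    Swarm mounts each secret granted to a service as a file named
    after the secret, so the application reads it at startup rather
    than receiving it via environment variables or baked-in config.
    """
    secret_file = Path(base_dir) / name
    return secret_file.read_text().strip()

# Inside a service task, this would read /run/secrets/db_password:
# password = read_secret("db_password")
```

The secret itself would be created beforehand with the docker secret create command Li’s post describes, then granted to a service at deployment time.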

Referring to what Docker is calling a “container-native” solution for secrets management, McCauley continued, “What we’re trying to convey is that there are problems with secrets management. This problem set is fundamentally changed by the nature of containers, and thus we’re calling it ‘container-native’ in order to address some of the unique characteristics of a container-based platform.”

Prior to the advent of containerization, he told us, applications were free to assume static configurations — which, before the era of highly distributed systems, would also not have had to be so secret. The rise of containerization, where application components are ephemeral rather than static, brought with it the first wave of secrets management solutions, some of which were merely methodologies rather than products, and none of which he referred to by name. The open source variety of those techniques, he admitted, introduced the world to the need for secrets management.

Nevertheless, McCauley said, “I would summarize them as people hacking together a solution.” A common example he cited was embedding secrets inside source code, prior to pushing it to a repository — a temporary, though convenient, fix. Of course, a “secret” inside an open source code file is no secret at all.

“Imagine if you lived in an apartment,” suggested McCauley, “and just by virtue of being a resident of that apartment, you’re able to know everybody’s e-mail password within that complex. That’s what’s passing for secrets management among popular container orchestrators today, and we’re pretty unhappy about that.”

Attack of the Big Blue Whale

The second wave of secrets management solutions, as he characterized it, adds a degree or two of customization; these solutions vary in quality on a scale between “not very secure” and “quite secure.”

“But all of them fundamentally get bolted onto the side of the container platform,” he said.

The wording in Docker’s press presentation included the following: “Other solutions are bolted on to app platform as an afterthought. Other container orchestrators are insecure and cannot support multiple apps on the same cluster.”

There is evidence within Kubernetes to substantiate McCauley’s claims. Kubernetes’ official documentation for secrets clearly describes that platform as exposing secrets to an entire pod. It does explain how it can be theoretically possible to limit exposure of secrets to a single container within a pod, though doing so involves explicitly partitioning the application: one component “which handles user interaction and business logic, but which cannot see the private key,” and another which acts as the explicit signer.
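The partitioning the documentation describes can be sketched as a pod spec in which the secret volume is mounted only into the signing container. All names here are illustrative, and the referenced secret is assumed to already exist in the namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: partitioned-app          # illustrative name
spec:
  volumes:
  - name: signing-key
    secret:
      secretName: private-key    # assumed to exist in the namespace
  containers:
  - name: frontend               # business logic; no secret volume mounted
    image: example/frontend
  - name: signer                 # only this container can read the key
    image: example/signer
    volumeMounts:
    - name: signing-key
      mountPath: /etc/signing-key
      readOnly: true
```

The frontend container talks to the signer over the pod’s shared network namespace, so the private key never leaves the signer’s filesystem.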

In a presentation for a Red Hat conference last April, Jerry Jalava, a senior system architect with Google chops for Helsinki mobile dev firm Qvik, advised developers against embedding secrets inside source code or in Docker container images. Jalava introduced secrets as “first-class citizens” in the Kubernetes ecosystem, though as one of the methodology’s “Cons,” he warned that secrets in Kubernetes’ etcd key/value store are stored in plaintext. Thus, he advised, administrators might want to come up with some way to ensure that only authorized individuals gain access to the etcd file.

Not quite one year ago, a Kubernetes contributing engineer was on the record as asserting that Kubernetes’ secrets system was efficient enough, so long as one didn’t mind building another security system to hold it together. The problem was there, and presumably still is.

Heterogeneity Reconsidered

But is it the responsibility of the core of every platform to provide every single category of functionality it may require? Or, if it’s truly the center of an ecosystem, wouldn’t it be preferable to enlist the help of second and third parties to round out its features list? It’s a debate where Docker once championed the side of third-party contributors, particularly when it began advancing its own plug-in architecture.

Now, at least insofar as security is concerned, Docker Inc. is making the case that functionality is most reliable when the platform provides it for itself.

That may be a unique challenge for HashiCorp, whose Vault secrets service supports both Kubernetes and Docker.

“If you’re living in a pure, 100-percent Docker world, then maybe you can make that argument,” remarked HashiCorp Chief Technology Officer Armon Dadgar, speaking with The New Stack. “For most modern enterprises, a small fraction of their workload is actually running Docker. So the reality, for most people, is not that this is a native, silver-bullet solution. Particularly, it overlooks the fact that multi-data centers are the reality for most large organizations.”

Dadgar told us that his company has many customers in the Fortune 2000 sphere of influence, essentially all of which have legacy applications in many different classes and categories (e.g., Windows Server, WebSphere, CMS, ERP, mainframes, vSphere). Interoperability between all of these categories of applications necessitates a “secrets service” (our phrase, not Dadgar’s) that resides outside of each of these platforms.

Put another way: Integrating Vault into any single one of these platforms would be contrary to the whole point of deploying Vault in the first place. Its independence, from Dadgar’s perspective, is its principal virtue. Thus, the fact that Vault would render equivalent qualities of service to both Docker and Kubernetes makes any architectural deficiencies either platform may have had, with regard to secrets management, irrelevant.

“If we hyper-focus and hyper-constrain our problem to 100-percent Docker and one data center,” said Dadgar, “then sure, maybe those claims would hold up. The reality of the situation is, there’s always going to be ‘glue’ — always these multiple systems that need to be integrated.”

For Thursday’s announcement, Docker’s McCauley asserted that integrating secrets management into Swarm gives developers a single console from which to conduct access control and policy management. We asked McCauley whether this integration is meant to suggest that developers (the ones who use Docker) may be best suited to manage security policy in this scenario, rather than infosec personnel (the ones who use a security platform).

“The fundamental Docker thesis is that the answer is both,” the security director responded. “A developer is the one who is responsible for defining what secrets need to exist in their application, because they’re the ones who best know it. And if you provide an interface that’s usable for them, they will do a good job of defining that. But then, just as important a part of that is the IT operations team who can set policies, and take the developers’ decisions about what they need, review them, and deploy them to a cluster. They’re both integral parts of the problem; and if we have a solution that respects the agency of both parts of that, you end up with, overall, a more secure whole.”

As Docker Inc. adopts the position of a platform company, it assumes the stance that was successful for platform companies in the past: Oracle, SAP, IBM, and Microsoft. It’s a competitive field, even in the open source space.  Now, the company with a big blue whale for its logo is looking bigger and bluer.

Feature image: “Digital Orca,” a sculpture by Douglas Coupland outside the Vancouver Convention Centre, taken by 3dpete and licensed under Creative Commons.


