DockerCon 2015: What Did We Learn This Week?
The theme of this year’s North American DockerCon was repeated throughout the compressed gauntlet of the exhibition halls, and in a few cases, printed on the backs of T-shirts: Let’s move containerization out of the experimentation phase and into production. In informal surveys Monday and Tuesday at DockerCon 2015 in San Francisco, including shows of hands, fewer than one-third of developers said they work in organizations where containers are deployed in production.
What’s preventing containers from escaping the sandbox is not something so simple as a lack of initiative or willpower from developers. There’s a lack of appropriate tooling. There’s concern over how containerized environments will continue to be supported by vendors in the long term, and whether some vendors will survive that long.
Docker did surprise many with its agreement to create the Open Container Project (OCP). By releasing the code for its container format and runtime to the open standards project, it signaled that it would work with the community. Docker will participate in a group that includes CoreOS and many other leading members of the Linux Foundation, working with a host of technology companies to develop a draft spec for the OCP. At the time of the conference, some 20 companies had pledged to participate, although Docker Inc. CTO Solomon Hykes said others were planning to join within the next few days.
Until last week, CoreOS was pushing its appc container format, as well as actively building a business based upon its technology for auto-updating servers, much as the Google Chrome browser updates itself, no manual intervention required.
The formation of OCP is intended to give customers more confidence in container technology. Both companies will pursue developing their own container capabilities, as will myriad others, such as VMware and Canonical, which developed LXD, a Linux container hypervisor.
We shared some of the concerns we heard from attendees about Docker maintaining its leadership position in containers after the formation of OCP, with Docker Inc. VP of Marketing Dave Messina. His response also appeared to address an observation made here last week, about Docker Inc.’s cleverness in proposing runC as soon as it did:
Docker’s platform, which covers the entire development lifecycle and makes distributed applications 100 percent portable, is not readily replicable, even when leveraging our container format and runtime. Now that the entire industry is aligned on a common standard, Docker is able to focus its efforts fully on its platform, which as of DockerCon, has become even more extensible with a dynamic plugin architecture and multi-host networking. Docker’s platform for distributed applications has been validated by organizations such as the GSA, GE, the Walt Disney Company, Orbitz and Lyft — companies that all attended DockerCon to share their experiences building powerful new applications on Docker.
The Ecosystem Has Some Questions
There’s doubt over whether Docker can survive the jump-start phase of its own business evolution — doubt that extends far more deeply into the ecosystem than we realized just a few days ago.
“Right or wrong, it’s just the state of where CIOs’ mindsets are: They want a big, proven vendor endorsing their Docker deployment,” said Gou Rao, co-founder of container-aware storage provider Portworx, in an interview with The New Stack.
When I asked Michael Miller, SUSE’s vice president of global alliances, flat out whether Docker should assume some leadership role over and above other members of the Open Container Project, he responded quite directly, “I would say no.
“I say this from an open source business development model point of view,” Miller added. “For a technology like Docker, or any other example, there needs to be a viable, sustainable, long-term community. There needs to be a project and a foundation, and there needs to be commitment from enough different contributors and vendors to make it viable over the long term. There’s that tricky point where the folks that started something — it’s kind of time to let go, let the bird out of the nest. How do you know whether it’s developed enough to take off or not? There’s a gamble there.”
There are also seeds of confusion and doubt that continue to be planted, including by developers who may have an interest in format disputes remaining unsettled, but who may not be so willing to go on the record just yet. Some developers and third-party supporters are questioning whether Docker Inc. will attempt to dictate terms for the new Open Container Project by effectively outlining the subjects of its future discussions and, in so doing, prescribing by default the formats of future container systems.
Last Sunday, we presented the three questions that Docker Inc. would need to address in order to make this year’s DockerCon a success. Without a doubt, the company tackled each of these questions head-on. Let’s take a look at how they went about it, and what the outcome may be.
1. How will Docker reconcile its stateless architecture with the stateful demands of the everyday world? For once, a big question facing a technology conference has effectively been settled there. There is no longer any concern among the major players in the containerization industry, nor among software developers who were paying attention, about the answer to this question:
A completely stateless architecture will never happen within the context of a whole container. This fact became obvious and irrefutable during Monday’s demonstration of Docker’s new plugins architecture, which brought together Weaveworks, ClusterHQ, and Glider Labs. Their demo showed how a container could utilize services from two plugins to migrate itself between two IP addresses in a network, and still maintain state — both its binding to databases, and the content of its currently served web pages.
The plugins architecture had to be completely rebuilt, at least once if not more, to accommodate this. Put another way, the pipeline was opened wider to facilitate state management. It’s the only way anything resembling live migration can be accomplished.
Services running within processes that happen to be containerized may certainly be stateless. But the diminishing importance of that virtue was made clear Monday by no less than Adrian Cockcroft, considered by many developers present to be the “father of microservices.”
“I think we need both techniques,” Cockcroft responded to my question during his Monday session. “The early production uses of Docker they built had a stateless frontend, and their backend databases weren’t containerized. Then we see systems that were doing number crunching for gaming engines, where you’ve got lots of Docker containers running there, and they didn’t really care that much, and it’s mostly stateless.
“But I think there are definitely stateless applications, and part of this maturation process and the tooling and the best practices is getting the stateful things to be in there, too,” Cockcroft continued.
The way Cassandra nodes work, he explained, was through a level of replication that enables one node to be lost and no data to be sacrificed. “They are ephemeral, but they hold state. There’s kind of that hybrid level in-between where you can build systems that have combinations of the two. But certainly, you will want to build systems which have state attached to them. We’ve seen a number of Docker plugins and companies, like ClusterHQ, where you can plug-in and manage your state.”
The architecture of containers and the network nodes that connect them is mostly settled now, he went on. Storage remains an open topic. ClusterHQ could make a dent there, but so could VMware, in his view. By next year, he predicts, this open topic may also be settled. Either way, statelessness will only be a factor in the most granular of contexts.
2. Will Docker present a secure virtual networking model for containers that’s strong enough to convince risk managers it’s worth the investment? This question remains open. Docker Inc. Security Lead Diogo Mónica tried convincing me that networked containers were inherently secure by design, because they connect to the outside world only when explicitly enabled to do so.
At DockerCon, Mónica did a demo of Notary, showing how the still quite nascent platform provides a secure environment for people to publish and verify content. Here is the video from the day one keynote. Mónica’s demo starts at about the 1:25:28 mark.
On Hacker News, Docker’s lead on the project, David Lawrence, said Notary provides cryptographic guarantees that the “base image you’re using did indeed get published by Ubuntu, or RedHat, or even me, and hasn’t been tampered with between their build system and you. It’s up to you whether you decide to trust those publishers.”
A respondent noted that a user could run a private registry, but that the “decision is orthogonal to the task of verifying what you’re installing, or of signing something that might be installed somewhere else.”
Here’s how Notary is described on GitHub:
We often rely on TLS to secure our communications with a web server which is inherently flawed, as any compromise of the server enables malicious content to be substituted for the legitimate content.
With Notary, publishers can sign their content offline using keys kept highly secure. Once the publisher is ready to make the content available, they can push their signed trusted collection to a Notary Server.
Consumers, having acquired the publisher’s public key through a secure channel, can then communicate with any notary server or (insecure) mirror, relying only on the publisher’s key to determine the validity and integrity of the received content.
Docker Inc. has yet to directly face the risk managers I’m discussing — the folks who perceive a container as something they use to bring lunch to work, and who estimate the risk factor of a technology by counting how many fires they have not yet seen it start. For too many enterprises, containerization will never even make the agenda until the question of security by design is settled.
The Notary team notes on GitHub that the project is still a work in progress, and is now inviting contributions and reviews from the security community: “Notary will need to go through a formal security review process before it should be used in production.”
Worth noting are some of the discussions already in the GitHub issues thread.
3. Does Docker Inc. plan to compete against other container models or “embrace and extend” them? Docker’s Solomon Hykes shaking hands with CoreOS’ Alex Polvi was a demonstration of both grace and moxie by Docker. If you looked closely, you’d also find a primordial form of the kind of swagger we saw over a quarter-century ago from an operating system maker that openly shook hands with a spreadsheet maker … in a move that ended the period of Lotus 1-2-3’s dominance. For now, Docker has kept its competitor at bay.
Business publications ran the story of the Open Container Project’s formation as a coming together of Microsoft, Google, Red Hat, IBM, Cisco and a number of other major names that would normally never do business together. Let’s be honest about this: Microsoft and Google have come together this week under the banner of OCP only to the same degree that Jeb Bush and Rand Paul have come together as candidates in the same political party. For all we know, there were two phone calls, at the close of both of which were the words, “Yeah, sure, whatever.” Since then, company representatives have made themselves available to signal their excitement, or their “super-excitement,” over yet another Linux Foundation initiative.
The absolute truth, at this point, is that OCP is an unknown quantity. If Docker shapes the structure and format of OCP the same way it exerted its influence in endowing OCP with Docker’s own runC universal runtime, then OCP will end up being a formalization of the movement that Docker Inc. started.
“I left the room feeling very impressed by this strategy,” stated CloudBees CEO Sacha Labourey, speaking with The New Stack. “But let’s make no mistake: At some point, Docker also has a P&L. Sun [Microsystems] didn’t necessarily do a good job of monetizing the J2EE layer. Everybody benefited from it, maybe, except Sun.”
It’s one thing for a company to enable an ecosystem, Labourey explained, but another one entirely for that company to thrive upon it. “My hope is that Docker can benefit from that as well, to monetize the ecosystem it put in place.”
“There’s no point to unnecessarily going around battling everybody,” said Portworx’ Gou Rao. “We’ve got to do the right thing, but we’ve got to do it in stages. Hey, this is cool, new technology that everybody’s going to agree to at some philosophical or academic level. But going in and telling these banks and Fortune 1000 companies, ‘You bought millions of dollars worth of equipment; let’s just throw that away!’ That’s not a practical standpoint. If you live in this world, you have to understand that it’s transitional. We’re not going to switch overnight. Let’s all agree on where the world needs to move to, and that’s this very programmatic, self-provisioned IT environment where stateless and stateful services are being deployed through the same set of tools. But let’s help you get there slowly.”
Docker did take some bold steps toward settling some big arguments. The Open Container Project, for now, is an agreement between vendors to listen to one another and perhaps agree upon something later. It is not yet an agreement to agree, but it is an agreement to discuss. It has had the immediate effect of neutralizing CoreOS’ outright opposition to Docker’s chosen evolutionary course for containers. And it has successfully injected a big chunk of Docker DNA into a standard with Linux Foundation backing that might otherwise have required discussion and consultation among competing participants.
In all, Docker Inc. has made significant progress this week, taking some magnanimous steps and some aggressive ones. By making OCP happen in the first place, it has made the spotlight shine upon containers and their emerging ecosystem, and has promoted it to a topic of everyday discussion among technologists, along with the next version of Android, the last version of Chrome, and the final numbered version of Windows.
That move was indeed a gamble, as SUSE’s Michael Miller put it. Whatever Docker Inc. gained must have come at some cost. All of a sudden, the stage is set for very familiar companies other than Docker and other than CoreOS to produce their own container systems, without owing Docker so much as a kiss goodbye. Don’t think for a moment that IBM and VMware and Red Hat and Microsoft all signed that agreement because they have some great ideas for Docker plugins.
Docker’s gamble may make containers permanently successful. By this time next year, we will know whether that success came at the cost of its own identity.
The New Stack’s Alex Williams contributed to this story.
Cisco, CoreOS, Docker, IBM, Red Hat and Weaveworks are sponsors of The New Stack.