Docker CEO: Integrating Old Apps Is a Big Deal
It came as no shock to anyone to hear Docker Inc. CEO Ben Golub headline a day of DockerCon keynotes last Wednesday by torching Gartner’s well-known, and perhaps well-worn, “bimodal IT” analogy. Data center operators have only one mode in mind, he told attendees: “fast.”
But in an evolution of his message, directed toward his company’s growing base of enterprise customers, Golub made some new — and perhaps startling — amendments. Recognizing that only a minority of an enterprise’s existing applications will ever be fully refactored for a cloud-native environment, he conceded that wrapping an old, monolithic application in a new, rapidly deployable container image does present immediate performance advantages — advantages that may justify an organization’s entire investment in containerization.
“What we’re seeing is that Docker and containers are not so much a discontinuity or a revolution,” stated Golub, “so much as they are part of enabling a journey… a journey that includes old apps and new apps. It includes old infrastructure and new infrastructure. And while we have plenty of customers who are using Docker to get started on things like [the Internet of Things] or microservices or big data or even machine learning, about half of our customers are starting their journey on the other side — they’re starting with their traditional apps. Now, that’s not where their journey is going to end.”
If a traditional app is “Dockerized” — a term we can assume to mean: removed from a first-generation virtual machine, wrapped in a container image without alteration, pushed to a registry, and deployed — its use of infrastructure may improve by anywhere between 50 and 500 percent, Golub claimed. No benchmark results were presented to back up those figures, though his basic point may be inferred simply from the elimination of the VM lifecycle management platforms that support such a monolithic application.
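The “Dockerize” workflow Golub describes — wrap the app unchanged, push to a registry, deploy — can be sketched as a minimal Dockerfile. This is an illustrative sketch only: the base image, the JAR file name, and the registry address are all hypothetical assumptions, not anything Docker Inc. prescribed.

```dockerfile
# Hypothetical sketch: containerize a legacy Java app without
# changing a line of its code. The app binary (legacy-app.jar)
# is copied in as-is; only the packaging around it changes.
FROM openjdk:8-jre
COPY legacy-app.jar /opt/app/legacy-app.jar
CMD ["java", "-jar", "/opt/app/legacy-app.jar"]
```

From there, the push-and-deploy steps would look something like `docker build -t registry.example.com/legacy-app:1.0 .`, `docker push registry.example.com/legacy-app:1.0`, and `docker run -d registry.example.com/legacy-app:1.0` on the target host (again, the registry hostname and tag are placeholders).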
Still, it’s clear from Golub’s talk that Docker Inc. is more willing today to accept that “Dockerizing” will be the entry point for roughly half of its customers’ “journeys” going forward.
“Taking a traditional app without changing a line of code,” said Golub, “you can move that application from an old machine to a new machine, or from an older version of an OS to a newer version of an OS, or from an older data center to a newer data center, or quite frankly, skip the newer data center and move directly to the cloud. And that’s a lot of great value that happens, again, without changing a single line of code.”
“If you’re in ops, you need things to not break.”— Solomon Hykes
It’s a clear evolution of Golub’s message from where it stood last year. At a February 2016 conference of CIOs convened by The Wall Street Journal, Golub led an open discussion with several CIOs about the aims and goals of containerization technology. There, according to a report from CIO Magazine, Golub made the case that containerization finally presented “a great catalyst” for moving to a cloud-native model for applications. But live polling conducted during his talk suggested that as many as three in five CIOs were not only skeptical of Golub’s value proposition, they weren’t even certain what it meant.
Golub may have taken those CIOs’ skepticism to heart and made a critical adjustment to his company’s strategy and message in response. Now, while Docker may arguably be part of a cloud-native ecosystem, it is being portrayed as part of a bigger ecosystem whose applications are native to something else entirely.
The CEO’s message was bolstered the day before by Docker Inc. CTO Solomon Hykes, who made the case that containerization is a process that involves operations personnel, not just developers. The move to containers won’t happen in enterprises unless the tools provide automation that delivers improvements right away.
“If you’re in ops, you need things to not break,” said Hykes. “You need to guarantee a secure, reliable, steady service no matter what… In a million ways, going to production is really hard. One way, in particular, it is extremely hard is security. Going to production securely is really, really hard. And it’s getting harder for a bunch of reasons. The first reason is, your systems now are distributed systems, and that adds a whole, extra layer of problems to worry about — a whole different area of security to think about.”
Hearing Hykes acknowledge that distributed systems security is an entirely different model from the IT that most organizations have come to know and rely upon was refreshing and, certainly for some, vindicating. But although Hykes at one point repeated his message about the container ecosystem — “If the ecosystem fails, we fail” — he also delegated the job of security to the part that may fail first.
“We’ve adopted from day one a security process that is security first,” he told attendees. “That’s a conscious decision because we don’t think Docker should take responsibility for securing your Linux subsystem. In fact, we don’t think any single company should take sole responsibility for that. Linux is too big, and it’s too important.”
Of course, with Docker expanding ever more deeply into the Windows realm, it’s certainly impossible for any one company to claim it could secure the entire containerized environment. From a distributed systems developer’s perspective, perhaps that’s obvious. But as Docker Inc. has learned at least once before, what’s obvious to developers or system operators may not have tripped the radars of many CIOs. Perhaps this is the part of the Docker message that needs to evolve next.