Editor’s Note: The following post originally appeared on the ActiveState blog under the headline: Docker and the Application Supply Chain Challenge. ActiveState is a sponsor of The New Stack. The post explains why we need to think of containers not as the end result but simply as a means for shipping code. Shipping containers are an apt metaphor for today’s new stack world, but in isolation containers are meaningless. The value lies in the ecosystem that supports the container as it moves from its origin to its final destination.
If you work in IT, you’d have to have been living under a rock not to be aware of the Docker craze. Under the radar twelve months ago, today Docker is being hailed as the solution for all things IT — particularly the intractable and perennial problem of how to achieve application portability across heterogeneous IT infrastructures.
At ActiveState, we’re strong supporters of Docker; in fact, we believe we were the first technology vendor to ship a product incorporating Docker in our Stackato 3.0 release. However, despite our enthusiasm for the technology, we feel it’s important for IT personnel to understand what Docker does — and does not — deliver. Our perspective on the shortcomings of Docker can be summed up in this phrase: Docker and the Application Supply Chain Challenge.
To understand this challenge it’s useful to examine the term Docker uses to describe its capability: container. This term has been used in IT for years to communicate the purpose of an operating system container: to provide a structured, isolated application execution environment that can be easily transported from place to place.
The term originates in the physical world: it refers to the shipping container, used to transport goods from one end of the earth to the other. These omnipresent containers enable great efficiency and low-cost shipping of goods, and today we take it for granted that goods can be quickly and safely shipped across land and sea.
However, the shipping container, as described in the fascinating history “The Box,” was, when first created, not very useful. Arriving at today’s seamless end-to-end carriage of shipping containers required change and innovation throughout the cargo supply chain:
Cranes
Behemoth cranes transfer containers from one location to another (e.g., from a ship to a loading dock). These cranes are enormously tall and powerful, making them capable of carrying the tens of thousands of pounds of cargo commonly held in a container. A few years ago, a ship carrying some cranes from China to Oakland, CA had to wait for low tide to pass under the Golden Gate bridge, which should give some sense of their size.
Railcars
Boxcars, which were used to transport individual cargo items loaded and unloaded by hand, were made obsolete by the move to containers, thereby requiring replacement by new types of flatcars suited to transport full-size containers.
Trucks and Trailers
To get individual containers to desired locations, it is necessary to truck them to a delivery spot. This required new trailers that could secure and carry a fully loaded container, which in turn required more powerful tractor trucks.
Container Ships
The efficiency and low cost of container shipping helped increase global trade, thereby requiring greater scale of transport — thus, the container ship. This increase in scale has continued to today, resulting in the move to so-called post-Panamax container ships, so large that the Panama Canal requires expansion just to handle them.
In summary, while the container was undoubtedly a tremendous innovation in shipping, it required innovation in the surrounding ecosystem in order for the cargo supply chain to achieve the full potential of container shipping. Absent the supporting mechanisms, the usefulness of the shipping container on its own was minuscule. With a richer ecosystem, the shipping container revolutionized the physical product supply chain.
Likewise, it’s important to understand that Docker is analogous to the shipping container (in fact, its logo pays obvious homage to the shipping container, as can be seen in the image below).
The current mania for Docker reflects its obvious promise: a portable application execution mechanism that starts extremely rapidly and requires far fewer resources than virtualization, the current favorite for portable execution, which demands a dedicated operating system for each instance.
However, just as the shipping container on its own failed to deliver all the benefits it potentially enabled, so too does Docker require a surrounding ecosystem of functionality so that Docker users can obtain all the benefits they desire.
What kind of surrounding functionality?
Here are some of the things we see as necessary for Docker to fulfill its potential as an application enabler. Put another way, these are the capabilities that must surround Docker for its users to obtain the full benefits of end-to-end application agility.
Tying Dev to Ops
As my colleague Phil Whelan wrote in his blog post of a few days ago, many enthusiasts of the DevOps approach focus primarily on the operations part of the equation and ignore or downplay the developer side of things. I would put it this way: the application lifecycle begins when a developer puts fingers to keyboard and ends when the application goes into production (actually, it extends beyond that — see the item below on application versioning). It’s critical that the application tools and processes are joined to the operations tools and processes. Docker provides the vehicle for this, but the surrounding tools and processes need to seamlessly enable migration of the Docker container across all groups and through all application steps.
Enabling consistent operational constructs across all deployment environments
Having a portable execution format doesn’t help if the surrounding lifecycle tools are tuned for a single execution environment. It’s vital that the development, deployment, and management tools work across current and potential future execution environments. Succinctly, if your application processes only work on AWS or in your custom Kubernetes infrastructure, you haven’t really achieved portability — you’re chained to a single environment, despite your portable execution format. The reality of enterprises is that they standardize — on multiples of everything (viz., all enterprises have “standardized” on both Oracle and SQL Server databases, but undoubtedly also have plenty of DB2 and MySQL around). Your application management system needs to be portable just as your application execution format is portable.
Support application version control
As I noted above, the application lifecycle actually extends beyond the release of the application into production — because every non-trivial application ends up being modified, improved, and extended. It’s critical that your Docker-based application support version changes of your application and that your execution environment be able to track and release new versions into production — and, by the way, be able to gracefully roll back to a previous version if problems are found with a new version. Docker can make this easier, but on its own has no versioning capability. The development and operations toolchain should provide this capability to surround use of the Docker execution environment.
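Docker image tags give you a natural way to label versions, but the record-keeping and rollback logic must come from the surrounding toolchain. Here is a minimal sketch, in plain Python with hypothetical image tags (this is not a real Docker API), of the track-and-roll-back behavior such a toolchain would implement:

```python
class ReleaseHistory:
    """Tracks which image tag is live in production and supports
    graceful rollback to the previously released version."""

    def __init__(self):
        self._history = []  # ordered list of deployed image tags

    def release(self, image_tag):
        """Record a new version going into production."""
        self._history.append(image_tag)
        return image_tag

    @property
    def current(self):
        """The tag currently serving production, if any."""
        return self._history[-1] if self._history else None

    def rollback(self):
        """Discard the latest release and return to the previous one."""
        if len(self._history) < 2:
            raise RuntimeError("no previous version to roll back to")
        self._history.pop()  # drop the faulty release
        return self.current

history = ReleaseHistory()
history.release("myapp:1.0")
history.release("myapp:1.1")
history.rollback()          # problems found in 1.1
print(history.current)      # → myapp:1.0
```

In a real toolchain, `release` and `rollback` would also stop and start the corresponding containers; the point is that the version ledger lives outside Docker itself.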
Support application A/B functionality testing
I just mentioned versioning, but it’s important to note that even the concept of versioning is morphing: applications are transitioning from rarely-changed instantiations of software packages to frequently-changed aggregations of component-based microservices and internally- and externally-served functionality services. As such, it’s vital that your application process be capable of pushing out frequent, small changes to a portion of the production environment, directing some amount of user traffic to the changed environment, and incrementally increasing the proportion of the environment running the updated code until the entire application is updated. Docker supports this by making it easy to run multiple execution environments side by side — say, 90% of the Docker containers running the old code and 10% running the new — but, again, Docker does not provide the traffic direction on its own. With the right surrounding lifecycle capability, Docker can be leveraged to enable A/B functionality testing, but this relies on other products’ capabilities to implement.
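The traffic-splitting piece that Docker leaves to the surrounding tools is straightforward to picture. A minimal sketch in Python, using hypothetical container names and a 90/10 split (a real router would sit in a load balancer, not application code):

```python
import random

def choose_backend(old_pool, new_pool, new_fraction=0.10):
    """Send roughly new_fraction of requests to containers running the
    updated code; the remainder go to containers running the old code."""
    pool = new_pool if random.random() < new_fraction else old_pool
    return random.choice(pool)

# Hypothetical fleet: nine containers run the old version, one canary
# container runs the new version.
old = ["old-container-%d" % i for i in range(9)]
new = ["new-container-0"]

hits = sum(choose_backend(old, new).startswith("new") for _ in range(10000))
# Over many requests, hits lands near 1,000 — about 10% of traffic.
```

Incrementally raising `new_fraction` (and growing `new_pool`) is exactly the gradual rollout the text describes; rolling back is just setting it to zero.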
As you might guess, ActiveState, an early enthusiast for Docker, provides the extra-Docker functionality called for above. We believe that Docker is going to revolutionize the application world, based on its high efficiency and execution format portability. Recognizing its promise — and limitations — we have integrated Docker into our Stackato PaaS offering to ensure users have necessary application lifecycle support so that they may obtain the full benefits available to a Docker-based application environment.
Named by Wired.com as one of the ten most influential people in cloud computing, Bernard Golden has extensive experience working with organizations to help them adopt and integrate cloud computing effectively. He will help ActiveState customers apply best practices and meet their goals as they leverage the cloud with Stackato. Prior to joining ActiveState, he was Senior Director, Cloud Computing Enterprise Solutions, for Dell Enstratius. Before joining Dell Enstratius, Bernard was CEO of HyperStratus, a cloud computing consultancy serving enterprises and service providers across the globe. Bernard acts as an advisor for organizations that leverage his cloud computing expertise to accelerate their success, such as Nirmata and the Cloud Network of Women (CloudNOW). He is the author or co-author of four books on virtualization, including Virtualization for Dummies, the most popular book on the topic ever published, and Amazon Web Services for Dummies. Bernard is also the cloud computing advisor for CIO Magazine, where his highly respected blog is read by tens of thousands of people each month. He is a highly regarded speaker and presents at conferences throughout the world.