When cloud computing first made its appearance, most people viewed it as a cost-reduction convenience. Soon, though, many organizations began to recognize its power to transform IT on a deeper level. Terms such as “cloud-native” and “cattle not pets” expressed the understanding that cloud-based IT required a fundamental mindset shift, away from treating infrastructure components as large, expensive, specialized, handcrafted, and slow-to-change artifacts.
Containers are taking this transformation to the next level. Docker has captured the industry’s imagination with breathtaking speed. It began in similar fashion to cloud, seeming to provide a more convenient solution to existing packaging and deployment problems. In reality, though, containers point the way towards an even more profound mindset shift than cloud.
While cloud computing changed how we manage “machines,” it didn’t change the basic things we managed. Containers, on the other hand, promise a world that transcends our attachment to traditional servers and operating systems altogether. They truly shift the emphasis to applications and application components.
In a testament to the rapidity of Docker’s ascent, the conversation has quickly shifted to its readiness for production enterprise use. Blog posts chronicling experiences running Docker in production duel with others detailing the ways in which it’s not yet viable. This binary argument misses the nature of technology adoption. The fact that a craft has proven itself seaworthy doesn’t obviate the need to figure out how to navigate the ocean with it.
Containers make many things possible, without necessarily accomplishing any of them by themselves. Almost immediately after the excitement of recognizing the power of containers, one begins the more laborious process of figuring out how to use them for practical purposes. Immediate issues include questions such as:
- How do containers communicate across operating system and network boundaries?
- What’s the best way to configure them and manage their lifecycles?
- How do you monitor them?
- How do you actually compose them into larger systems, and how do you manage those composite systems?
Various answers to these questions have begun to emerge. Packaging tools such as Packer bridge configuration automation with immutable infrastructure. Cluster management systems such as Kubernetes layer replication, health maintenance, and network management on top of raw containers. Platform-as-a-Service offerings such as Cloud Foundry and OpenShift are embracing containers within their own architectural models.
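To make the Kubernetes example concrete, the layering it describes can be sketched as a declarative manifest. Everything here is illustrative rather than drawn from the article: the names, labels, and image are hypothetical, and the sketch simply shows replication and health maintenance being expressed on top of raw containers.

```yaml
# Hypothetical Deployment sketch: Kubernetes keeps three replicas of a
# container running and restarts any replica that fails its health check.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3          # replication, layered above raw containers
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # illustrative container image
        ports:
        - containerPort: 80
        livenessProbe:         # health maintenance: restart on failure
          httpGet:
            path: /
            port: 80
```

The point of the sketch is that none of this logic lives in the container itself; the cluster manager supplies it declaratively.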
These higher-order systems answer some of the initial questions that arise while trying to deploy containers. They also, though, raise new questions of their own. Now, instead of asking how to manage and compose containers, one has to ask how to manage and compose the container management, deployment, and operations toolchain.
This process is a recursive one. At the moment, we can’t know where it will end. What does it mean, for example, to run Kubernetes on top of Mesos? Contemplating that question involves understanding and interrelating no fewer than three unfamiliar technologies and operating models.
More importantly, though, organizations are just beginning to contemplate how to integrate the container model into their enterprise architectures, organizations, and conceptual frameworks. This process will be a journey of its own. It will consist of a combination of adaptation and transformation.
The precise path and destination of that journey are both unknown, and will depend to a large degree on each organization’s individual history, capabilities, and style.
Docker is a sponsor of The New Stack.
Feature Image via Pixabay, licensed under CC0.
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: MADE, Docker.