Docker’s Future Is in Orchestration
Docker sold DotCloud to a German company today, shedding the last vestiges of its origins as a PaaS provider. The sale follows the acquisition of Orchard, the maker of Fig, a multi-host Docker orchestration technology. Docker already serves as a core technology for several new PaaS providers. But orchestrating containers for portability and communication across multiple hosts is what could prove the truly critical shift, changing the basic workflows of companies that for years have used virtual machines to move and orchestrate applications.
In a New Stack Analyst recording last month at DockerCon, there were clear signs that Docker and many other companies see orchestration as central to the future of container technology.
- The news of the Orchard acquisition reflects Docker’s still-young state and the pivotal role orchestration plays in the evolution of containers and their capabilities. It also shows that Docker is as much a technology provider as it is a potential competitor to the companies offering orchestration platforms.
- Still, there is a high degree of cooperation, given the complexities of moving from Docker’s single-host capabilities to a multi-host container model.
- CoreOS is a substrate: a lightweight Linux OS packaged with etcd, the open source distributed key-value store for configuration, and Fleet for managing scheduling. Docker containers run on top. Scheduling is the big issue here, and others are tackling it, too. Mesos is an orchestration platform that uses Apache Aurora as a scheduler. ZooKeeper is another Apache project that handles distributed configuration. Both are scheduling-related technologies that can be integrated with the CoreOS stack.
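To make the Fleet piece concrete: a Fleet unit is just a systemd unit file with an optional `[X-Fleet]` section for cluster placement. A minimal, illustrative unit (service and image names are hypothetical) that runs a containerized web server and refuses to co-locate with its replicas might look like:

```ini
[Unit]
Description=Example web service in a Docker container
After=docker.service
Requires=docker.service

[Service]
# Clean up any stale container before starting (leading "-" ignores errors)
ExecStartPre=-/usr/bin/docker kill web
ExecStartPre=-/usr/bin/docker rm web
ExecStart=/usr/bin/docker run --name web -p 80:80 nginx
ExecStop=/usr/bin/docker stop web

[X-Fleet]
# Fleet-specific placement hint: never schedule two of these on one machine
Conflicts=web@*.service
```

Fleet reads the `[X-Fleet]` section, picks a machine in the cluster that satisfies the constraints, and hands the rest of the unit to that machine’s systemd.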
- Consider the operating system market and it’s evident that new lightweight systems are immensely appealing to users, and that orchestration is critical when container services run at scale. Waiting for an OS provider to ship an update can take months or even years. CoreOS is attacking that soft part of the OS market with automated updating of the OS across distributed environments. Red Hat is adapting with its Project Atomic.
- The application ecosystem is evolving with these new orchestration technologies. Deis, for example, uses CoreOS for its PaaS, relying on it for abstraction and for the way app development can be managed. This software delivery pipeline is the biggest pain point companies face. Deis lets developers move code off the laptop and onto the network without involving operations. New Relic uses Docker in a similar way with its own homegrown PaaS, as a self-serve testing and staging environment. It even supports aspects of a self-serve production environment.
- There is a strong correlation with continuous delivery processes. Continuous deployment historically required smart engineers to build and run; now you don’t need to be a rock star. CI platforms such as Shippable, Drone, and Travis CI all have Docker projects underway. Jenkins has been the state of the art for years, and companies like CloudBees now face much more competition.
- Google’s interest in Docker comes from running containers at massive scale. And scheduling, again, is still the biggest challenge.
- CenturyLink Labs is building out a system for educating the public on how to use Docker. There is a huge gap in understanding. As Carlson said: “There’s a big difference between a ‘hello world’ approach and more sophisticated usage.”
- Docker developed Docker Hub as an homage to GitHub, making it a way for DevOps pros to use Docker effectively.
- One of the biggest missing pieces with PaaS has been the ability to work collaboratively. The social aspects of Docker and Docker Hub make it something unprecedented for IT.
- Docker is still a long way from simplicity, as inter-server communication illustrates. Multiple Docker hosts have not been able to communicate with one another; there has been no abstraction for orchestration, and it is still complex to see containers running across three environments, for example. Libswarm is a step in that direction. A company wanting to transfer containers between DigitalOcean and Google, across the wire, and scale up is an indication of what Docker seeks to do. That sort of portability and cross-cloud capability will become reality in short order.
- “Docker attacks the features that have been put together in some Rube Goldberg fashion and makes them workable and effective,” said Paul. “Why are we building all this complicated stuff? The goal: rewrite the plumbing.”
- The metaphor: the Internet has technical problems, but it is one of the most robust technologies out there. As a solution it is robust, perhaps the only technology that will be remembered 100 years from now. There should be tools that are easily used and that easily fix things without everything falling apart.
- Where Libswarm is strong is a good place to start. How do you define a Mesos? How do you define an etcd so they are composable? In most cases, the plumbing has been treated as a stream of bytes; it does not matter what it is, and the Internet just cares that it is a stream of bytes. So the challenge going forward is defining what a scheduler is and what interfaces it exposes, and what an effective key-value store is and what it exposes.
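The composability question above can be sketched in code. Here is a toy model (all class and method names are hypothetical, not from any of the projects named) of the minimal interfaces an etcd-like store and a Fleet- or Mesos-like scheduler might expose, with trivial in-memory implementations:

```python
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """Minimal surface an etcd-like store exposes: keys in, bytes out."""
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class Scheduler(ABC):
    """Minimal surface a scheduler exposes: a task and candidate hosts in,
    a placement decision out."""
    @abstractmethod
    def schedule(self, task: str, hosts: list[str]) -> str: ...

class InMemoryStore(KeyValueStore):
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class RoundRobinScheduler(Scheduler):
    """Toy policy: assign tasks to hosts in rotation."""
    def __init__(self):
        self._next = 0
    def schedule(self, task, hosts):
        host = hosts[self._next % len(hosts)]
        self._next += 1
        return host
```

The point of the sketch is that once the interfaces are this small, any backend implementing them is swappable, which is the composability the discussion is asking for.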
- It’s the microservices that matter. Docker takes them to another level and will help transition apps to a microservices approach. Docker Hub supports collaboration in the same way. It’s a Petri dish waiting for microservices to emerge.
- Let’s say you have WordPress, MySQL and other components running on a single host. The interactions are complex, and a simple change may have a broader effect on the overall services. With microservices, those interactions can be clearly defined, making the system far simpler to manage. A combination of microservices should be easier to manage than the tangle.
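The "broader effect" above can be made concrete with a small sketch: if the dependencies between services are explicit (as microservice boundaries force them to be), the blast radius of a change is computable. The service names and edges below are invented for illustration:

```python
# Toy dependency map: service -> services that depend on it.
# In a monolith these edges are implicit in shared state; with
# microservices they are explicit, so impact can be reasoned about.
deps = {
    "mysql": ["wordpress"],
    "cache": ["wordpress"],
    "wordpress": ["load-balancer"],
    "load-balancer": [],
}

def blast_radius(service: str) -> set[str]:
    """Return every service transitively affected by changing `service`."""
    affected, stack = set(), [service]
    while stack:
        current = stack.pop()
        for dependent in deps.get(current, []):
            if dependent not in affected:
                affected.add(dependent)
                stack.append(dependent)
    return affected
```

Here a change to `mysql` reaches `wordpress` and then `load-balancer`, while a change to `load-balancer` touches nothing else, which is exactly the kind of bounded impact the bullet describes.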
- Tracing is a big issue. How do you get insight into what is happening in these services to help the developer or admin understand problems in the system?
- Nagios used to be the most effective tool for monitoring. A transaction can be tracked through New Relic today, but if someone built a more generic model, it could trace any transaction across services. The next big wave will be monitoring related.
- Tiny microservices that could be traced inside containers would be beneficial, and monitoring itself is another microservice that could be added to an application.