At DockerCon Europe today, Docker, Inc. announced a new suite of open platform and orchestration services for continuous lifecycle application development and deployment at scale.
The launch also included the first of a set of accompanying open APIs aimed at helping ecosystem partners create products and services that align and integrate with the new Docker orchestration offerings. The remaining APIs, although in high demand from developers, are not due for several months, which may disappoint ecosystem partners who have already been waiting some time for the “plugin APIs” that will enable them to integrate their products with the Docker Engine.
The three new orchestration services announced today are meant to provide users with Docker application development, deployment and production environments that are managed for machine, cluster and distributed architecture environments.
Apps used to run on a few discrete containers, said CEO Ben Golub in an interview this week. Now developers are running multiple containers across large numbers of servers, and potentially across different data centers. That shift creates a deeper demand for orchestration, clustering, networking and storage capabilities that can be delivered in a portable manner.
Docker executives see the creation of these services as an essential response to the rapid growth of Docker’s containerized approach. Citing an ecosystem of 18,000 tools and 60,000 Dockerized apps, Docker said in its announcement that the rapid uptake of Docker in application environments is part of a “transformational shift” that has in turn created the need for adaptive orchestration capabilities. Development and DevOps environments have moved from “slow-moving monolithic applications” to “Dockerized distributed applications.” As a result, developers need to string together multi-container, multi-host application stacks that can work independently of the underlying hosting infrastructure. The new Docker orchestration services aim to provide this flexibility via a common user interface and tooling.
A Closer Look at the Orchestration Offerings
Docker Machine provides a single user interface that lets a developer prepare a host to run Dockerized distributed applications. From that interface, developers can provision Docker Engine onto any Linux host, whether bare metal, a virtual machine or a cloud instance. Docker says this cuts the provisioning process down to seconds.
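Based on the workflow Docker has demonstrated, a Machine session looks roughly like the following shell sketch. The driver, machine name and final command here are illustrative assumptions; the tool is in alpha and its exact syntax may differ.

```shell
# Provision a new host and install Docker Engine on it in one step
# (hypothetical VirtualBox driver; other drivers target cloud providers).
docker-machine create --driver virtualbox dev

# Configure the local Docker client to talk to the newly created host.
eval "$(docker-machine env dev)"

# Subsequent docker commands now run against the remote Engine.
docker run -d nginx
```

The point of the design is that the same client-side commands work regardless of whether the driver provisioned a local VM or a cloud instance.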
Docker Swarm automatically optimizes application workloads across a cluster of host machines in any sort of hybrid environment. Again through a single user interface, developers and DevOps teams can use Docker Swarm’s automatic clustering to optimize workloads, and can also set custom constraints to manage more complex workload placements.
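Drawing on the alpha-era demos, standing up a Swarm cluster looks roughly like this sketch. The token-based discovery, port numbers and the `constraint` placement filter shown are assumptions taken from the alpha release and may well change before general availability.

```shell
# Generate a cluster ID using Swarm's hosted discovery service.
TOKEN=$(docker run --rm swarm create)

# On each host in the cluster: register the local Engine with the cluster.
docker run -d swarm join --addr=<host_ip>:2375 token://$TOKEN

# On one host: start the Swarm manager. Clients talk to it
# exactly as they would to a single Docker Engine.
docker run -d -p 3375:2375 swarm manage token://$TOKEN

# A custom constraint pins this workload to SSD-backed hosts.
docker -H tcp://<manager_ip>:3375 run -d -e constraint:storage==ssd mysql
```

The last line illustrates the “custom constraints” mentioned above: the scheduler, not the developer, decides which qualifying host actually runs the container.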
Docker Compose enables orchestration across multiple containers. Database, web and load-balancer containers, for example, can all be assembled into a distributed application spanning multiple hosts. The orchestration is defined by expressing container dependencies in a YAML file and, again, managed via the Docker user interface.
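A minimal sketch of such a YAML file, assuming a two-tier app; the service names and images are illustrative, and the exact schema is still in alpha.

```yaml
# docker-compose.yml — a web container that depends on a database.
web:
  image: nginx
  links:
    - db        # expresses the dependency: db is started first
  ports:
    - "80:80"
db:
  image: postgres
```

A single command along the lines of `docker-compose up -d` would then start both containers in dependency order.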
Open APIs to Enable Ecosystem Partners to Create Complementary Products
Along with the launch of the services, Docker has also released an open API for Docker Machine, with APIs for Swarm and Compose expected in the first half of 2015.
The open APIs will be an essential way for Docker users to be able to customize their orchestration and use their own choice of ecosystem tools. Initially, the orchestration services come with Docker’s own default implementation (what Docker calls “batteries included”).
In the new year, when the APIs for Swarm and Compose become available, developers will be able to create alternative implementations to the defaults set by Docker (what Docker calls “batteries removable”). For Docker Swarm, for example, devs will be able to use ecosystem partner products that provide specific clustering or scheduling services based on the Swarm feature set. Ecosystem partners would use the Swarm API to create such products.
Similarly, the APIs will enable devs to use the orchestration services with a single container. Docker notes that while the majority of the community uses multiple containers, single-container use cases have tripled, and Docker aims to continue supporting them.
While all three orchestration services are in alpha release, ecosystem partners will be unable to create new tooling products and services for Swarm or Compose until the APIs become available.
But Docker does not have a strong history of supporting its ecosystem in developing complementary products via APIs, so the potential six-month wait for ecosystem providers to integrate the orchestration services’ feature sets into their tooling may be met with some angst.
“We heard initially that Docker would be an API-enabled system, but we are yet to see that. So we are interested in hearing how the APIs are going to be handled,” said Tutum CTO Fernando Mayo in the days prior to DockerCon Europe.
“If I have to go out of my way to make my alternative work within Docker, then it’s not a real alternative. It relegates an ecosystem solution to being a second-tier tool,” added Tutum CEO Borja Burgos. At the time, Burgos was talking about Docker’s move into storage and networking capabilities, but the comment is just as relevant to ecosystem partners’ capacity to create alternative approaches to Docker’s default orchestration configurations.
Should Docker Be The Orchestration Platform As Well?
Prior to DockerCon Europe, Lars Herrmann, senior director of strategy at Red Hat, acknowledged the need for more container orchestration services, but was unsure whether Docker should be the one to provide them.
In early November, Red Hat released a beta of Red Hat Enterprise Linux 7 Atomic Host to let developers run vendor-agnostic application containers. While it can run Docker containers, the release focused just as much on enabling Linux containers to be used more generally.
“Docker has been a great project, but Docker the company has a certain vision of where they want to take the technology, so in some ways we can certainly recognize what CoreOS have now done,” said Herrmann.
“Containers is as much about a core capability in the Linux operating system as it is about Docker making it easy to interact with Linux containers. Linux containers can operate without Docker, but Docker makes it easier, which was the disruptive moment eighteen months ago. Docker certainly helped speed up development workflows.
“At the time we made the decision to partner with Docker, Docker’s mission was to drive containers. Our work was about combining the disruptive potential of Docker with the robustness of Linux container services. Despite all this hype and excitement about Docker, we are still at the beginning of that journey. We have seen so much innovation in the last six months around orchestrating complex applications in hybrid environments. There is a lot of complexity involved in making many pieces talk to each other.
“We are advocating for a more modular, loose-coupling of best in breed services that can be integrated, without having to drive every single capability into a single code base and ultimately a single binary,” Herrmann said.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Enable, Docker.