
Docker Gets an Alternative Orchestrator to Swarm in Tutum Acquisition

21 Oct 2015 3:49pm

One of the most prominent messages from DockerCon attendees last June was that they needed clear paths for their containers, from the development and testing phases through to production.

Docker got the word. On Wednesday, the company took a bold step toward fulfilling that request, acquiring Tutum, a small Madrid- and New York-based container support firm whose deployment and management service could help address those user requirements.

“Ultimately, the vision for us was to get any application running anywhere,” said Tutum CEO Borja Burgos-Galindo, speaking with The New Stack. Like Docker Inc. itself, Tutum built a business around deploying containers on its own infrastructure, but then Tutum quickly repositioned itself as a deployment manager for multiple infrastructures, including cloud platforms.

The New Node

“All of our technology stack from day one was developed around Docker technology, mostly because it makes things easier,” said Burgos-Galindo. “So when we transitioned to letting people bring their own infrastructure and deploying any application on that infrastructure, the fact that we were Docker-native made things much easier.”

In describing its deployment and management platform to date, Tutum has used the "O" word, orchestration, as its key offering, distinguishing its service from both the basic, scripted Docker Swarm and the live, performance-oriented Kubernetes and Mesosphere. But Tutum presents its services more in the manner an Amazon subscriber or GitHub user would expect: a less dazzling, but possibly more direct, menu-based deployment workflow run from the browser. Scripts are involved, as is the occasional terminal window, but with plenty of step-by-step guidance.


Tutum's orchestration model actually hides much of the container-based aspect of deployment. It treats the object being deployed as a service, regardless of how many containers that service comprises. This approach makes scalability less of an issue for the deployer: Tutum's job is to evaluate the rules for the available infrastructure providers, pulling images from Docker Hub or from private registries, and determine the most reasonable deployment approach for a given service at a given time. It could conceivably split containers among infrastructure providers as necessary.

When adopting a Tutum frame of mind, it’s important to think about what you, the container deployer, are publishing as the service, rather than what the infrastructure is providing you. In Tutum’s vernacular, a server or group of servers ready for service deployment on the same infrastructure is a node. A data center with many nodes may be addressed as a node cluster. Docker Swarm also uses the term “node” to refer to a single server resource, and it refers to a pool of such resources as a “swarm.”

[Image: The Tutum dashboard]

The Tutum model may actually be a bit less virtualized than the Swarm model, in that Tutum makes strategic decisions about the best mix of deployments at deployment time, rather than assuming the swarm pre-exists. When containers stop or go down for whatever reason, Tutum utilizes auto-restart and auto-destroy rules to determine how best to handle these situations — again, as they happen.
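To give a sense of how such rules might be expressed, here is a minimal sketch in the Compose-style YAML Tutum used for service definitions; the autorestart and autodestroy keys and their values are illustrative assumptions, not details taken from this article:

```yaml
# Hypothetical Tutum service definition; the policy keys and
# values below are assumptions for illustration.
web:
  image: tutum/hello-world
  autorestart: ON_FAILURE  # recreate the container if it exits with an error
  autodestroy: OFF         # keep terminated containers around for inspection
```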

“We abstracted a way of provisioning; we abstracted a way for application containerization using Docker,” said Burgos-Galindo, “and then we automated the process from end-to-end.”

The New End-to-End

This spring, Docker Inc. took a "let the customer decide" approach to the question of whether containerized applications should be stateless by design. Now it is acquiring a firm that, while very friendly to Docker from the beginning, has openly professed that such applications should indeed be stateless, in pursuit of immutable infrastructure.

As Tutum and others define this, immutability is the ideal that services should remain stable once deployed, and never be upgraded in-place. Services and the containers that provide them will fail, noted Tutum CTO Fernando Mayo in a recent blog post. You simply replace them as failures happen, Mayo advised, with the same or newer container images, on whatever infrastructure is most viable at the time, then use the Tutum platform to adjust and balance traffic.

“Once the redeploy process is started,” wrote Mayo, “Tutum will replace existing containers with new ones, one-by-one. We provide a tutum/haproxy [load balancing] image that is automatically configured based on its linked containers. Whether you deploy it locally using Docker links, or launched inside Tutum, it will automatically reconfigure itself when linked services scale up or down, or get redeployed.”
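Launched inside Tutum, the pattern Mayo describes might look something like the following stack snippet. The service names, the links list, and the roles grant are illustrative assumptions in the Compose-style format Tutum used, not details from his post:

```yaml
# Sketch of a load-balanced stack fronted by tutum/haproxy.
web:
  image: tutum/hello-world
  target_num_containers: 2  # scale the backend; the proxy rebalances as this changes
lb:
  image: tutum/haproxy
  links:
    - web                   # haproxy discovers its backends through this link
  ports:
    - "80:80"
  roles:
    - global                # assumed: grants API access so haproxy can reconfigure itself
```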

Tutum also maintains the notion of a set of interrelated services, which (naturally) is called a stack. With Tutum, you specify the components of a stack and the interrelationships between them using a YAML file, which Tutum can produce automatically as you drag-and-drop services onto the stack (not unlike uploading files to Dropbox).
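As a rough sketch, a two-service stack file in that Compose-style YAML might look like this; the image names and the target_num_containers key are illustrative assumptions:

```yaml
# Hypothetical stack: a web tier linked to a Redis cache.
web:
  image: example/my-web-app:latest  # hypothetical image
  links:
    - cache                         # declares the interrelationship between services
  ports:
    - "80:80"
  target_num_containers: 3          # ask Tutum to run three containers for this service
cache:
  image: redis
```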

Docker's principal cast of characters since last February has included Docker Machine, Compose, and Swarm, all of which work much the same way on a developer's laptop as they do in a massive data center. That has been to Docker's advantage up to now, but just because a bicycle can climb a hill does not mean it is the best tool for steep mountain terrain.


In speaking with The New Stack, Tutum CEO Burgos-Galindo (whose role will likely shift, now that he and his firm are a part of Docker) advocated for an all-terrain approach to Docker container management, one that looks a lot less like Docker’s earlier model, and much more like Tutum.

“We wanted to enable developers who fell in love with Docker in the early days, and had something up and running on their laptops, to be able to take that on to a more production-type cloud deployment,” Burgos-Galindo said. “If they have to change the way they work to go from development to the production environment that’s running in the cloud, from the laptop to that node running on Amazon, then we’ve failed. We [would be] asking them to re-evaluate how they work, to go from development to production.

“So what we’re advocating here is portability. We adhere to other ways of doing things, and made it trivially easy to use the same compose map that you’re using for your development environment, and tweak it evenly to deploy to production.”

Docker is a sponsor of The New Stack.

Feature image: Lotus Illinois railroad tracks, by Dual Freq. Licensed under Creative Commons Attribution-Share Alike 3.0 Unported.

