
How Docker Turbocharged Uber’s Deployments



No matter what you think of its politics, there’s no doubt Uber is synonymous with innovation, disrupting the transportation industry while leading the sharing economy. But the problem the fastest innovators have always faced—including Microsoft, Apple and Amazon—is that once you’ve started innovating and hit the ground running, you move so fast that you sometimes lose track of the bigger picture and start tripping over the buildup along the way.

That’s where Uber found itself at the start of this year, when software engineer Casper S. Jensen joined the compute platform team.

During the first day of DockerCon EU, Jensen opened his talk by explaining that while the Uber app may have an easy-to-use interface, it’s anything but a simple app. It is “actually a huge, huge thing,” and “the app is just the tip of the iceberg,” with countless features underneath. After all, Uber currently operates in 69 countries, each with its own marketing and regulations, running a million trips a day, with 4,000 employees using the platform.

Legacy Software Development Patterns

Jensen and the four other members of his team were all fairly new to Uber when they were looking for a solution to “a fair amount of frustration” that came from what they were working on.

This is how their development process was running just last winter:

  1. Write service RFC (Request for Comments)—Uber is a company based heavily on feedback. Before beginning anything new, they start with describing the architecture and reasoning behind a new service and then distribute it to the mailing list.
  2. Wait for feedback—like, “Have you heard about these guys doing the same thing elsewhere?”—focusing on catching mistakes early on.
  3. Do all necessary scaffolding by hand.
  4. Start developing your service.
  5. Wait for infrastructure team to write service scaffolding.
  6. Wait for IT to locate services.
  7. Wait for infrastructure team to provision services.
  8. Deploy to development servers and test.
  9. Deploy to production.
  10. Monitor and iterate.
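The bottleneck in this workflow is easier to see when the steps are laid out with their owners. The sketch below is purely illustrative; the step names come from the list above, while the data structure and owners are assumptions:

```python
# Illustrative sketch of the legacy pipeline described above.
# Step names come from the talk; owners and structure are hypothetical.
LEGACY_PIPELINE = [
    ("write service RFC", "service team"),
    ("wait for feedback", "mailing list"),
    ("do scaffolding by hand", "service team"),
    ("start developing", "service team"),
    ("write service scaffolding", "infrastructure team"),  # blocking hand-off
    ("locate services", "IT"),                              # blocking hand-off
    ("provision services", "infrastructure team"),          # blocking hand-off
    ("deploy to development and test", "service team"),
    ("deploy to production", "service team"),
    ("monitor and iterate", "service team"),
]

def handoff_steps(pipeline):
    """Steps owned by another team are the ones Jensen called
    'the really, really painful part' (days, sometimes weeks)."""
    return [step for step, team in pipeline
            if team in ("infrastructure team", "IT")]

print(handoff_steps(LEGACY_PIPELINE))
# → ['write service scaffolding', 'locate services', 'provision services']
```

The three cross-team hand-offs are exactly steps five through seven, which is where Jensen said the process stalled.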

He described steps five through seven as “the really, really painful part. These steps could easily take days and, in some cases, weeks.” Why is that? “It’s not because the steps are difficult, we had scripts for most of them,” involving only about ten lines of integration.

“It was fairly simple, but it didn’t scale because we only had a small limited set of people in the company who actually knew how to do that without breaking things,” Jensen said. This, combined with small mistakes—like putting slashes instead of dashes—dramatically slowed everything down.

In February 2015, an internal email went around setting the following objective:

 

Uber-to-Docker

Jensen said they wanted to:

  • Allow service owners a dedicated slice where they can install whatever they want, as long as it can’t affect other services.
  • Do it in a way where the platform team doesn’t have to care what runs inside that slice.

Something had to change without breaking a thing.

Uber’s own barriers to overcome

When a company’s infrastructure is growing this rapidly, certain restrictions come with it, including, as Jensen said, “somehow we had to do that when the rest of our team was racing.”

Uber requires 24/7 availability and uptime, with tons of localized features; as Jensen put it, “none of us have seen all of the stuff that’s Uber. We’re all seeing that small slice we work on.” He referenced features like UberPOOL, UberKITTENS, UberIceCream, and UberEATS, each team “adding new features like there’s no tomorrow.” Uber’s meteoric success is based on hypergrowth in all dimensions, including data centers, servers, and infrastructure, and they needed a solution that could sustain that growth.

“We want to have really easy processes and really easy infrastructure so that our feature developers can add things really quickly. One of the most important parts of that is the process of creating new services,” Jensen said. “We realized this meant Docker.” He said it was easy to decide Docker was the route for them because it “was really easy to explain, people had read about it, understood the simple concepts about it.” He said Docker was an easy sell to the dev community with all the momentum surrounding everyone’s favorite container.

Shipping container-sized growing pains

They said to themselves, “We can all write code, this should be easy, right? Two days and we’re done.” Not so much. While they made this decision back in February, it took them until mid-summer before they were using Docker.

Jensen explained that, with Docker, “Everything just changes a bit, we need to think about stuff differently.”

One of the biggest barriers to Docker adoption was Uber’s in-house cluster management system, uDeploy. It needed to continue doing rolling upgrades, but with support for automatic rollbacks. It has a number of triggers to say something is wrong, like failing health checks or Graphite metrics that suddenly go haywire. It also includes load tests and integration tests that roll back quickly if something bad is put out there. uDeploy handles:

  • 4,000 upgrades per week
  • 3,000 builds per week
  • 300 rollbacks per week
  • more than 600 managed services
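Jensen didn’t show uDeploy’s internals, but the kind of rollback trigger he describes—health checks plus metrics going haywire—can be sketched minimally. Everything below (names, fields, thresholds) is a hypothetical illustration, not uDeploy’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class DeployHealth:
    """Signals a system like uDeploy might watch during a rolling upgrade.
    All fields and thresholds here are hypothetical."""
    healthy_instances: int
    total_instances: int
    error_rate: float          # e.g. from a Graphite-style metric
    baseline_error_rate: float # same metric before the deploy started

def should_roll_back(h: DeployHealth) -> bool:
    # Trigger 1: health checks — too many instances failing.
    if h.healthy_instances < 0.9 * h.total_instances:
        return True
    # Trigger 2: a metric "going haywire" — error rate far above baseline.
    if h.error_rate > 5 * max(h.baseline_error_rate, 0.001):
        return True
    return False

# 17 of 20 instances healthy is below the 90% bar, so this deploy rolls back.
print(should_roll_back(DeployHealth(17, 20, 0.001, 0.001)))  # → True
```

At 300 rollbacks per week, the point is that checks like these fire automatically during the rolling upgrade, rather than waiting for a human to notice a bad deploy.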

There was simply no way to get rid of or phase out uDeploy, so instead the Uber team decided it should deploy both legacy services and Docker services.

“This also meant that I spent a lot of time going through this and, for every feature that we had, adding support for Docker services,” Jensen said. “When we are able to show standard out and standard error in uDeploy, we also have to do that in Docker.”

They rolled out Docker without much planning, which Jensen realized gave developers too much freedom at first. “It’s not like this,” he said, snapping his fingers. “You really need to rethink all of the parts of your infrastructure.”

Jensen said that if you plan ahead, really looking at your infrastructure and how containers play their small role in it, the end result with Docker will be much smoother, much better.

How Docker is driving the newly scalable Uber

Now Uber is about a third Dockerized but looking toward one hundred percent soon. Why? While the transition was painful, the end result was what they had hoped for, getting rid of their three greatest pain points that stifled continuous deployment. With Docker, they no longer had to:

  • Wait for the infrastructure team to write service scaffolding.
  • Wait for IT to locate services.
  • Wait for the infrastructure team to provision services.

Now, they do all the necessary scaffolding not by hand and not by copying from previous projects, but by using tools that contain the configuration and build files. He said that for standardized services, it’s smooth sailing: provisioning takes about ten minutes to do what previously took hours or days.
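Jensen didn’t describe Uber’s scaffolding tools themselves, but tool-driven scaffolding of this kind generally means expanding a small service descriptor into the configuration and build files a new service needs. The sketch below is a hypothetical illustration; every name, field, and file in it is an assumption:

```python
from pathlib import Path

# Hypothetical service descriptor; fields and values are illustrative only.
SERVICE = {"name": "trip-metrics", "port": 8080, "runtime": "python:3.11"}

# Template for the build file the tool would generate.
DOCKERFILE = """\
FROM {runtime}
WORKDIR /srv/{name}
COPY . .
EXPOSE {port}
CMD ["python", "-m", "{module}"]
"""

def scaffold(service: dict, root: Path) -> Path:
    """Expand a service descriptor into a ready-to-build service directory."""
    svc_dir = root / service["name"]
    svc_dir.mkdir(parents=True, exist_ok=True)
    module = service["name"].replace("-", "_")
    (svc_dir / "Dockerfile").write_text(
        DOCKERFILE.format(module=module, **service))
    return svc_dir
```

The design point is that the descriptor, not a human copying files from a previous project, is the source of truth—so a standardized service needs only its few descriptor fields filled in before provisioning can proceed.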

Beyond this process, Uber found that Docker removed team dependencies, offering more freedom because members were no longer tied to specific frameworks or specific versions. Framework and service owners are now able to experiment with new technologies and to manage their own environments.

Docker is a sponsor of The New Stack.


