ActiveState’s Bernard Golden recently suggested that, in the pets-vs.-cattle discussion, containers must be chickens, maturing faster and more efficiently. But New Relic suggests they might more accurately be considered bacteria.
Data collected from more than 300 New Relic customers in its private beta Docker-monitoring program, covering an aggregate of 40,000 to 60,000 containers daily, showed that a large percentage of containers had a lifespan of less than an hour, and that many lived just two minutes or less.
New Relic started using Docker internally about a year ago to launch a new service, and now uses it extensively. To improve its own offering, though, it needed to understand how customers outside the “unicorn” data centers were using the technology, explained Abner Germanow, senior director of solutions marketing at New Relic.
Its beta program began in May and it made Docker monitoring generally available last week.
“One of the takeaways from DockerCon is that a lot of companies are trying to build software faster and more consistently, and to iterate on that software very, very quickly. It’s a hard problem, but one that a lot of people are struggling with,” he said.
Among the things the company has learned from the beta, he said:
- A wide variety of companies, across industry verticals and sizes, are using containers. Some are just dabbling with the technology, while others are using it aggressively in production applications.
“It’s not just web-first companies. It’s major enterprises, mid-size companies — it’s pretty much across the map. We see it especially in companies that are trying to iterate very aggressively to change their customer experience, the relationships they have. They want to create new digital experiences on the Web, on mobile, in stores, in their manufacturing facilities. In that environment where you have to experiment a lot, Docker removes a lot of the complexity,” he said.
- The number of containers with lifetimes of less than two minutes indicates customers are building a net-new architecture, one fairly different from what people have been building on virtual machines.
- Which metrics customers want when running Docker in production is an ongoing conversation. If containers are being used as lightweight virtual machines, they can be monitored as hosts. If they are used not as VMs but as ephemeral entities, then performance metrics on the behavior of container types make more sense than traditional virtual machine monitoring.
And in monitoring containers lasting only a minute or two, looking at their performance in aggregate makes more sense than looking at each individually.
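The aggregate approach can be sketched in a few lines. This is a minimal, hypothetical example — the image names, CPU figures, and `aggregate_by_type` helper are all assumptions for illustration, not New Relic’s API:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-container samples: (image_name, cpu_percent).
# With containers living only a minute or two, individual container IDs
# churn too fast to chart, so metrics are rolled up by container type.
samples = [
    ("web-frontend", 12.0), ("web-frontend", 18.5),
    ("batch-worker", 74.0), ("batch-worker", 68.0),
    ("web-frontend", 15.5),
]

def aggregate_by_type(samples):
    """Group raw samples by image and report count and average CPU."""
    by_type = defaultdict(list)
    for image, cpu in samples:
        by_type[image].append(cpu)
    return {
        image: {"containers": len(cpus), "avg_cpu": mean(cpus)}
        for image, cpus in by_type.items()
    }

print(aggregate_by_type(samples))
```

Here a spike in `batch-worker` CPU shows up as a property of the container type, even though no single worker container lives long enough to monitor on its own.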
New Relic customer Motus estimates that the service has helped it reduce the time to investigate and fix problems with its Docker containers by 30 percent.
Formerly known as Corporate Reimbursement Services (CRS), Motus makes cloud-based mobile applications that help remote workers track their mileage and reimbursements. With just 85 employees, its dev and operations staff is small and needs all the help it can get to streamline its work, according to Scott Rankin, vice president of technology.
Over the past few years, it moved from PHP to Java, and in the past year to a microservices-based architecture built on Docker, which it began using internally about a year ago.
“Using Docker at first internally allowed our dev and QA teams to spin up environments composed of all these various services without spending a lot of time configuring and installing different pieces of software. … At the beginning of 2015, we started moving our staging and production environments to Docker as well,” he said.
Now it’s using Docker for about 99 percent of its applications in production.
“Early on, since this is such a rapidly evolving ecosystem, it wasn’t clear what all the best practices were. We had to invent a lot of that on our own,” he said. “We reworked things a couple of different times as standards come into play, as best practices come into play. It’s always the challenge of the early adopter: You get some of the benefits, but not all the things have been worked out.
“Orchestration is always a challenge. Docker started off being a great way to run an application in a container, but I think everyone has been trying to figure out, ‘OK, we’ve got one application running in a container, but how do we compose a suite of applications? How do we get those things to talk to each other?’
“So we started off by rolling our own dynamic environment service that we built using Grails to dynamically compose Docker applications. As we moved into our staging and production environments, we used the Mesosphere stack. In production now we use Apache Mesos, Marathon and Chronos to manage all those environments, and that’s working fantastically,” he said.
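The composition problem Rankin describes is what a Marathon app definition solves: a JSON document, POSTed to Marathon’s `/v2/apps` REST endpoint, that tells it which Docker image to run, with what resources, and how many instances to keep alive. The sketch below is hypothetical — the service name, image path, and resource figures are invented, not Motus’ actual configuration:

```python
import json

# Hypothetical Marathon app definition for a Dockerized microservice.
# The field layout follows Marathon's /v2/apps API; POSTing this JSON
# asks Marathon to launch and supervise the container on the Mesos cluster.
app_definition = {
    "id": "/mileage-api",   # assumed service name
    "cpus": 0.5,
    "mem": 256,
    "instances": 3,         # Marathon restarts containers to keep 3 running
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "registry.example.com/mileage-api:1.0",  # assumed image
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 8080, "hostPort": 0}],
        },
    },
}

payload = json.dumps(app_definition)
```

With definitions like this, “getting things to talk to each other” becomes Marathon’s job: it places the containers, maps their ports, and replaces any that die.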
Motus has been a New Relic customer for about four years, but before the service added Docker support, the company lost visibility into its containers.
“When we moved Docker into our production environment, things got a little confusing because we kind of lost the link. The applications could still report their information to New Relic, and the servers could still report theirs, but before New Relic had its Docker support, the link between the two was missing. That made it more challenging.
“We’d see an application performance issue and have to go hunt for the Docker container and which server it was running on. … [Now] if one of those instances is running not well, we have a much clearer path to seeing where in the stack that issue is happening.”
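The missing link Rankin describes — tying an application metric back to the container and host it ran on — can be sketched generically. The event shape and helper name below are assumptions for illustration, not New Relic’s actual data format:

```python
import socket

def tag_metric(name, value, container_id):
    """Attach container and host identity to an application metric so a
    slow transaction can be traced to the exact container and server."""
    return {
        "metric": name,
        "value": value,
        "container_id": container_id,  # e.g. supplied by the container runtime
        "host": socket.gethostname(),  # the server the container runs on
    }

# A hypothetical slow-transaction event, now carrying both identities.
event = tag_metric("response_time_ms", 412, "3f2a9c")
```

Once every application event carries both IDs, “hunting for the Docker container and which server it was running on” turns into a simple filter on the metric stream.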
He says there’s a misconception that only large enterprises with hundreds of thousands of nodes in production can benefit from Docker. But it’s important to choose the right tools.
“Especially now with all the tools coming online — what we’ve seen has been a complete easing of the pipeline of moving code from development through testing and into production,” he said.
ActiveState and Docker are sponsors of The New Stack.