Docker in the Production Environment: Successes, Frustrations and Lessons Learned
Docker’s rapid growth in 2014 ended with several key figures in the Docker community declaring that the container infrastructure had reached production-ready status. This year, analysts like Gartner have already flagged the security challenges of deploying distributed applications with Docker in the enterprise, yet remain fairly supportive of the overall direction Docker is heading. One month into the year, more than a handful of examples have arisen that are testing the true production-readiness of using containers for continuous integration and ongoing deployment.
Experiences range from believers confident that a distributed web app can be deployed at scale using Docker, to teams that have incorporated Docker into their production environment, to those who have chosen not to do so just yet, and to those who have rejected Docker as too complex or unstable for real-life use cases.
Here’s a look at four examples from this year, demonstrating how Docker is being considered for production environments:
Battlefy: Shipping New Features
A recent blog post by Software Engineer Jaime Bueza shows how startup Battlefy is using Docker with Jenkins to quickly build and push Docker images before deploying them to AWS Elastic Beanstalk when releasing new features or bug fixes on its eSports platform. In the last five months, Battlefy has grown from 100 to 400,000 visitors, in an industry expected to see global revenues rise by 24% this year and with an international user base already above 70 million.
Battlefy starts with a GitHub pull request for a feature or bug fix, which links to a JIRA ticket, and then draws on the beta tool Screener to detect DOM changes and screenshot differences on a per-build basis. Results are sent to the team’s Slack channel, and once reviewers give the code two thumbs-up emojis, Jenkins ships the new code to AWS S3, where Docker containers are used to build a pre-production environment. Following another round of Screener front-end testing in pre-production, Jenkins then automates the merging of the pull request into the master production environment.
Wary of being stuck with glitches in production, Battlefy uses AWS Elastic Beanstalk so that if the Docker images that are built, pushed and deployed turn out to be erroneous, it can quickly roll back to the previous version.
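The build-push-deploy-rollback loop described above can be sketched as a pair of shell functions. This is an illustration only, not Battlefy’s actual scripts: the image name, repository layout and use of the Elastic Beanstalk `eb` command-line tool are all assumptions.

```shell
# Hypothetical Jenkins build step; image name and tagging scheme are
# assumptions, not Battlefy's real configuration.
build_and_deploy() {
  tag="$1"                                # e.g. the short git commit SHA
  docker build -t "battlefy/web:$tag" .   # bake the release image
  docker push "battlefy/web:$tag"         # publish it to the registry
  eb deploy --label "$tag"                # hand off to Elastic Beanstalk
}

# Rolling back with Elastic Beanstalk is just redeploying a previously
# built application version by its label.
rollback_to() {
  eb deploy --version "$1"
}
```

Because every deploy is an immutable, labeled image plus application version, the rollback path is the same mechanism as the deploy path, which is what makes it fast.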
Iron.io: Applying Docker in a Microservices Context
Iron.io (makers of the IronMQ message queuing system and IronWorker, an asynchronous task processing tool) proudly see themselves as early adopters of Docker, and for them it makes perfect sense, given that they see microservices architecture becoming the standard model for runtime environments.
In a blog post last week, Director of Channels and Integrations Ivan Dwyer explained that Iron.io can avoid the serious production challenges of security, discovery and failure because Docker is integrated into its system at the container level:
“…we treat each task container as an ephemeral computing resource. Persistence, redundancy, availability – all the things we care so much about when building out our products at the service level, do not necessarily apply at the individual task container level. Our concern in that regard is essentially limited to ensuring runtime occurs when it’s supposed to, allowing us to be confident in our heavy use of Docker today.”
IronWorker keeps over 15 stacks of Docker images in block storage, providing language and library environments for running code. Customers write their code against only the libraries they need and upload it to Iron.io’s S3 file storage; the message queuing service then merges the base Docker image with the user’s code package in a new container, runs the process, and destroys the container.
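The merge-run-destroy lifecycle described above maps neatly onto the Docker CLI. The sketch below is a guess at the mechanics, not Iron.io’s implementation: the stack image name, mount point and entry command are all hypothetical.

```shell
# Hypothetical sketch of one ephemeral task run: a prebuilt language
# stack image is combined with the user's code package, run once, and
# thrown away.
run_task() {
  stack_image="$1"   # e.g. a base image with Ruby plus common libraries
  code_dir="$2"      # the user's unpacked code package from file storage
  # --rm removes the container as soon as the task process exits, so
  # each task gets a fresh environment and leaves nothing behind.
  docker run --rm -v "$code_dir":/task -w /task "$stack_image" ./run.sh
}
```

Treating each container as disposable is what lets Iron.io ignore persistence and redundancy at the task level, as Dwyer describes: those concerns live at the service level instead.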
Iron.io is working in a microservices context that is unavailable to many legacy enterprise production environments, which are nowhere near as composable as what Iron.io supports. But for newer application development environments, Iron.io can use Docker in production to help its end users manage costs and scale up processes as needed within their orchestration infrastructure.
Mikamai: Devshop Looks for Docker Deployments with OpsWorks
Developer Giovanni Intini from devshop Mikamai sums up a concern many experienced developers share about Docker: on the face of it, they love the idea and they love the potential. But they have also been around the block a few times, and are wary of adopting new technologies so fast that they end up pulling all-nighters or giving up a three-day weekend when deploying to production. That might have been fun as a new coder in their early twenties, but with a life outside of work in their thirties and beyond, the risk of adopting new tech in production environments is a more heavily weighted deciding factor.
All the same, Intini sees the potential of Docker, and since the cloud-based DevOps ecosystem has not yet matured sufficiently, he has built new open source projects to enable production deployment of Dockerized container services using established services like Amazon’s OpsWorks (which is not currently built to support Docker).
Intini’s application architecture requires a load balancer, a frontend webserver, HAProxy to avoid any downtime, application containers, Redis, PostgreSQL, cron, and async processing. He wanted to build the application as a scalable, Dockerized application. The problem was that, with his application running on the Amazon Web Services cloud, Docker wasn’t supported out of the box. In his blog post last week, Intini shares the code and processes he used to create a production-ready environment for scaling his application, which he now claims deploys with zero downtime.
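A stack like the one Intini describes can be wired together by hand with the Docker CLI of the time, using container links. The sketch below is illustrative only: the container names, images and ports are assumptions, not Mikamai’s actual OpsWorks recipes.

```shell
# Illustrative topology for an HAProxy-fronted Dockerized stack;
# names, images and ports are hypothetical.
start_stack() {
  docker run -d --name redis redis                 # cache / async queues
  docker run -d --name db postgres                 # primary datastore
  docker run -d --name app --link redis:redis --link db:db myapp
  # HAProxy fronts the app containers, so a new app container can be
  # started and swapped into the proxy's backend before the old one is
  # stopped -- the basis of a zero-downtime deploy.
  docker run -d --name lb -p 80:80 --link app:app haproxy
}
```

The point of the proxy layer is exactly the zero-downtime property Intini claims: traffic is drained from the old application container only once its replacement is up.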
XMLDirector: The Case Against Docker
Andreas Jung is project lead at XML Director, an XML content management system and workflow platform targeted at supporting enterprise XML environments with tools to convert publishing formats and manage document collections.
Two weeks ago, he wrote about how he tried to use Docker in a production environment: to put specific XML databases into containers so they could be installed and administered quickly; to put the Plone enterprise CMS application into a container so it could be used for demos of XML Director; and to put a variety of XML-specific databases into containers for testing how XML Director’s backend handles other XML database backends.
Jung was not impressed. He found typical builds were 5-10 times slower than using the shell, that several processes required restarting Docker, and that, since Docker creates multiple images and containers, some “fiddling around” is needed to remove leftover copies from the system after testing.
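The “fiddling around” Jung mentions typically comes down to pruning stopped containers and the dangling intermediate images that builds leave behind. With the Docker CLI of the era that was roughly a pair of commands, sketched here as a helper (command-substitution style and filters as commonly used, not Jung’s actual cleanup script):

```shell
# Remove exited containers, then the untagged "dangling" images that
# accumulate from repeated builds and test runs.
docker_cleanup() {
  docker rm $(docker ps -a -q --filter status=exited)
  docker rmi $(docker images -q --filter dangling=true)
}
```

That this housekeeping is manual at all is part of Jung’s complaint: the shell-based deployments he returned to leave no such residue.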
After trying to use Docker, Jung resigns himself to going back to “old-style deployments,” acknowledging that the theory and idea behind Docker is great, but concluding that “its architecture and implementation is a mess. Docker is completely unusable in production. It is unreliable, it is unpredictable, it is flaky.”
Production-Ready? It Depends
Docker has seen enormous growth and an expanding ecosystem, with uptake of the containerization system among financial institutions, media companies and other large-scale, global enterprise sectors. And while Docker’s container technology is rapidly coming to be considered the standard for building distributed applications, early adopters are finding it best suited to production use cases where they have already thought through how to build a microservices architecture for their application.