Now in Beta, Rancher Labs Runs Docker Natively in Production
Rancher Labs is releasing the beta of its management platform for running Docker containers in production. The release comes a week after the company raised a $10 million round from the Mayfield Fund. Rancher reflects a transitional period toward more purpose-built infrastructure, designed for a new application era that still has some way to go.
Rancher offers a way to manage Docker in production. Everything in Rancher is a container, and the platform is meant to make developing apps easier. Rancher is part of a growing trend to abstract away as much complexity as possible for the developer, as well as for the operations manager.

There are continuous integration platforms, such as Shippable; platforms such as Cloud Foundry are meant to serve enterprise customers; and Mesosphere is a data center operating system. CoreOS, a competitor to Rancher, auto-updates servers, while also offering services such as Flannel, an etcd-backed overlay network for containers. It integrates with Google Kubernetes, a container cluster manager.

Apcera has developed a service that, while at TechCrunch, I described as “an autonomous system that understands the notion of who you want to talk to and how it talks to you.” In other words, the technology knows the semantics of a database and can associate it with a policy. With its universal policy-driven platform, an app can be deployed and its governance and regulatory requirements go with it, to be edited or changed when needed.
App developers have to be aware of what they are running, said Sheng Liang, co-founder and CEO at Rancher Labs. They need to know what to take advantage of. The Docker platform will get a lot richer, and apps will dictate what infrastructure capabilities they need.
There is tremendous demand, with people using Docker in some form, but the infrastructure has not been transformed. In this transition from development and test to production, Docker offers a needed user experience. The way the command line works, the way Docker Hub works, the way the API works: it strikes a chord.
The Rancher beta is an open source software platform that provides infrastructure services for managing containers in production. It has several core components that cover networking, load balancing, storage management, service discovery, service management, resource management and native Docker support.
A deep part of Rancher’s core is its systems backplane, Liang said. It is a distributed backplane, which differs from the standard control planes that are required to be resilient. In legacy systems, there are often multiple control planes that need to be managed and coordinated. The Rancher team took a different approach, Liang said: it targets the requirements of Docker apps, which are a small set compared with general cloud or virtualized environments.
Here’s a breakdown of what Rancher is now offering, drawn from an email interview with Liang.
Networking

This is one of the technologies Rancher first developed, Liang said. Without some form of software-defined networking (SDN), the only way for containers on different hosts to communicate with each other is network address translation (NAT) and port forwarding. Service discovery, load balancing, and application blueprinting all become more difficult. For example, you not only have to discover which IP a service is on, but also what port it is mapped to. Rancher developed its own SDN technology based on building an IPsec-based overlay network. All communication is, of course, encrypted by default.
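To make the port-mapping problem concrete, here is a minimal Docker Compose sketch (v1 format, current at the time); the service names and images are illustrative, not taken from Rancher’s documentation:

```yaml
# Without an overlay network, a container that must be reachable from
# another host has to publish a port on the host's interface.
web:
  image: nginx
  ports:
    - "8080:80"   # callers need the host's IP *and* the mapped port 8080

# On an IPsec-based overlay like Rancher's, each container gets its own
# routable IP, so peers can reach port 80 directly, with traffic
# encrypted by default; no host port mapping is required.
api:
  image: nginx
```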
Container Load Balancing
Traditional load balancers — including load balancing services in infrastructure-as-a-service (IaaS) clouds — are not designed to work with containers. They direct traffic to VMs or hosts, but not to individual containers. Rancher’s load balancing service is designed to work with the native Rancher SDN, or with other SDN solutions integrated with Rancher, and directs traffic to individual containers. It is designed to work with container-based service discovery, a common way to construct microservices applications. Rancher’s container load balancer is itself implemented using containers. It is designed to be scalable and elastic, and to ramp quickly to meet bursting traffic needs.
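A Rancher load balancer is declared like any other service in a Compose file. The sketch below follows the pattern Rancher documented around this release, using the `rancher/load-balancer-service` image; treat the exact keys and image name as illustrative:

```yaml
# A load balancer that is itself a container, forwarding host port 80
# to the individual containers backing the linked "web" service.
lb:
  image: rancher/load-balancer-service
  ports:
    - "80:80"
  links:
    - web          # traffic is spread across web's containers

web:
  image: nginx
```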
Storage Management

Rancher storage is implemented using the latest Docker 1.7 storage plugin framework, and works seamlessly with native Docker CLI and Docker Compose templates. It implements incremental snapshot and backup features that make it possible for organizations to run stateful applications packaged as containers. Liang said storage for containers is a particularly exciting space, and the company will announce additional product plans and partnerships down the road.
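As a sketch, a stateful service could declare a volume handled by a Docker 1.7-style volume plugin directly in its Compose template; the `convoy` driver name (Rancher’s open source volume driver) and the options here are illustrative, not exact:

```yaml
# Hypothetical stateful service; the volume plugin, not the host,
# manages the volume's lifecycle, snapshots and backups.
db:
  image: postgres
  volume_driver: convoy
  volumes:
    - pgdata:/var/lib/postgresql/data
```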
Service Discovery

This used to be considered a very advanced application deployment technique used only by Internet-scale organizations like Netflix. Docker containers have made it possible to package small application components as containers. Until recently, however, many organizations had to integrate a variety of open source technologies like SmartStack, Registrator, HAProxy, Consul, and etcd, and spend a lot of development effort to make service discovery work.
Rancher’s approach is to offer service discovery that integrates with the rest of the Rancher infrastructure. Rancher follows the model of service discovery made popular by Docker Compose, where each service is identified by a name and is implemented by a group of containers running the same Docker image. Services are linked together using Docker links. Name-based service discovery requires a distributed domain name system (DNS) service. When a container starts, it is automatically registered by Rancher in the distributed DNS service so other containers can find it.
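A minimal Compose sketch of name-based discovery, assuming a hypothetical `my-app` image: linking `web` to `db` lets code inside `web`’s containers reach the database simply as `db`, with the distributed DNS service resolving the name as containers come and go:

```yaml
# "db" is a named service backed by containers running one image.
db:
  image: postgres

# "web" discovers the database by name via the link; no IPs or mapped
# ports appear in the application's configuration.
web:
  image: my-app    # hypothetical application image
  links:
    - db
```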
Service Management

In a large-scale microservices deployment, where a production application may involve dozens or hundreds of services running, upgrade is a huge challenge. It is practically impossible to deploy, test, and upgrade the entire application all at once.
Organizations like Google and Facebook started performing continuous delivery, where one service, or a small group of services, is upgraded at a time. Each upgraded service goes through an intermediate stage in which it is deployed into the real production environment and can use other services, but is not yet used by other services or handling real work. After the new services pass functional and performance tests, production workload is switched over to them. If a new service breaks, production workload can be switched back to the old services. The old services are deleted once the organization is confident the new services are reliable.
To support microservice upgrades, Rancher enables a user to clone an existing service. The cloned service can run a new version of the software and is still wired to all the other services the existing service depends on, so the user can test the cloned service in isolation. When the user is confident the cloned service works, production traffic can be switched from the existing service to the cloned service.
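In Compose terms, the clone-and-switch flow might look like the sketch below; the `my-app` image and its tags are hypothetical:

```yaml
# Existing service handling production traffic.
web:
  image: my-app:1.0
  links:
    - db

# Cloned service running the new version, wired to the same
# dependencies. It runs in the real environment but takes no production
# traffic until the operator switches over (and can be switched back).
web-v2:
  image: my-app:2.0
  links:
    - db
```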
Native Docker Support
Most infrastructure providers, including IaaS cloud operators, virtualization vendors like VMware, and physical server vendors, are developing a Docker Machine plugin so Docker users can programmatically provision Linux servers with Docker already running. By integrating with Docker Machine, Rancher users can select the cloud provider and provision resources on that cloud to run their Docker containers. Docker Machine enables Rancher to accomplish all this without having to do API integration with many cloud providers.
Rancher could support other container environments, but from a product perspective the company decided to focus on Docker, which has the most significant real user adoption. User feedback will determine whether Rancher supports other container technologies.