DCHQ: Automating Container Deployment in Similar Ways to VMs

Enterprises are accustomed to deploying applications as images, but in a certain controlled, monitored, perhaps regulated, way. They are used to dealing with application lifecycles. Docker is a “movement” (which Arlo Guthrie once defined as 50 people a day spontaneously breaking into its theme song), but the established order of virtual machine management is clear. The docker push command is not overturning IT, but without a doubt a two-track change is happening: legacy infrastructure is here to stay, running the big systems, while on the front end there is a real shift to Docker and new forms of container orchestration.
Considering the changing dynamics in the enterprise, DCHQ is building a platform that provides governance, deployment automation and lifecycle management for container-based applications.
Using DCHQ, an admin can group the resources that containers may require during composition into access clusters, then assign provisioning permissions to those clusters. IT may then apply post-provisioning controls limiting developers’ access to the production domains in which containers are deployed, while still letting developers perform the monitoring, orchestration and updating processes to which Docker users have already grown accustomed, as well as diagnostics and continuous delivery using Jenkins CI.
Access Control
“When containers were introduced, for a lot of the developers who have existing skill sets, or ways of thinking about building applications, it was just a major shift that they couldn’t swallow quickly,” says Amjad Afanah, CEO and founder of container automation platform provider DCHQ. “What we are trying to do is help enterprises containerize applications with minimal effort.”
As Afanah explains, “The traditional IT shops still want to have IT or the central DevOps teams build the standard container images, and then have developers deploy their code on top of these images. This is where we introduce the bash script plugin framework, that allows you to customize containers at runtime and also post-provisioning.”
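To make the idea concrete, here is a minimal sketch of what such a plugin might amount to: an ordinary, parameterized bash script. The variable names, file paths and service name below are placeholders for illustration, not DCHQ’s actual plugin format.

    #!/bin/bash
    # Hypothetical post-provisioning plugin: a plain bash script that a
    # DCHQ-style platform could run inside a container at deployment time.
    # DB_HOST and DB_PORT are placeholder parameters that IT might expose
    # to developers instead of granting them access to the host itself.
    set -euo pipefail

    DB_HOST="${DB_HOST:-db.internal.example.com}"
    DB_PORT="${DB_PORT:-5432}"

    # Write the connection settings into the application's config file.
    printf 'db.host=%s\ndb.port=%s\n' "$DB_HOST" "$DB_PORT" \
      > /opt/app/config/database.properties

    # Restart the application so the new settings take effect.
    service app restart

Because it is just bash, both IT, which writes and vets the script, and developers, who supply the parameters, are working in an idiom they already know.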
It’s a simple enough concept: automate the container creation process while converting it into a series of steps roughly analogous to those taken to create VMs. Security control frameworks may then be applied to these steps in much the same way as before.
At its last VMworld conference, VMware discussed how it plans to implement the security features it says containers lack: by wrapping containers in VM frameworks and deploying them alongside other VMs in vSphere, where existing controls are already enforced.
If baking a cake were mainly about putting each ingredient into an individual cup and then setting all the cups inside a refrigerator stored inside an oven, the effect might be much the same.
True containerization is about the deployment of microservices that can scale up or down effectively. DCHQ’s approach to handling the “missing” security element seeks to maintain containers’ architecture and identity, leaving the microservices option open. The bridge between worlds, if you will, is made up of “plugins” that, in DCHQ’s case, are wrappers around bash scripts that both developers and IT professionals should recognize easily enough.
“The definition of ‘governance’ is very broad,” Afanah warns us, “and we don’t claim to offer everything around governance for containers. What we do offer is the access controls that are needed for IT to start enabling developers to basically unleash the agility of Docker, to accelerate app development through our platform.”
The typical Docker image already comes with a script attached: the Dockerfile, a set of simple instructions for how its associated container is to be composed. That script contains commands for requesting resources from a registry, such as Docker Hub, but more often an organization’s private registry.
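For illustration, here is a minimal Dockerfile of the kind that paragraph describes; the registry host and image names are placeholders, not any real organization’s:

    # Hypothetical Dockerfile; registry.example.com and the image names
    # are placeholders for an organization's private registry.
    FROM registry.example.com/base/java:8

    # Layer the developer's code on top of IT's approved base image.
    COPY target/app.jar /opt/app/app.jar
    CMD ["java", "-jar", "/opt/app/app.jar"]

The FROM line is the request in question: when the image is built, Docker pulls the named base image from the registry.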
In a conventional IT environment, such requests have controls attached to them. By “controls” in this context, I don’t mean segments of code, but rather processes which IT managers oversee, especially for implementing access controls to corporate resources. Scripts are generally executed with the privilege level of the user accessing them, and if that’s a developer, that privilege is usually very high with respect to the developer’s native domain. For such scripts to be useful to developers in production domains, IT may have to provide SSH access to a limited command line. For some organizations, that’s simply not permitted.
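The “limited command line” mentioned above can be arranged, for example, with OpenSSH’s forced-command mechanism; the command and key material in this sketch are placeholders:

    # Hypothetical entry in ~/.ssh/authorized_keys on a production host.
    # The developer's key is pinned to a single command: they can fetch
    # the application's logs, and nothing else.
    command="docker logs myapp",no-port-forwarding,no-pty ssh-rsa AAAAB3... dev@example.com
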
“If you’re able as a developer to do everything that you need to do with container deployment, without getting access to the actual host, then you shouldn’t need to worry about that,” says Afanah.
Change Without Disruption
As Amjad Afanah candidly admits, DCHQ is not yet spreading like wildfire.
“To be honest, early adopters are not like the typical enterprise IT shop that hasn’t attempted even virtualization,” he remarks. “Those are not sweet-spot customers for us, because you first have to sell them on Docker — or even virtualization in general — and then talk about how you can accelerate app development, eliminate shadow IT, and all that stuff. Typically, our customers are these pockets of innovation that you see in these very large enterprises. These guys already have some kind of virtualization running — either OpenStack or vSphere in most cases, CloudStack sometimes.
“What we’re trying to tell them is, you don’t have to have a complete adjustment or disruption of the way you’re doing things,” continues Afanah. “You can basically unlock the power of Docker using the existing solutions that you have for virtualization.”
It’s no secret that the fine art of containerization, upon which much of the foundation of The New Stack has been built, was conceived, produced, and delivered by developers, for developers. Even today, Docker’s demos feature a developer assembling a container from both original and gathered code, deploying it to a platform, and running it from there.
“There is no other purpose for computers than to compute your software,” said Docker Inc. CTO Solomon Hykes last June 22 at DockerCon.
Enterprises may take issue with that point of view. Of course, Docker, CoreOS, Weave, ClusterHQ, and essentially all the leading open source contributors know that the platform they would seek to replace is managed by the IT department — specifically, by people who, sources on the outside tell me, don’t read this publication, and probably haven’t even heard of it.
Some of those who have go so far as to equate using Docker with the practice of “shadow IT.” While that may sound like a drastic conclusion, keep in mind that in most organizations, “shadow IT” is the act of practicing information technology without a license from the information technology department. That fits the model of Docker’s deployment demos surprisingly well.
This is a serious problem — for containerization, for us, and maybe for you — and not just because some perceive containerization as a conspiracy against the IT department. Organizations have an obligation — perhaps voluntary, possibly legal — to remain in compliance with safety and security controls.
You can’t change all the technology in an organization the way you can change the entire menu of a restaurant. The best ideas for change are those that work within the system, not in spite of it, and still end up improving it.
CoreOS, Docker and Weaveworks are sponsors of The New Stack.
Feature image: “Ancona, Marche, Italy – Containers” by Gianni Del Bufalo is licensed under CC BY-SA 2.0.