Puppet wants to fold the Docker build process into its own IT automation tooling. The latest edition of the company’s flagship platform, Puppet Enterprise 2016.4, includes the ability to build Docker containers and automatically ship them into production environments.
The new edition of the software was unveiled at the company’s annual user conference, PuppetConf 2016. The software also features an integration with the Jenkins continuous integration and deployment tool, and new orchestration capabilities.
It’s pretty easy to get started with Docker, explains Puppet senior software engineer Gareth Rushgrove. He calls the Dockerfile an “80 percent solution” as a build tool, but says copy-and-paste is one of the most prevalent ways developers use it to reuse images.
“Then the question came up: What does that look like after you have 500 developers and several hundred services? After three years, do you wind up with something very fragmented and hard to manage?” he said.
Puppet Docker Image Build was built to provide a consistent way to build and deploy containers — it mirrors what the docker build command does, Rushgrove says, but is focused on scaling within larger organizations. You’d still build images in the same way, but use Puppet’s new tool instead of the docker build command.
The Puppet language is more composable and reusable than the Dockerfile, Rushgrove argued. It allows developers to create modules that can be reused and, along with Puppet’s testing tools, inserted into a continuous integration/continuous delivery (CI/CD) pipeline.
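To sketch the idea, the same image that a Dockerfile would describe imperatively can be expressed as an ordinary Puppet manifest and handed to the image build tool. The manifest below is purely illustrative — the package, file path, and content are invented for this example, and the exact command-line flags should be checked against the module’s own documentation:

```puppet
# manifests/init.pp — illustrative manifest describing the image's contents.
# The same reusable Puppet resources and modules used to manage servers
# can describe what goes into a container image.
package { 'nginx':
  ensure => installed,
}

file { '/var/www/html/index.html':
  ensure  => file,
  content => 'Hello from a Puppet-built image',
}
```

The build is then driven by Puppet rather than by `docker build` directly — roughly along the lines of `puppet docker build --image-name mycorp/nginx` (flag spelling here is an assumption). Because the manifest is made of standard Puppet resources, the same modules and testing tools can be shared across hundreds of services instead of copy-pasted between Dockerfiles.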
Puppet Enterprise 2016.4 provides deeper visibility into the cause of change across an organization’s infrastructure, compared to previous editions.
“People use Puppet to [state] the security and compliance policies they want to enforce and if anything changes, to let them know,” said Tim Zonca, Puppet vice president of product marketing.
“Normally we just say, ‘something changed, it’s out of policy. We’ll fix it.’ Now we’ll say, ‘Something changed, but it was intentional.’ Someone did this through Puppet. You can go in and see what they did that brought this out of policy. Or they changed something outside of Puppet. It might have been intentional or a malicious actor, but it was outside of Puppet and we fixed it. It helps differentiate between what’s going on within Puppet,” Zonca said.
That plays into its added orchestration capabilities. Due to poor visibility, traditional orchestration can lead to conflicting configurations or undocumented, ad-hoc changes. Now you can orchestrate phased deployments of change to a specific part of the infrastructure: segment infrastructure and applications based on any facts stored in Puppet, such as location, environment, or the configuration resources applied, and deploy changes only to those targeted segments.
“You can run canary deployments. Say, ‘I want to run this just on these 10 nodes. If it goes well, then I want to do that across 5,000 endpoints,’” Zonca said.
“You could say, ‘I want to deploy this just on my web servers running Linux. Look for this version of OpenSSL and nodes that have this, I need to update with this set of changes,’” he said. “It gives you really granular control including role-based control — this person can look at this portion of the infrastructure — and target really isolated parts of that infrastructure.”
The new version of Jenkins allows users to define their delivery pipelines as code, and the Puppet Jenkins Pipeline plugin allows you to combine that pipeline code with Puppet’s infrastructure code in a much more automated way, according to Zonca.
Teams can build continuous delivery pipelines in Jenkins, with Puppet Enterprise orchestrating all of the applications’ deployment tasks in that pipeline. Jenkins users can set up CI pipelines, then create and push Puppet orchestration jobs to specific applications or portions of the infrastructure. Now all of that can be done automatically.
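A hedged sketch of what such a pipeline might look like as a Jenkinsfile follows. The `puppet.credentials`, `puppet.codeDeploy`, and `puppet.job` step names reflect my understanding of the Puppet Enterprise pipeline plugin’s interface at the time; the credential ID, environment name, and application name are invented for illustration:

```groovy
// Jenkinsfile (scripted pipeline) — illustrative sketch only.
node {
    stage('Build and Test') {
        checkout scm
        sh 'make test'          // run the application's own test suite
    }
    stage('Deploy via Puppet') {
        // Authenticate to Puppet Enterprise with a stored access token
        // (the credential ID 'pe-access-token' is hypothetical).
        puppet.credentials 'pe-access-token'
        // Push the new Puppet code to the target environment.
        puppet.codeDeploy 'production'
        // Kick off a Puppet orchestration job scoped to one application.
        puppet.job 'production', application: 'MyWebApp'
    }
}
```

The design point the article makes is visible in the shape of the file: Jenkins owns the CI stages, while the deployment stage hands off to Puppet Enterprise’s orchestrator rather than scripting SSH loops by hand.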
The company also announced a plugin for VMware’s vRealize cloud management platform, which enables self-service provisioning of compliant infrastructure across on-prem and multi-cloud environments.
The Road Ahead
As production deployments of containers increase, configuration management vendors such as Puppet and Chef face the challenge of continuing to prove their relevance. And with Red Hat’s acquisition of Ansible last year, they also face that tool’s growing prominence.
Puppet’s own orchestrator, introduced last fall, extended the use of its Puppet Query Language (PQL), a syntax for asking questions of Puppet. PQL can describe services that span nodes, and can serve as the language to directly push out change and watch it happen, Zonca said. If something goes wrong, you can stop the rollout, troubleshoot it there, then continue.
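For illustration, a PQL query of the kind described — selecting nodes by facts Puppet already knows — might look roughly like the following. The entity and dotted fact syntax reflect my understanding of PQL; the specific fact names and values are assumptions, not taken from the article:

```
# Ask PuppetDB for Linux web servers (fact names are illustrative)
inventory { facts.role = "webserver" and facts.kernel = "Linux" }

# Narrow to nodes carrying a particular OpenSSL version
inventory { facts.role = "webserver" and facts.openssl_version = "1.0.1e" }
```

A query like this is what lets the orchestrator target a change at “just these 10 nodes” first, then widen the same query for the full rollout, rather than maintaining hand-curated host lists.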
The Docker integration technologies were the result of Puppet’s BlueShift project, which was launched in April as a “collaboration between internal Puppet engineering, our community, technology companies and their communities, to provide a way to leverage Puppet as the common language for providing and managing new technology,” Zonca previously told The New Stack.
It previously announced a module for Kubernetes that enables the orchestration environment to be managed with Puppet code; a module for managing Docker daemons and containers; and modules for Consul, HashiCorp’s open source tool for discovering services on networks. It is also working on modules for Mesos.
It’s working on ways to use Puppet with VMware’s Photon and other container-as-a-service platforms.
“A lot of this work is just showing where we’re going,” Rushgrove said of BlueShift.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.