While great for development, containers introduce a range of new configuration complexities when pushed into production, so argues IT automation software provider Chef. To this end, the company has been promoting a packaging format and associated runtime, called Habitat, that it claims will streamline the process of assembling and deploying distributed applications.
Habitat provides the easiest possible way to containerize applications for production usage, asserted Chef CEO Barry Crist during the company’s yearly ChefConf, held last month in Austin, Texas. It addresses a set of lifecycle problems that cannot be solved by orchestration tools alone, providing a degree of deployment and updating automation that may not require a radical reconfiguration of the infrastructure.
In packaging mode, Habitat bundles all the dependencies that an application would otherwise need from an OS, thereby eliminating the need for the OS itself (it works not only for containers, but for other formats as well — virtual machines, Mesosphere’s Data Center Operating System, RPMs, and bare metal). And the runtime automatically provides all the supporting infrastructure needed by the application, determining whether the app is built on Node.js, Java or some other environment, and automatically providing the latest versions of all the supporting libraries for that environment.
When it comes to containerization, the company wholeheartedly agrees with the practice of immutable infrastructure. But while the app should remain immutable, Habitat’s creators argue, the configuration supporting the app must remain flexible.
Chef introduced Habitat to the world at ChefConf last year; this year, the company launched a number of new tools to fit it into daily operations, including a set of templates to stand up popular applications such as Redis, as well as a Builder service to provide up-to-date versions of commonly used supporting libraries.
At ChefConf, one of the creators of the software, Jamie Winsor, demonstrated how to stand up a multi-tier Ruby-on-Rails application with just five lines of Bash code written for Habitat. Habitat detected that the app would require the latest version of Ruby-on-Rails, so it downloaded and installed all that software and all its dependencies. It also detected that the app would require PostgreSQL, and downloaded that too. The runtime ensured that all the components could find each other and the app was ready to run in under two minutes.
The Learning Cliff of Containers
“In development, containers seem really cool but when you try to deploy in production that’s when you get into this learning cliff,” noted Michael Ducy, Chef director of product marketing, during one of the sessions at the conference. There are issues to contend with involving security, networking, storage and more.
One of the largest issues is certainly vulnerability management. A recent North Carolina State University study found that images hosted on Docker Hub contained an average of 153 vulnerabilities per community image, and 76 per official image. Most had high-severity vulnerabilities.
A chief source of these vulnerabilities has been the OS components packaged within containers. This reliance on the OS was what Habitat was designed to circumvent.
“The operating system is the source of most of our problems,” Ducy noted, pointing out that 75 percent of containers hold a full OS. In many cases, these containers will not get updated once out in production. Even if a shop adheres to the philosophy of immutable containers, in which a running container is never patched in place but instead replaced wholesale, it may not have a workflow in place to trigger a container update should a new vulnerability pop up in one of its libraries.
“Are we using containers the way we are supposed to? Probably not,” he said. “It is super easy to pull a full OS when you don’t need it.”
Also not helping matters is the practice of “lift-and-shift,” in which legacy applications are packaged in containers (or virtual machines) with no additional engineering. In these cases, management of these apps has not been simplified; in fact, it has grown more complex thanks to the extra layer the containers add.
The Natural Habitat
Habitat is a package manager, process supervisor, and (most recently) build service, Winsor explained. It is friendly to declarative programming (to the point of even using Bash, the language every admin knows), and is API-driven. It is API first, in fact, Ducy insists. This is important for automating operations.
Habitat has two sets of APIs. One governs how you build your software artifacts. The other is for controlling the artifacts in production.
The idea is to be able to run a Habitat artifact on any platform. Habitat automates the cycle of installing and updating dependencies, as well as any dependencies those dependencies depend upon.
In Habitat, you define your application and the dependencies it needs; Habitat will walk the dependency tree for you. Then, once you build the artifact, you can export it. No environment variables are shared across applications. If a dependency is updated, Habitat will rebuild the entire package and have it ready to move into production, or whatever other channel it needs to be copied over to.
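A plan is just a file of Bash variable assignments. The fragment below is an illustrative sketch, not code from the conference; the origin and package names are placeholders, though core/ruby and core/openssl are real packages from Chef’s public depot:

```shell
# plan.sh -- illustrative sketch; "myorigin" and "myapp" are placeholders.
pkg_origin="myorigin"
pkg_name="myapp"
pkg_version="1.0.0"

# Runtime dependencies: Habitat resolves these, and everything they
# depend on, transitively, at build time.
pkg_deps=(core/ruby core/openssl)

# Build-only dependencies stay out of the final artifact.
pkg_build_deps=(core/gcc core/make)
```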
“If we rebuild OpenSSL, you will eventually have your application rebuilt, because it will see that OpenSSL was rebuilt and its dependents have been rebuilt,” Winsor said. “You come in in the morning and you see your app has been rebuilt. The upstream provider noticed a vulnerability and patched it. You just have to promote it to production.”
Habitat defines different environments as “channels,” and maintains the specific configuration information for each channel. “production” could be a channel, as could “dev” or “testing.”
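Promotion between channels is a single CLI operation. A hedged sketch, assuming the hab command-line tool is installed; the origin, package name and release timestamp below are placeholders:

```shell
# Promote a specific build (identified by its four-part ident) to the
# "stable" channel. The ident shown is a placeholder.
hab pkg promote myorigin/myapp/1.0.0/20170523120000 stable

# Supervisors loading from that channel pick up the new release on
# their next update check.
hab svc load myorigin/myapp --channel stable --strategy rolling
```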
“The configuration of something will change as you promote it through different environments,” Ducy explained. The Habitat package will remain immutable, though the configuration can be swapped out for different environments.
Contrast this approach to one seen in seasoned container deployments. “What you don’t want to have is container images for five different environments,” Ducy said, explaining that such would be the result of baking a lot of configuration info into the containers themselves.
When moved to the Habitat runtime environment, applications in effect become services. Each node in the runtime environment is run by a process supervisor, or simply a “supervisor.” All the supervisors gossip with one another. They are also reactive: when one supervisor connects to another, the two trade information about themselves.
Building a Habitat App
Packaging Habitat-based applications can be done from the command line within a development shell, called Studio, which runs on Mac, Linux and Windows. Studio is actually a stripped-down version of Linux, based on BusyBox. The installation is kept minimal so no accidental dependencies are loaded into the app. Users can download, install and link additional utilities, such as netstat, into this work area. Chef maintains an online public depot, from which up-to-date packages can be downloaded.
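A typical packaging session, sketched under the assumption that the hab tool is installed and a plan.sh sits in the current directory (the artifact filename shown is a placeholder):

```shell
# Enter the clean-room Studio environment.
hab studio enter

# Inside the Studio, build the plan in the current directory; the
# resulting .hart artifact lands in ./results.
build

# Optionally export the artifact as a Docker image (the filename is a
# placeholder for whatever build lands in ./results).
hab pkg export docker ./results/myorigin-myapp-1.0.0-20170523120000-x86_64-linux.hart
```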
In order to streamline application packaging, the company introduced a Habitat tool called Scaffolding, which abstracts what build plans look like for various languages and runtimes. It’s a bit like Heroku Buildpacks. Each plan is designed to automatically figure out what the source code in your repository requires to run. Node.js, Go, Python, Ruby on Rails, and Rust are all supported, or soon will be.
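With scaffolding, a plan can shrink to a handful of lines, because the scaffolding package inspects the application source (a Gemfile, package.json and so on) and supplies the build logic itself. An illustrative sketch with placeholder origin and name; core/scaffolding-ruby is the real Ruby scaffolding package:

```shell
# plan.sh -- scaffolding-based sketch; origin and name are placeholders.
pkg_origin="myorigin"
pkg_name="myrailsapp"
pkg_version="0.1.0"
pkg_scaffolding="core/scaffolding-ruby"
```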
“The goal is to work on one for every runtime you have,” Winsor said.
The Habitat runtime environment assumes you will be running multiple nodes, or servers. Each node gets a supervisor, which manages the apps, known as “services” in Habitat. Every service in a group within a ring can communicate with the others, but services don’t know about each other unless you bind them together. Binding is used for connecting to things like databases, and the connections are specified through port numbers. In each environment, the supervisors elect a leader to handle new updates.
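Binding has two halves: the consuming service’s plan declares what it needs, and the operator satisfies that bind at load time. A hedged sketch with placeholder names; pkg_binds and the --bind flag are real Habitat conventions:

```shell
# In the consumer's plan.sh, declare a named bind and the
# configuration keys it expects from the bound service.
pkg_binds=( [database]="port" )

# At load time, satisfy the bind by pointing it at a running
# service group ("postgresql.default" here).
hab svc load myorigin/myapp --bind database:postgresql.default
```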
For each app, you need to install a router and an API gateway, two components required for messaging and API connectivity, respectively. The router can be configured with a client port, a heartbeat port and other ports to aid in monitoring and workflow. Every server also exposes an HTTP gateway, which you can query to learn what services are running, the state of the ring, service group states, the configuration of each service and so on.
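Those queries are plain HTTP calls against the supervisor’s gateway, which listens on port 9631 by default. A sketch, assuming a supervisor is running locally:

```shell
# Query the local supervisor's HTTP gateway (default port 9631).
curl http://localhost:9631/services   # services running on this node
curl http://localhost:9631/census     # the supervisor's view of the ring
curl http://localhost:9631/butterfly  # raw gossip state
```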
In Habitat, the user can establish different channels to run services. A dev release can be “promoted” to different channels, such as production. The supervisors gossip with each other to maintain state information. You can’t load any services until you start a supervisor. Every service that starts joins a service group.
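The bootstrapping order described above can be sketched as commands; the peer address is a placeholder, and 9638 is Habitat’s default gossip port:

```shell
# Start a supervisor first, optionally joining an existing ring
# through a peer's gossip address (placeholder IP shown).
hab sup run --peer 10.0.0.5:9638 &

# Only then can services be loaded; each joins a service group,
# "redis.default" here unless another group is named.
hab svc load core/redis
hab svc load core/redis --group cache   # joins "redis.cache" instead
```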
Winsor calls this distributed ring approach “reactive,” meaning that it provides a very easy way to configure distributed apps in any manner you choose, simply by telling the ring to make it so. “The routers expose what I need, so I know when I build this API, they will fit that socket of the router,” Winsor said.
The supervisors bind the service to a service group and communicate the router information, which automatically gets embedded into the service.
One early user of Habitat is GE Digital’s Predix, a global cloud hosting tens of thousands of applications. In another presentation, Predix engineers described how Habitat vastly simplified many of the operations in deploying services.
When the engineers set out to build the cloud, they found that while there is plenty of software for running stateless services, not as much exists to support both stateless and stateful ones. So Predix created its own runtime, called Buffet, built from the Chef configuration management software. Currently, Buffet hosts three services: Redis, RabbitMQ and the ELK (Elasticsearch, Logstash and Kibana) search stack.
Currently, Predix does have an automated deployment mechanism for these services, though it is hampered by various complexities. One issue is the great number of dependencies needed just to support the software, such as runtimes for Java, Node.js and Python. If any of the source repositories go down, the build fails.
Service discovery also involves some trickery. A node must be registered with the Chef server before the configuration file can be written. It needs to be shipped out to all the machines, which then have to restart the services. “That’s the only way you can do [service discovery] in Chef,” said Amulya Sharma, GE Digital staff software engineer, during the ChefConf presentation. For distributed applications such as Redis, some custom logic has to be added to the Chef cookbook to establish the leader node for the app.
Configuration changes, such as an update to some software, must be written to the Chef Cookbook. Once a value has been changed, the service needs to be notified that a restart is required — all of which requires complex additions to the Cookbook.
When it was introduced, Habitat promised to simplify a lot of this work, Sharma noted. Configuration changes would be automatic and easier to stage. Package building could be much easier, as would building out an intelligent runtime environment, one that responds to the needs of a new service. Rolling updates could be staged more easily.
The team started using Habitat Plans to prepare the nodes, instead of using Chef Cookbooks, explained Deepak Sihag, GE Digital senior software engineer. It has prepared plans for Elasticsearch, Logstash and Kibana. Each of these components of the ELK stack needs to know where the others live in order to work together as a single service. With Habitat, the supervisors handle the configuration across these components automatically.
In this scenario, installing Elasticsearch on a machine requires only a single command. Before, the company needed 150 lines of cookbook code to do the same.
Chef Provisioning is still used in these deployments; it still works well to support “infrastructure-as-code,” Sihag said. But for configuration, Chef Cookbooks were swapped out for Habitat Plans. Integrating these components, formerly a manual process of finding servers and matching them with workloads, is now automated by Habitat supervisors. And where Chef Client was previously used to monitor services to ensure they remained in the Desired State Configuration (DSC), Habitat now automatically ensures each service runs the correct versions of all its software and dependencies.
Another advantage Habitat provides is faster load times, the engineers explained. Because Logstash must be connected to Elasticsearch, the deployment must be staggered so that Elasticsearch comes online before Logstash. With Habitat, the supervisor won’t let Logstash start until Elasticsearch is running as a service. This approach cut the total ELK deployment time from 10 minutes to seven minutes.
Habitat can also cut considerable time from disaster recovery operations. “Everything Habitat requires to run a service was within a particular defined space, which we could mount to an external device,” Sihag explained. When machines went down, the stateless environment could easily be replicated elsewhere.
Chef is a sponsor of The New Stack.
Feature image: Chef CTO Adam Jacob leading the Chef house band on-stage, rocking out to KISS (Keep It Simple, Stupid) infrastructure, at the ChefConf 2017 party.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.