HashiCorp: New Tools to Manage Containers and VMs, So Where Does That Leave Pets and Cattle?

The road to containers has many turns for companies that have traditionally used virtual machines to manage their infrastructure. These companies are experimenting with containers: they know the benefits, but they want the security of a virtual machine environment until they find their footing and can take advantage of the density and light weight that come with immutable infrastructure.
What’s apparent is that this new form of infrastructure needs different types of tools, ones that can manage different environments effectively. A container used in test and development could, for example, also be used in production. But an approach that removes those checks and roadblocks carries real risks.
Still, there’s a useful comparison between how virtual machines and containers should be treated. It can be unsafe for developers’ VMware virtual machines to be carried through to production, and there’s a similar risk when containers are carried through. “We don’t recommend that developers build Docker containers that they use in production,” said Kevin Fishner, director of sales and marketing for HashiCorp, in an interview with The New Stack. “If a developer doesn’t realize there’s a vulnerability, and they build a Docker container with a wrong version, that’s extremely dangerous.”
Last week, HashiCorp released its Atlas ALM tool, which it categorizes as “infrastructure management” and which can be used in worlds where virtual machines and containers co-exist. HashiCorp already produces a tool called Packer, which creates VM images for deployment to the AWS public cloud, as well as to OpenStack private clouds and VMware vSphere, and which has recently been adapted to build Docker containers. Now, Atlas automates the use of Packer to produce containers that can then be monitored through Consul, HashiCorp’s service discovery tool, enabling hybrid environments of VMs and containers.
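To make that pipeline concrete, here is a minimal sketch of the kind of Packer template involved, using Packer’s Docker builder. It starts from a stock Ubuntu image, runs a shell provisioner, and tags the committed result. The base image, package, repository name and tag below are illustrative assumptions, not taken from HashiCorp’s documentation:

    {
      "builders": [
        {
          "type": "docker",
          "image": "ubuntu:14.04",
          "commit": true
        }
      ],
      "provisioners": [
        {
          "type": "shell",
          "inline": ["apt-get update", "apt-get install -y nginx"]
        }
      ],
      "post-processors": [
        {
          "type": "docker-tag",
          "repository": "example/webapp",
          "tag": "0.1.0"
        }
      ]
    }

The same template, with a different builder stanza, could emit an AWS or VMware image instead, which is what lets one artifact pipeline drive both VM and container builds.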
What Atlas does not do is utilize Packer to produce a persistent container that transcends lifecycle phases, and Fishner is an active opponent of such a course.
“We think that the golden image that you’re putting into production should be driven by an operator, someone who is essentially managing that,” he tells us. “However, if you do feel, as a developer, you need to make a change to a Docker container, you can certainly do that locally and then submit a pull request to change the Docker configuration. At which point, the operator team — or however you organize your company — can review that pull request.”
If that request is approved, Fishner goes on, then the container is rebuilt at the production level, and then phased back toward the developer level for consistency. While developers should be given the authority to build containers, he says, golden images of any kind of virtual machine should be centrally stored and managed by an independent operator. This way, when an API enters production, everyone from developers to customers can be assured they’re using the same functions.
A Matter of Scale
There are different schools of thought on this issue, brought face-to-face by the sudden rise of containerization. In systems built on Apache Mesos, the orchestration system inspired by Google’s Borg, old and new versions of containers co-exist in production. This is done on purpose, as Twitter engineer Bill Farner told us some months back, so that the behavior of new code can be examined carefully and updates can be rolled back if that behavior degrades.
These new environments come with various shades of complexity. With multiple versions of a container in play, for example, a developer could use Packer to produce them locally while keeping them safely within the development sandbox.
An unchecked development-to-production workflow for containers could lead to a situation where one active container utilizes a patched version of OpenSSL, and another an unpatched version, Fishner said. Only a properly managed deployment environment ensures that the most recently patched version is in use throughout the production phase.
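As a hedged illustration of Fishner’s point, a centrally managed Packer template can force the patched library into every image it builds. The provisioner fragment below assumes an Ubuntu 14.04 base, and the package names are for illustration only:

    "provisioners": [
      {
        "type": "shell",
        "inline": [
          "apt-get update",
          "apt-get install -y --only-upgrade openssl libssl1.0.0"
        ]
      }
    ]

Because every production container is rebuilt from that one template, none of them can drift onto an unpatched version of the library.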
Recently, Microsoft started using the pets versus cattle analogy, symbolizing the difference between how an administrator optimally treats a virtual machine image, and how she treats a container image. At DockerCon, advocates of a complete wave of change for the data center argued that administrators should learn to treat containers as ephemeral, that they should stop bestowing them with reverence and pet names, and instead see them as temporary delivery units for small quantities of functionality.
HashiCorp has a more contextual view of the ecosystem. It may make sense, for example, to keep the database running and treat it as a pet, while the application itself might be abstracted into multiple services and be more ephemeral, like cattle.
“We believe that VMs and containers should be treated the same,” says Fishner. “The way we’ve built our tools, we are completely infrastructure- as well as technology-agnostic, in the way you get your application from development code to running in production.”
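That agnosticism is easiest to see in Consul, where a service definition registers a health-checked service the same way whether the process behind it runs in a VM or a container. The sketch below is an assumption-laden example, with the service name, port and health endpoint invented for illustration:

    {
      "service": {
        "name": "web",
        "port": 8080,
        "check": {
          "http": "http://localhost:8080/health",
          "interval": "10s"
        }
      }
    }

Consul’s catalog records only the service and its health, not the packaging underneath it.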
The cattle versus pets analogy came up somewhat humorously at a dinner with a group of bankers that RedMonk’s James Governor recently wrote about. Governor’s view supports the position taken by HashiCorp.
Anyway I talked about microservices of course, and my theory that drawbridges are more important than moats. We also had fun talking about the cattle vs pets microservices distinction. While most cattle is somewhat disposable, not all of it is — think prize bulls…
“Additionally, we’re agnostic to the way you package your applications and deploy them to production,” Fishner said. “So if you want to be building VMs, whether those are Amazon or VMware or Google Cloud images, completely cool. If you want to be building containers, completely great. If you want to have a hybrid infrastructure, in terms of both containers and VMs — which is going to be the vast majority of people for this transition period — again, amazing, super-happy to support that.”
Here, we start to tread on some shaky ground. HashiCorp asserts that VMs and containers can be managed using the same tool set.
The immediate implication is that both classes of delivery vehicle should, at one level, be perceived as identical.
But that is not actually the implication we should draw, as HashiCorp CEO Mitchell Hashimoto told The New Stack.
“One of the benefits of Atlas is that you can hide a lot of the complexities that containers have. But if you go down that route — which is probably the correct route to get started with,” says Hashimoto, “you do treat your containers like VMs, but you don’t get all the benefits of containers.”
Atlas is not a container-centric solution, says Hashimoto, and that fact is the product suite’s key differentiator. He describes Atlas as presenting a management paradigm closer to what data centers, already experienced in managing VMs, expect to see. Meeting that expectation gives them an easier on-ramp onto infrastructure where containers co-exist with VMs.
“We made sure, when we were developing these tools, that they worked well with containers, so that whether companies are using VMs or containers, we could solve that problem. There are a lot of companies moving to containers right now for sure, and I think that’s probably the way of the future, but there are certainly a lot who are sticking with VMs for many of their core functions, and that needs to be accommodated as well. That’s what we’re trying to do.”
Barnyard Dance
I mentioned to HashiCorp’s Fishner that the emerging best practice for containerization, as outlined during the last DockerCon, involves a much finer granularity for containers than for VMs. A container may include something as simple as a single service, packaged with the minimum library code necessary to make that service functional wherever it’s transported. So the situation where two containers utilize mismatched OpenSSL libraries would be mitigated, I argued, by containing OpenSSL separately and networking it to the other containers simultaneously.
While Fishner conceded that focusing containers on individual functions may be a laudable goal, it’s not something he sees companies actually doing.
“It’s going to depend a lot on the corporate culture,” Fishner tells The New Stack, “where in these older, larger organizations, there’s no getting around having central control of building Docker containers, and making sure they’re following the right spec. I’d be shocked. In all our conversations, that’s never really a consideration.”
As CEO Hashimoto makes clear, immutable infrastructure is still one of the most laudable goals of containerized architectures, even if the route organizations take to that goal isn’t necessarily the most direct one.
“I think that microservices is an important trend. It all closely relates and ties to the goal of getting towards an immutable infrastructure,” says Hashimoto, “towards the bigger goal of deploying more quickly, deploying more safely, being able to roll back, things like that. Microservices are the way of the future for sure.”
The container management space is still rather new, and it’s sometimes difficult for participants in that space to pin down what they believe until they see it written down.
But Fishner did make this point quite clear: If an organization has no intention of switching to an all-containerized deployment environment in one fell swoop, then it will be easier for it to manage containers following the best practices of VMs than for it to alter its practices, and perhaps its culture, in order to manage VMs following the new standards of containers.
HashiCorp CEO Hashimoto echoes that sentiment.
“We certainly agree with the transition from pets to cattle,” he says. “I think there’s going to be some things that, for the near future, aren’t realistic, like treating databases as cattle — that isn’t a problem that’s going to be solved any time in the near future. But treating stateless things as cattle is very well-adopted, so I think that we try to accommodate both.”
Docker is a sponsor of The New Stack.
Feature image: “cow and dog” by Tom Maloney is licensed under CC BY-SA 2.0.