Tuesday at VMworld 2016 in Las Vegas, we were given a closer look at the architecture and operation of the latest edition of vSphere Integrated Containers (VIC) than we’ve ever seen before, and a sharper contrast against VMware’s “greenfield” container system, Photon Platform. Yet VIC is still not ready for prime time: attendees here were expecting general availability to be declared Tuesday, but that’s not what happened, even though the company’s next conference, in Barcelona, is only a few months away.
In an in-depth briefing Tuesday afternoon, Ben Corrie, the principal engineer of VMware’s first container project, Project Bonneville, and likely the progenitor of his company’s embrace of containerization, surprisingly portrayed his own product somewhat differently than it was presented on-stage. Despite what the diagrams show, and despite the language that executives and product managers have used to describe how VIC works, Corrie told us it is technically inaccurate to say it wraps a virtual machine-like sheath around Docker containers to make them compatible with the company’s vSphere virtualization platform.
“If you think about a Venn diagram, and on the one side you’ve got vSphere and on the other side you’ve got Docker,” explained Corrie, “there is this huge intersection in the middle of all the things that vSphere does that Docker also does: network configuration, storage management, application lifecycle management. Just about everything that vSphere does, there’s some kind of overlap.
“What VIC does is, it takes everything from vSphere and then tacks on whatever Docker bits we need in order to be able to provision Docker images as VMs into vSphere.”
In a standard containerized environment, network virtualization, storage virtualization, and control plane management are all handled inside the space of Linux, Corrie told attendees. These functions essentially comprise what one would find in a hypervisor. So when they’re virtualized, “what you end up with is a bunch of nested hypervisors,” he went on, “and you have to pick one and provision containers inside of it.
“But what’s happening when you provision these nested hypervisors is that we’re duplicating a lot of what is already there. And guess what, your nested hypervisor is nowhere near as good or as mature as your actual hypervisor. So we can actually throw a lot of this stuff away. If we get rid of the container [part of the Venn diagram] and replace it with a VM, we can throw all of this away and just use vSphere infrastructure.”
In the VIC environment, Docker appears to exist in its entirety, from the perspective of the developer. But in actuality, said Corrie, “Docker is a façade on vSphere. So from the Docker client, we can now control vSphere networks, we control vSphere storage. We spin up the Docker image; it spins up as a VM into vSphere. The endpoint that you’re communicating with is actually just a resource pool into a vSphere cluster. It’s not a VM; it’s a resource pool.”
Here is where Corrie’s explanation of the product, whose design he did, after all, originate, began to diverge from how it’s presented, a divergence which Corrie acknowledged. VIC presents containers as virtual machines, not in virtual machines, he stated plainly. The distinction is not trivial. While Docker uses images of libraries and other dependencies plucked from a registry to assemble the image of a container, VIC uses those same parts to produce the image of a virtual machine. Not a container in a wrapper made to look like a virtual machine, but a real VM, top-to-bottom.
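What this looks like from the developer’s side is the ordinary Docker remote API: the stock Docker client is simply pointed at VIC’s endpoint rather than at a local Docker daemon. A minimal sketch of that workflow, assuming a hypothetical endpoint address (the host name and port below are illustrative, not from VMware’s documentation):

```shell
# Point the stock Docker client at a VIC endpoint instead of a local
# daemon. The address is illustrative; in VIC, the thing answering this
# API is backed by a vSphere resource pool, not a Linux host.
export DOCKER_HOST=tcp://vch.example.com:2376

# These look like ordinary Docker commands, but per Corrie's description,
# each "container" provisioned this way comes up as a full VM in vSphere.
docker info
docker run -d nginx
```

The design point is that nothing changes in the client-side workflow; the substitution happens entirely behind the API endpoint.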
It’s not that vSphere is fooled into believing a container is a VM. Quite the opposite: The developer may be easily fooled into believing the engine she’s using is Docker.
So why have even the keynote presentations shown vSphere Integrated Containers wrapping “C” boxes inside bigger “VM” boxes, and distributing them amid other “VM” boxes without the “C’s” inside? I asked Ben Corrie directly.
“It’s very difficult to communicate effectively,” Corrie responded. “And you’re right, it is confusing. But the distinction is very clear; the difference is very clear. And actually, there’s a way to tell which is which.”
For a system in which a container is inside a VM, Corrie explained with the aid of his whiteboard, there’s a Docker daemon, cobbled together with network and storage abstractions. “You can’t have a Linux container in a VM without all these other things in there,” he said, “which is completely unnecessary. It would be a very wasteful model, just spinning up one container in a VM and just scaling that out.”
Calling one of the components inside a VIC container “Photon” doesn’t make things any less confusing, he added.
In a VIC container, there’s an agent that serves as a shell provider. But as Corrie made clear, besides the application itself, there is very little else that differentiates the image of a virtual machine from that of a Docker container. A VIC container’s packaging is said to be completely Docker compatible, but this is because the VIC system allows the Docker client to interpret it as a Docker container.
During the Tuesday keynotes, VMware Cloud Platform CTO Kit Colbert presented VIC as a deeper foundation for containers than Docker. “Enterprise container infrastructure gives you what you need,” Colbert said, “to run containerized applications in production with confidence.” While VIC gives developers a way to continue using the Docker client, from the IT operator’s perspective, “it’s just vSphere.”
Outside the keynote stage Tuesday, Paul Fazzone, VMware’s general manager for cloud-native applications, made a more direct comparison with Docker, for those organizations weighing the merits of the two on a one-to-one scale.
“What VMware gives us is the best way to run containers in a production environment today,” argued Fazzone, whose previous engineering experience had been on the NSX project. He noted that VIC utilizes its own Container Engine, which is not Docker Engine, nor does it use Docker’s runtime. This much, we’ve known, but then Fazzone went a few steps further, to demonstrate how both VIC and Photon Platform are effective delivery mechanisms for NSX.
While most of VIC’s open source components (for example, its new Admiral graphical management portal, and its new Harbor private container registry) are officially optional, said Fazzone, “the vSphere Container Engine is the mandatory component of the solution, in that it’s how we preserve, through our vSphere product, the operational model for IT.
“This should enable us to not only ensure the production characteristics of the applications you want to run in-house,” he continued, “but allow you to move these workloads quickly and easily onto the system from your dev environments.”
Fazzone clearly characterized VIC as the way vSphere will continue to be utilized in existing VMware customer environments where containerization is being introduced, over at least the next decade. Thereafter, he believes, an infrastructure much more like Photon Platform will take hold.
Yet in the meantime, he said, “most of the folks here at the show today are thinking about how they can tackle problems for their companies today.” While VIC addresses that side of VMware’s customer base, “Photon is a platform which we’re looking at the next ten to fifteen years, not just in terms of on-prem but also in terms of cross-cloud.”
The “Mischievous Little Brother”
Photon’s purpose, Fazzone explained, is to act as an abstraction for NSX and the Virtual SAN virtualized storage, collecting them together into a stack but exposing them into the container environment as a single layer, with a single API. So compute, network, storage, and security services that are actually provided by VMware’s customary infrastructure services, all appear inside the containerized sphere of influence, if you will, as “Photon.”
“Through this API, IT can partition infrastructure and create availability zones,” he went on. “Over time, this model will allow you to expand from private cloud environments into public cloud environments as well. But this API can be exposed to your development teams, so that they can go and pre-provision a pool of resources to a development team, to a tenant.”
Each development team in an organization (or, from a service provider perspective, each team among customers) can be provisioned as a separate tenant sharing the infrastructure with other tenants, although with projects that are specific to, and dedicated to, its own domain. Cloud Foundry, Kubernetes, and Mesos can be among those projects.
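Photon Controller’s open source CLI exposed this tenancy model directly. A hypothetical sketch of the flow Fazzone describes, with illustrative names, addresses, and quota values; the exact command flags varied across Photon Controller releases, so treat this as an assumption-laden outline rather than a reference:

```shell
# Point the CLI at a Photon Controller deployment (address illustrative).
photon target set http://photon.example.com:28080

# Carve out a tenant, grant it a resource ticket (a quota slice of the
# shared infrastructure), then create a project its teams deploy into.
photon tenant create dev-team-a
photon resource-ticket create --tenant dev-team-a --name gold \
    --limits "vm.memory 100 GB, vm 50 COUNT"
photon project create --tenant dev-team-a --resource-ticket gold \
    --name web-frontend --limits "vm.memory 100 GB, vm 50 COUNT"
```

Each tenant’s projects are isolated from other tenants while drawing on the same underlying compute, network, and storage pool, which is the multi-tenancy model Fazzone compares to how public cloud providers run their infrastructure.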
“Our goal is to make it very simple for IT organizations to provide that level of service to their development teams, but in a way that allows them to manage the infrastructure in a much more efficient way, much like the public cloud providers manage their infrastructure.”
I asked Fazzone whether the container format matters to Photon. He said it is VMware’s intent for Photon to support Kubernetes, Mesos, and at some point Docker Swarm. After Photon creates a tenant environment, a process inside that tenant will be able to request a Kubernetes cluster. At that point, the cluster should be able to be maintained as any Kubernetes user would expect.
Because VIC is not only based on vSphere but is vSphere, he told me, it utilizes vSphere’s existing DRS scheduler for infrastructure-level scheduling. Yet although Photon has been portrayed as a “true” container environment, Fazzone said Photon will also have resource scheduling of its own.
“We’re going to basically pair those schedulers together,” he said, “so you’ll be able to take advantage of the best of both worlds. We’re still working on some of the details, but you can think of Photon as the inquisitive little brother to vSphere. The inquisitive little brother gets into trouble every once in a while; it’s a little more mischievous. But it’ll allow us to explore a lot of areas and, over time, help customers come up with really strong production solutions to deploy these varieties of frameworks into production.”
In a number of families everywhere — perhaps you can relate to this — there’s often one sibling who seems to be getting into trouble all the time, and another who’s actually more clever, getting away with mischief while putting on an air of innocence. And then there are the parents, who have a hard time explaining either one.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.
Docker is a sponsor of The New Stack.