If a new stack is to take root in the modern enterprise, then something has to give. Not only must an old infrastructure make room for a new way of working, but the new stack must open itself up to the prospect of interoperability and co-existence with something that, at least in our frame of reference, is no longer new.
The first wave of virtualization involved taking workloads off unmanageable physical servers, transporting them onto virtual layers, and then pooling the resources beneath those layers to make virtual machines into devices the size of planets. Well, that was Stage One. Stage Two was moving these virtual machines onto a cloud platform layer designed for virtualization. Now, Stage Three involves retooling software to be purpose-built for virtualization, so that it “lives” in this new environment not as a refugee, but as a native.
That leaves us reconciling the new stack with the old one: the support structures and context of the services we’re devising for continuous deployment and stateless distribution.
The Move Back Home
Software-defined storage firm Nutanix introduced us to a part of the problem of making these stages interoperable that we hadn’t much considered, and that has rarely been discussed.
It has to do with the realization that “the cloud” is not supposed to be the final destination for all business software. Cloud platforms such as OpenStack enable mobility between virtual environments, certainly. But that mobility is supposed to be two-way, which means that if and when the new stack truly does deliver the efficiencies and optimizations it promises, it may very well become not only necessary but cost-effective for businesses to move their virtualized applications back home.
“I think the move from a VM to a container is a very unnatural, and maybe unnecessary, move,” said Howard Ting, Nutanix’ senior vice president of marketing, in an interview with The New Stack. “But the movement from a VMware hypervisor to a Hyper-V environment, and then to AWS — that’s a very real requirement.”
Nutanix’ line of work is virtualizing storage, such that virtual machine environments and Docker container environments can recognize hybridized pools of storage as though they were colossal iSCSI volumes. Last week, the company announced its new Acropolis Distributed Storage Fabric, which extends Nutanix’ existing file system to encompass Web-scale storage as well. Acropolis’ goal, the company says, is to “unify all workloads on a single infrastructure.”
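To make the storage side concrete: a volume exposed over iSCSI by such a fabric is consumed the same way whether the client is a VM host or a Docker host. Below is a minimal operational sketch, assuming a Linux host with the standard open-iscsi tools installed; the portal address, IQN, mount point, and device name are hypothetical placeholders, not Nutanix-specific values, and the commands require a live iSCSI target to do anything.

```shell
# Discover targets advertised by the storage fabric's portal
# (10.0.0.50 is a placeholder address).
sudo iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260

# Log in to one of the discovered targets (placeholder IQN).
sudo iscsiadm -m node -T iqn.2015-06.com.example:vol01 -p 10.0.0.50:3260 --login

# The target now appears as an ordinary block device; format and mount it.
# (/dev/sdb is assumed -- check dmesg or lsblk for the actual device.)
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/vol01
sudo mount /dev/sdb /mnt/vol01

# A container consumes the same volume via a bind mount,
# much as a VM would consume it as a virtual disk.
docker run --rm -v /mnt/vol01:/data alpine ls /data
```

The point of the sketch is that nothing in the consumption path is container-specific or VM-specific: both sides see a plain block device, which is what lets a single pool serve both kinds of workload.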
A New Scale for Scalability
Containers stand to make good use of Web-scale storage immediately, because part of the point of containers in the first place is to eliminate limits on scalability.
But having containers cohabit this new fabric with VMs does not mean converting VM workloads to match container workloads, or vice versa, or putting both under the management of a single overlord. Although VMware is on record as wanting this, Ting seems to be acknowledging that, in reality, the best we can practically accomplish is co-existence.
Such a co-existence will entail enabling customers to scale workloads back down from Web-scale into the local data center. It isn’t obvious, but applications that are purpose-built for virtual machines have a hard time with that.
Data centers tend to use AWS storage as their initial staging grounds for new software services, both during development and testing and during the launch to production. Once these services have matured, Ting said, customers look for ways to bring them back in-house. This mode of distribution is likely to stay the same for containerized workloads as well as VMs; in fact, with containers, the lifecycle may only accelerate.
“Right now, as you know, there’s no way to start it in AWS and then bring it back,” remarked Ting. One example he cites is Facebook games maker Zynga, which was believed to be Amazon’s single largest AWS customer early on, but then invested its windfall revenues in expanding its own data center. Zynga notoriously built its own “Z-cloud” infrastructure to pull its resources back in, but did so in the era before OpenStack. Some say that company is still struggling to reassemble itself and resume a normal business model.
To enable this come-and-go model of virtualization deployment, Nutanix has taken what may be a controversial step with Acropolis: the introduction of its own hypervisor.
“The hypervisor has become the new commodity,” explained Ting. “There are a lot of great options there now. Five years ago, you couldn’t say that. Hyper-V was in an immature state, KVM was lacking a lot of enterprise-type capabilities and was just emerging. But we believe that these are really very mature products that deliver most of the basic hypervisor-level services.”
Most hypervisors in use today, Ting noted, make full use of live migration and distributed resource scheduling. Acropolis’ hypervisor is an extension of open-source KVM. With it, he says, administrators no longer need to “bounce around” between admin consoles, such as between Nutanix’ Prism and VMware’s vCenter. Now, VMs can be deployed and storage can be managed through a single console.
This capability will undoubtedly be necessary, if it isn’t so already, with containers and microservices. Virtualization today is composed of channels largely cut by major vendors. Docker’s innovation is, in large measure, steering around those hypervisors and moving virtualization into the kernel. But unless containers want to spend the remainder of their lives stuck in the sandbox phase, they’ll need some way to escape their own shell and cohabit the world with the 2000s-model stack, the 1990s model, and probably even the 1980s model.
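That move into the kernel is visible on any Linux host: a container is ultimately a process given its own kernel namespaces (pid, mount, network and so on) rather than its own emulated hardware, and each process’s namespace membership is listed under /proc. A minimal, Linux-only sketch (the exact set of namespaces varies by kernel version):

```python
import os

# Every Linux process belongs to a set of kernel namespaces, listed as
# entries under /proc/<pid>/ns. Container runtimes like Docker isolate a
# process by giving it fresh entries here, instead of booting a guest OS
# under a hypervisor.
ns_dir = "/proc/self/ns"
namespaces = sorted(os.listdir(ns_dir))
print(namespaces)  # e.g. includes 'mnt', 'net', 'pid', 'uts', ...
```

Running this inside a container versus on the host shows different namespace identities for the same entry names, which is the whole isolation mechanism in miniature.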
Docker is a sponsor of The New Stack.