VMware built the foundation of its business on making workload portability viable. But in the last few years, the concept of a “workload” has been refined. The part of a virtual machine that behaves as though it were a machine introduces too much overhead. Without it, a workload may be easier to maintain — assuming we can get our container technologies onto the same page.
What is VMware’s interest in accelerating that future state of affairs? When, in the previous decade, Microsoft faced the reality of the world’s everyday computing tasks moving to smaller, more versatile platforms, it responded by working to make smartphones work and look like PCs. Twice, its efforts were soundly and unequivocally rejected by consumers. VMware’s strategy, as demonstrated this year, carries a few of the hallmarks of Microsoft’s former “embrace-and-extend” philosophy: welcoming containerization, but paving a path of integration, and making the case that the old and the new can and must co-exist.
As the market leader in virtualization, VMware does possess the resources and marketing skills to dictate the pace of change in that market. And now, with the backing of new parent Dell Technologies, the company can more easily demonstrate its ability to change the game in single moves — as shown last week with its intent to purchase SDN platform maker PLUMgrid. But is VMware in a position to dictate the terms and the timetable with which its enterprise customers — the markets that Docker, Kubernetes, Mesosphere, and the others in the containerization space have yet to crack — adopt containerization, automated deployment, and continuous integration?
We danced around this issue in a recent, wide-ranging interview with VMware Chief Technology Officer for Cloud-Native Applications Kit Colbert. Then, when the time was right, we dove into it.
We began at the outskirts, by asking Colbert whether he perceived the ideal of serverless architecture as embodying the true, fulfilled definition of cloud-native applications.
“When I think about a cloud-native app, it’s a little more broad than that,” Colbert responded. “Partly, it’s the consumption model. Containers are part of that; serverless can be part of that. Part of it is the architecture of the application itself, and we see a lot of distributed, microservices architectures.
“The initial use case for containers today are stateless workloads. Managing state within containers is getting better, but it’s certainly not to the level of maturity that I think a lot of people would feel comfortable with.” — Kit Colbert.
“But I think the third one is this notion of how the app is built — this notion of continuous integration and continuous deployment. I think those three things, for me, are what define ‘cloud-native.’”
Colbert acknowledged the importance of consumption models, separate from creation models, in defining the cloud-native space. Virtual machines (the first generation of hypervisor-driven VMs, championed by VMware) greatly simplified those consumption models, he argued. Containers then further simplified some aspects of those models, he continued, including tightening the typical software development lifecycle (SDLC), though he stopped short of crediting containers with achievements on the same scale as VMs’.
“Serverless is the next logical step there,” the CTO went on. “Saying, ‘You know what, I don’t want to deal with any of the binaries and components outside of my application. I really just want to set up my application as a function.’” He then acknowledged that the serverless ideal could go on to decompose applications into individual functions that developers could assemble into cohesive frameworks.
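As a rough illustration of the model Colbert describes (the `handler` name and event shape below are generic conventions for this sketch, not any specific vendor’s API), a serverless “application” reduces to standalone functions that the platform invokes on demand:

```python
# A minimal sketch of the function-as-the-unit-of-deployment model.
# The platform, not the developer, owns the process, binaries, and OS;
# the developer ships only the function itself. The 'handler' name and
# the event dictionary are illustrative, not a specific vendor's API.

def handler(event):
    """Compute an order total for an incoming event."""
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    return {"status": 200, "total": total}

# The platform would invoke the function per event; locally we can
# simulate one invocation:
print(handler({"items": [{"price": 2.5, "qty": 4}]}))
```

Functions like this one carry no knowledge of the server, container, or VM that runs them, which is precisely the decomposition Colbert says developers could then assemble into cohesive frameworks.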
If VMware is essentially an infrastructure company, then it’s noteworthy that one of its chief architects has acknowledged that its own core products fit within a category of components that its customers would prefer to care less and less about. If the serverless ideal succeeds, the infrastructure is successfully cloaked behind the dark curtain of the command line.
Before you go thinking that VMware is in a unique position with such a dilemma, realize that Docker Inc. and CoreOS face exactly the same issue.
Colbert did not, however, establish serverless architecture as specifically an outgrowth of containerization. He had that opportunity, but instead held open the possibility of a link between serverless-ness and VMs. So we pressed the issue a little further: Is there any class of workloads which he would perceive as more suitable to be run in containers than in VMs? Or, turned around: Is there any class of workload that would be worse off after becoming containerized?
“The initial use case for containers today are stateless workloads,” he responded. “Managing state within containers is getting better, but it’s certainly not to the level of maturity that I think a lot of people would feel comfortable with, running those things in production.”
Stateful, database-driven workloads remain largely the purview of VMs, Colbert believes. But the arrival of container-centric storage management systems will change that, he said, and stateful workloads will join the container migration along with stateless.
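By way of illustration (a generic sketch, not a description of VMware’s own tooling), Kubernetes-style persistent volume claims already hint at the kind of container-centric storage management Colbert anticipates: a containerized database can request storage that outlives any single container. The resource names below are illustrative.

```yaml
# Illustrative example: a stateful container requesting storage that
# persists across container restarts and rescheduling.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data              # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - name: postgres
    image: postgres:9.6
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data     # state lives in the claim, not the container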
“Certainly on the long-term horizon, it’s hard for me to imagine workloads that would not work inside of containers,” he told The New Stack. “I think what you’re going to start seeing, though, is a need to understand not just how the workload is running inside the container, but how it utilizes the underlying hardware.”
This is where Colbert’s case shifts towards the ability of the hypervisor to link workloads safely with the hardware that hosts them. By “hardware,” he broadened the definition beyond just processors and storage, to include the growing set of FPGA accelerators and GPU accelerators — auxiliary processors that assume many of the common, repetitive functions that would otherwise bear down on CPUs. The challenge facing developers today, as he explained it, is how best to operationalize these more deterministic workloads.
More by virtue of circumstance than design, VM-based applications probably do have an edge today with respect to facilitating the libraries that hardware-accelerated apps require — for GPUs, an impressive number, by Nvidia’s count.
But this circumstance, Colbert conceded, is temporary. “If I imagine long enough out, I don’t see any reason not to do containers,” he admitted.
“That assumes you’re starting from a greenfield, blank slate, where you can go one direction or the other. But there’s a lot of these existing applications that are out there, and they’re already built in VMs, and maybe containers do catch up and have all these great capabilities. But then there’s this secondary question, which is: Is the cost of migrating that application to a container worthwhile?”
Colbert stopped short of stating that VM-based applications are, by design, not cloud-native. Yet his line of reasoning did imply that older applications, developed prior to or without deference to cloud platforms, may not benefit from migrating to a container-based system by enough to outweigh what the migration would actually cost.
Conversely, Colbert argued that since cloud platforms are essentially hypervisor-supported anyway, it would be wrong to presume that a cloud-native app must necessarily be container-driven. Just as many institutions continue to run 30-year-old applications, or older, on mainframe platforms, he said it was a “reality of enterprise IT” that enterprises will continue to undertake cost/benefit analyses for every potential migration, and may yet conclude that some business processes will be too fragile to be moved.
“I’m not dogmatic one way or the other; I really want customers to do what’s best for them,” he said. “Our goal at VMware is supporting them in that journey.” With his company’s vSphere Integrated Containers platform, he said, the class of customer that concluded a full migration was out of the question could still get what he calls an “80/20 benefit”: “Maybe improving their software development lifecycle, simplifying some of the dev/test environments, while at the same time still getting the operational benefits of VMs.”
It’s an argument that suggests the CTO believes VMware will be in a position to moderate the pace of architectural change, at least among its core customer base: tempering architectural shifts while preserving the fragile workloads that must be maintained under special conditions.
“I’m a technologist, and I love the cool, new technologies,” he told us. “But you’ve got to be pragmatic about it too.”
We asked Kit Colbert a typical end-of-year question: At this time twelve months from now, what would he expect to be discussing as a “reality of IT” that’s not being discussed so much today?
“[It’s] one of those things where I think history repeats itself,” he responded. “If you look at what happened with virtualization, when VMs took hold... you had this issue of ‘VM sprawl.’ When people too easily create them, they create a bunch of them, and people weren’t really tracking them.
“I wonder if something like that’s going to happen with containers. Right now, people are creating them, and it’s great. But I wonder if people will start getting into the issues of ‘container sprawl.’ There’s containers everywhere, and people go crazy creating them, and they’re all over the place. You don’t really know who created them or where. I feel there’s going to be this next level of maturity that will have to happen, and discussions will have to start happening. How do you really manage that? How do you enable the speed containers can offer, but at the same time, have some notion of control?”
It’s the kind of prediction one might expect from an organization that still considers containerization a trend that customers fear rather than embrace.
CoreOS and Docker are sponsors of The New Stack.