VMworld 2016: VMware’s Virtual Infrastructure for Every Possible Workload

2 Sep 2016 8:15am

In VMware’s world, developers and IT operators are not converging, not collaborating, and not coalescing into a single acronym.

“Computing and processing capabilities are like a gas law,” stated VMware CEO Pat Gelsinger in his opening keynote at VMworld 2016 in Las Vegas this week. “They expand to fill the space available to them. IT is now leaving the nest of the technology department, and the cloud and IT has now permeated every aspect of business.”

As a result, he pronounced, “traditional systems of IT are doomed to fail.”


Escaping doom, as VMware executives and representatives presented it this week, is a matter of adopting what they described as a “transitional architecture,” a kind of virtual infrastructure capable of supporting whatever class of workload may come along.

In this view, the ESXi hypervisor is no longer the star of VMware’s show. That pedestal is now fully occupied by NSX, the company’s network virtualization platform, which (today) serves both hypervisors and containers.

I don’t know, of any of my startup friends, anybody who gets rich off of selling Google software–Guido Appenzeller

But even with its continuing introduction of vSphere Integrated Containers (VIC), VMware unwaveringly portrays containerization as an experimental technology applicable to only a minority of use cases. The company does not perceive containerization as an inevitable trend, and some of its executives declined even to predict it would ever become mainstream. During one session, Ajay Singh, general manager of VMware’s cloud management business unit, likened distributed processing trends to the creation of an inhabited Mars colony: the pinnacle of human technology, certainly, but not intended for an entire population.


“This is where it’s heading, in terms of an architecture that supports microservices-based solutions — highly distributed environments across multiple clouds, Mode 2 IT, highly scalable, tens of thousands of containers or different elements in it,” described Singh. “And you might have a small segment of your development community playing with that — maybe five percent, ten percent, doing some really bleeding-edge work.”

The typical private cloud architecture in data centers today, where the virtual machine rules the roost, as Singh portrayed it, represents “Generation 2.0” on an evolutionary scale. “Generation 3.0” represents the ideal state of hyperscale, containerized architecture. But the agenda for moving data centers from one state to the other mandates, as he described it, a transitional architecture — a “Generation 2.5” — on the order of the SpaceX Falcon rocket (a metaphor he invoked, unfortunately, a few days before a Falcon rocket blew up on the launch pad at Cape Canaveral).

As long as we’re talking about space programs, NASA has always had target dates in mind when implementing any kind of technology plan. VMware may also have something resembling an evolutionary agenda. And one similarity it has with the Apollo program is that it, too, deals with decades.

It will be up to the container ecosystem to resolve the orchestration dilemma for itself.

“Photon is a platform which we’re looking at in the next ten to fifteen years,” said Paul Fazzone, VMware’s general manager for cloud-native apps, during a session on Tuesday. He was referring to the company’s own Docker-compatible, all-containers system. Unlike VIC, Photon is not designed to make VMs and containers co-exist, but rather to provide a fully containerized system that also relies on NSX for its virtual network.
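Docker compatibility means, in principle, that a stock Docker client can target such a platform as a remote endpoint. As a hedged sketch only — the host name, port, and TLS setting below are illustrative placeholders, not anything VMware documented at the show — the workflow amounts to pointing the client at the platform’s API endpoint:

```shell
# Point a standard Docker CLI at a remote Docker-compatible API endpoint.
# The address below is a placeholder for illustration only.
export DOCKER_HOST=tcp://photon-host.example.com:2376
export DOCKER_TLS_VERIFY=1   # verify the endpoint's TLS certificate

# From here, ordinary Docker commands are relayed to the remote engine.
docker info
docker run -d nginx
```

The appeal of this design is that existing Docker tooling and workflows carry over unchanged; only the endpoint configuration differs.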

“VSphere Integrated Containers… is a platform that allows you to deploy any type of workload with huge partner integration, huge services integration, very broad feature set, used by — I would assume — everybody in this room to help support their production environments,” described Fazzone. “VMware views it as the best way to run containers in a production environment today.

“Photon Platform is much more focused. It is not going to give you all the bells and whistles that vSphere delivers. It’s going to be very focused on helping customers optimize and automate their container-based environments. It’s going to reach fairly deep into the different container frameworks, so that we can allow those frameworks to spin up in a multi-tenant environment, to deliver big, large enterprises that have multi-faceted development groups — to give them the tools that they want. And largely speaking, it is designed to completely automate IT.”

In an interview, I asked Fazzone whether he foresaw any time period — even a general one — where VMware would find it safe to assume that the VM format for staging workloads would be obsolete.

“I think containers and VMs are going to be very relevant to how customers build applications five years from now, ten years from now,” responded Fazzone. “Will the mix shift? I certainly expect so. I expect containers to grow in popularity as more enterprise customers embrace modern applications development methodologies.

“But at the same time, even in those modern development methodologies, in many cases, the components of those microservices-based applications are a mix of containers and VMs today, depending on what a particular framework requires or dictates.  Some things run better as VMs; some things, more stateless applications, run better as containers. We want to offer support for the range of workloads that customers want to use, and keep bringing them back to consistent infrastructure services, so they don’t have to make an infrastructure tradeoff one way or the other. They can embrace the workload type that is best suited for their application and their application framework.”

Infrastructure is Not a Core Competency

If a data center is moving toward a hyperscale architecture, then further down the road, when that same customer wants to embrace a more highly distributed workload, won’t it eventually become time to replace the network infrastructure to make staging that workload feasible?

In a separate interview, I asked VMware Chief Technology Strategy Officer Guido Appenzeller about the role NSX would play, if any, in deployments of hyperscale applications — the kind inspired by big players such as Google and Netflix.

“In a sense, it’s not that Google isn’t building NSX; they’re building their own,” responded Appenzeller. “Traditionally, hyperscale builds its own applications. I don’t know, of any of my startup friends, anybody who gets rich off of selling Google software.

“If you’re operating at the scale of Google or Netflix, it almost always makes sense for you to develop this in-house rather than buy it externally because you can customize it to your needs and you have the scale — from a money perspective, it’s worth it.  The enterprise is different. Enterprise does not have, as its core competency, system-level software development. They may have high-level software development — even a bank today, you can argue, is an IT company — but it’s the high-level apps where their competency is.”

Appenzeller told the story of a major bank that developed its own distribution of OpenStack. It kept building onto its own previous layers, like the rings of a tree trunk, until eventually it needed to hire its own kernel programmers to maintain its own work. And it couldn’t find any, because how many enterprise-class kernel programmers could there possibly be?

“The enterprise doesn’t have this hyper-massive scale where it makes sense for them, as a good ROI, to develop this internally. That’s why we exist, as software vendors.”

It is a devastatingly honest admission. It also points to the clearest dividing line of all: between the domain of hyperscale architecture, where containers abound, and the general enterprise, where the need for integration and compromise leads to something comparable to the SpaceX Falcon.

Three Big Answers

With all that now clearly understood, we can look back at the three questions we asked prior to the start of VMworld 2016, and find the least ambiguous answers I’ve encountered in over three decades of covering technology conferences:

1. What will eventually become the universal scheduler for data center workloads? Whatever the answer may be, VMware’s brand name will not be on it. VMware does not really want to be in the scheduling business. NSX is designed to provide an underlying virtual network for whatever workloads may require it. vSphere was a VM management system, but VIC expanded it to include a type of container — not a Docker container, but a container. Photon will address those organizations where VMs are not a factor. Both VIC and Photon are NSX delivery mechanisms.

And for VMware, that’s what matters. It will be up to the container ecosystem to resolve the orchestration dilemma for itself; meanwhile, vSphere will handle the launching and management of the VIC engine and container hosts, and provide visibility into the VIC processes running in those hosts. From VMware’s perspective, none of these processes are actually “orchestration” in the way Kubernetes, Mesosphere, or Rancher consider it. And VMware is perfectly fine with any of those systems taking the lead in its native department. As for some kind of universal orchestrator, similar to what Intel is planning with CoreOS and Mirantis, as of today, VMware wishes them the best of luck.

2. Can VMware produce virtualized storage that meets the needs of all workloads? It is indeed aiming for this goal. Appenzeller believes that the types of overlay networks that container platforms build for themselves may be innocuous as long as those overlays are confined to just a few nodes. It’s scalability that’s the problem: in the enterprise installations he’s seen, he told us, those overlays do not scale well at all.

Christos Karamanolis, VMware’s chief technology officer for its Storage and Availability Business Unit, framed the issue a bit more boldly. “We see a lot of ad hoc solutions around data persistence,” said Karamanolis, “and by that, I mean, availability of the data, high availability, and performance/quality of service.”

Customers tend to build stateful applications because they’re built around databases; from this CTO’s perspective, this is the way things have always been and will continue to be. What is lacking, he said, is a unified solution to the persistence problem, not just for containers but also for data warehouse architectures dating back three decades.

“This has not been such a major problem because all these cloud-native applications have been developed in big, Silicon Valley companies — the Facebooks and the Googles of the world,” he told us, “who have lots of PhDs doing distributed systems, and they know exactly what they’re doing about it. But we do see — and I’m not surprised — many of our mainstream customers, who do have some development resources, do not have all the expertise that is required to build yet another scalable, distributed, fault-tolerant application.”

While VMware offers its Virtual SAN (VSAN) as a system that resolves the persistence issue for otherwise stateless containers, once again, that system delivers NSX — and with NSX comes VIC. Yes, it can meet the needs of all workloads, if those workloads can themselves meet all of the above needs.
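For illustration, the pattern being described — a container stays stateless while its data lives in a VM-backed datastore underneath — can be sketched with Docker’s volume plugin mechanism. The driver name and size option below are assumptions about a vSphere-style volume plugin of the era, not a documented example from the conference:

```shell
# Hypothetical sketch: create a persistent volume through a vSphere-style
# volume plugin (the driver name "vsphere" and the size option are
# assumptions for illustration).
docker volume create --driver=vsphere --name=dbdata -o size=10gb

# Attach the volume so the container itself stays stateless; the data
# persists in the storage layer beneath the container host.
docker run -d -v dbdata:/var/lib/mysql mysql
```

The design point is the separation of concerns: the container can be destroyed and rescheduled freely, while durability, availability, and quality of service are handled by the storage layer.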

3. Will VMware drive cloud-native application development in 2017, or concentrate on core infrastructure? Absolutely, positively, the latter. VMware is an infrastructure delivery company. Throughout VMworld, it addressed cloud-native apps developers in the third person, and IT operators in the second. It offers a growing variety of options for extending the serviceability of its key product, which this week was shown not to be a hypervisor but a network virtualization platform. From the perspective of today’s VMware, if you’re interested in cloud-native apps, talk to Pivotal.

VMware (today) appears to be a company that has come to the realization — much sooner in its lifecycle than did Microsoft — that it cannot be all things to all people. But if all data centers need one thing the same way that all people need water, then perhaps now is the best time for VMware to invest everything it’s got into facilitating every possible delivery mechanism — even if it’s just for the five percent of us who will need that mechanism in fifteen years’ time — for that one thing. VMware is betting that one thing is not a container.


The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Mirantis, Docker.