Last year’s VMworld conference in Las Vegas headlined the release of a cloud-native Kubernetes platform for the creation, deployment, and management of container-based applications, without VMware’s usual virtual machine underpinnings. It all happened on VMware’s stage, the demonstration featured VMware’s representatives, and it received VMware’s public blessing. So although it was Pivotal Container Service (PKS) that premiered that day, and it was Google that provided much of the infrastructural support, the event was portrayed in the tech press as a VMware production.
Since that time — indeed, just two months ago — VMware has put its own brand on a completely different PaaS platform: VMware Kubernetes Engine (VKE), launched initially on Amazon’s AWS platform. Unlike PKS, which Pivotal is marketing as a platform that customers may deploy on their own infrastructure, the public cloud, or bits of both, VMware is marketing VKE as a fully managed service, already deployed on Amazon’s cloud (soon on Azure as well) for organizations looking to stage containerized applications immediately.
That VMware chose not to wait a mere two more months to premiere VKE at its own North American conference speaks volumes about the evolving, volatile nature of the container services market. VMware was perceived, by people being paid to perceive, as having already staked a claim for itself in the Kubernetes cloud. Yet the company might have needed a PaaS platform that would undeniably remain VMware’s, should its situation with respect to parent company and 80 percent owner Dell Technologies suddenly change — a topic about which investors continue to speculate.
Monday marks the start of VMworld 2018, where one of the underlying themes is bound to be the company’s continued propensity for executing more than one strategy simultaneously.
Picking a Course and Sticking With It
At this time four years ago, VMware’s concept of an “application container” was a construct that was already six years old, called the “vApp.” It was a mechanism for a cloud management system — in this case, vCloud Automation Center — to automatically provision the infrastructure necessary to launch an application on a hybrid cloud platform. The vApp was described as a freeze-dried application, containing just the resources the host environment would need to make the application functional — albeit within VMware’s own vSphere environment.
It was indeed a container orchestration system, except it was VMware’s take on the idea. The company knew that the data center would evolve away from a server-centric mindset, and toward an application-centered one. To its credit, VMware was reasoning this problem out well before a company called dotCloud would change its name to Docker.
For six years, vCloud and its vApps would provide VMware’s alternative approach to staging conventional VMs. But in 2014, the company began pivoting, in fits and starts, toward what was emerging as the data center’s standard approach to containerization. At the VMworld show that year, CEO Pat Gelsinger introduced reporters to what the Linux community was calling “containers,” but in such a way that it went right over most of their heads. Fortune, covering the announcement that VMware would partner with sister company Pivotal on some kind of container platform, acknowledged that the new concept was “threatening” to the company “because it promises faster deployment and lower costs than virtualization.” But beyond that, the magazine could only speculate that the technology’s main objective was “to reduce complications.”
Ever since that time, VMware has struggled with executing a product strategy that would incorporate containerization while keeping its customers on the platforms that generate license fees. First it introduced Photon Platform, giving developers an alternate method of building VMware-compatible containers while appearing to use the same Docker toolset. Photon gave containers the appearance of being wrapped in a kind of VM-like envelope. Later, VMware introduced yet another construct called vSphere Integrated Containers (VIC). While they promised to be Docker-compatible, these VIC constructs would present the appearance of a VM to the hypervisor, making them equal citizens with VMs in vSphere.
Last year, VMware made yet another strategic divergence, appearing to decide that the platform it must maintain for the sake of its customers was not vSphere but NSX, its network virtualization platform. It’s NSX that encompasses VMware’s broader view of the hybrid cloud, giving data centers a method for abstracting diverse segments of infrastructure, including on-premises and public cloud resources, onto a single plane. Both the company’s joint PKS announcement with Pivotal and Google and its joint VMware Cloud on AWS announcement with Amazon (the precursor of 2018’s VKE) would provide means for staging containerized workloads, with Kubernetes orchestration, on clusters that shared an NSX network namespace.
For VMworld 2018, the company finally needs to make some strategic steering decisions. With so much competition already crowding the Kubernetes PaaS space, it can’t afford to try one more alternative method just to see if enough customers like it better. If it settles on the objective of embracing Kubernetes while corralling its customers inside NSX, it cannot, at the same time, be “doping” containers (as I overheard more than one VMware customer put it) to make them compatible with vSphere.
Both IT operator customers and investors will be looking for decisiveness from VMware during this year’s show. Instead of trying six approaches and hyping the benefits of having choices, it needs to chart no more than two courses for the evolution of workload staging. Yes, whatever course or courses it chooses will involve keeping customers on one or more of its platforms — that’s where its revenue comes from. VMware won’t suddenly become altruistic.
At the same time, advocates of a vendor-neutral, open source approach to workload staging have to appreciate the unavoidable fact that VMware technology hosts a substantial majority of the world’s workloads. If the world at large is to migrate its production software to containerized environments, VMware must be involved in the process — otherwise, it just won’t happen.
Converging the Hyperconverged
To amateur observers, the difference between containerization and hyperconverged infrastructure (HCI) may not be obvious. You may have read about HCI: it enables data center components to present the resources they host — their compute, storage, memory, and in some cases, network fabric — to a centralized controller. There, each of those resources is incorporated into its own pool, and those pools are treated as commodities. This way, an HCI system perceives computing power in terms of slices of total capacity, rather than individual processors or servers.
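That pooling idea can be sketched in a few lines of code. This is a minimal, purely illustrative model — the class and field names are our own inventions, not any vendor’s actual API — showing how a controller might treat node resources as commodity pools and hand out slices of total capacity:

```python
# Illustrative sketch of HCI-style pooling (hypothetical names, not a real API):
# each node contributes its resources to shared pools, and the controller
# allocates slices of total capacity rather than whole servers.

class Node:
    def __init__(self, name, cpu_cores, memory_gb, storage_tb):
        self.name = name
        self.cpu_cores = cpu_cores
        self.memory_gb = memory_gb
        self.storage_tb = storage_tb

class PoolController:
    """Aggregates node resources into commodity pools."""
    def __init__(self, nodes):
        self.pools = {
            "cpu": sum(n.cpu_cores for n in nodes),
            "memory": sum(n.memory_gb for n in nodes),
            "storage": sum(n.storage_tb for n in nodes),
        }

    def allocate(self, cpu=0, memory=0, storage=0):
        """Reserve a slice of total capacity, regardless of which node serves it."""
        request = {"cpu": cpu, "memory": memory, "storage": storage}
        if any(self.pools[k] < v for k, v in request.items()):
            return False  # not enough aggregate capacity
        for k, v in request.items():
            self.pools[k] -= v
        return True

cluster = PoolController([
    Node("node-1", cpu_cores=32, memory_gb=256, storage_tb=10),
    Node("node-2", cpu_cores=32, memory_gb=256, storage_tb=10),
])
print(cluster.allocate(cpu=40, memory=128))  # True: the 40-core slice spans nodes
print(cluster.pools["cpu"])                  # 24 cores remain in the pool
```

Note that the 40-core request succeeds even though no single node has 40 cores — the point of the abstraction is that the workload sees capacity, not servers.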
Although HCI is considered the hallmark of the “software-defined data center” (SDDC), ironically, it’s becoming more about hardware gaining awareness of the other hardware around it, and figuring out how to work with it in tandem.
VMware has been heavily involved in HCI since its inception. At the outset, HCI promised a way for a platform management system to provision exactly the resources each VM would need, from whatever components were capable of providing them. And as VMware’s sister company Dell EMC has already stated for the record, one of its goals for HCI is to ensure that, as customers adopt it (same song, second verse), they keep operating within the same vCenter environment they had before.
Already, you probably see the issue we’re driving at: If these same customers are also moving to containerization, and if it’s VMware that helps lead them in that direction, then will it find itself on both sides of a tug-of-war against HCI — particularly if it completes its full-on embrace of Kubernetes? By design, HCI is supposed to manage resources for all workloads, not just the VM-based ones. In practice, though, organizations adopting HCI today still rely on VMs as staging environments for their Kubernetes clusters — an arrangement that ties their future experiments with microservices to the very VM layer containerization was supposed to transcend.
When we asked VMware officials last year whether the convergence between HCI and containers was happening, we were told they’d get back to us on that. Well, now it’s next year, and quite a few more people are waiting for an answer.
In a surprising number of sessions on the VMworld 2018 docket, and in more of the accompanying literature than we expected, we found evidence of growing interest in service meshes, particularly Istio. As you’ve seen in The New Stack, a service mesh is a way of leveraging the principles of software-defined networking (SDN) to steer service-to-service functionality in much the same way that network engineers steer session traffic.
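To make the traffic-steering idea concrete, here is a small sketch — not Istio’s actual API, just an illustration in plain Python — of the kind of weighted routing a mesh performs when it splits requests between two versions of a service, the way an Istio VirtualService does for a canary rollout:

```python
# Illustrative sketch of service-mesh traffic steering (not Istio's API):
# route a weighted share of requests to different versions of a service.
import random

class MeshRouter:
    def __init__(self, routes, seed=None):
        # routes: list of (destination, weight) pairs; weights must sum to 100
        assert sum(w for _, w in routes) == 100
        self.routes = routes
        self.rng = random.Random(seed)

    def pick(self):
        """Choose a destination for one request, proportional to its weight."""
        roll = self.rng.uniform(0, 100)
        upper = 0
        for dest, weight in self.routes:
            upper += weight
            if roll < upper:
                return dest
        return self.routes[-1][0]

# Canary split: send 90% of traffic to v1, 10% to the new v2
router = MeshRouter([("reviews-v1", 90), ("reviews-v2", 10)], seed=42)
counts = {"reviews-v1": 0, "reviews-v2": 0}
for _ in range(1000):
    counts[router.pick()] += 1
print(counts)  # roughly a 900/100 split
```

The point is that this routing decision lives in the network layer, configured declaratively, rather than in the application code of either service version — which is precisely why a networking-centric company would find the model attractive.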
By effectively making NSX its central platform, VMware has redoubled its bet on networking as a way of managing services. So at one level, it only makes sense that its customers and partners, if not yet the company as a whole, are becoming more interested in Istio and service meshes.
At another level, however, it’s a signal that perhaps the company is readying itself to take the full plunge into embracing microservices and network-scalable architectures, even if it means abandoning some of the VM-based technologies that may not survive the dive. We expect to learn much more about VMware’s stance with respect to service mesh architectures, and in the process, gauge its readiness to end the experimental phase of its relationship with containerization.
Stay in touch with The New Stack for more from VMworld 2018 in Las Vegas.
VMware is a sponsor of The New Stack.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.