
VMware Pitches NSX Virtual Networking as the Foundation for Every Workload

30 Aug 2016 7:58am

If you were judging from the opening keynote at VMworld 2016 in Las Vegas, you’d get the impression that the emerging container ecosystem hadn’t fazed VMware in the slightest. Instead, VMware seems intent on following customers as they extend their infrastructure further into the public cloud. The company announced what it calls VMware Cloud Foundation, and although it’s mainly vSphere, Virtual SAN, and the NSX network virtualization component wrapped up in a nice, new package, the platform certainly looks more like OpenStack than it did last week.

But it was part of a very cleverly planned agenda for this year’s show. Keeping container technologies off the big stage gave VMware’s executives full opportunity to portray the company’s existing virtual machine platform as an almost unchallenged leader in staging workloads. Only outside the keynote stage was the existence of a container ecosystem discussed at any length.

The theme of that discussion: Adopting containerization should not make a data center change the architecture of its infrastructure.

“If you think about the vision statement that we heard earlier today… it says nothing about containers or hypervisors, or where things were running,” said Raghu Raghuram, VMware executive vice president and general manager for SDDC (software defined data center), during a press conference Monday following the keynote. “The vision statement is to enable our customers to run, manage, and secure applications, whatever technology they might be using.”

The Whatever Platform

Cloud Foundation is being positioned as a management platform for every class of application workload, including virtual machines and Docker-style containers, with or without vSphere integration. That management capability extends to public clouds when workloads are migrated there: policies follow workloads, so the same network restrictions apply to a VM running in Azure as to one running on vSphere in the customer’s private cloud.
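
What “policies follow workloads” implies, in the abstract, is that the rule set is attached to the workload’s identity rather than to the site hosting it. The following is a minimal sketch of that idea in deliberately generic Python, not NSX’s actual interfaces; every name in it is hypothetical.

```python
# Hypothetical illustration of "policy follows the workload" -- not NSX's API.
# The same abstract rules travel with the workload and are translated into
# whatever enforcement mechanism the hosting environment provides.
from dataclasses import dataclass
from typing import List


@dataclass
class FirewallRule:
    name: str
    protocol: str    # e.g., "tcp"
    port: int
    allow_from: str  # CIDR block or logical group name


@dataclass
class Workload:
    name: str
    rules: List[FirewallRule]  # rules belong to the workload, not the site


def render(workload: Workload, environment: str) -> List[str]:
    """Express the same restrictions for whichever environment hosts the workload."""
    rendered = []
    for r in workload.rules:
        if environment == "vsphere":
            # e.g., a distributed-firewall entry on the private cloud
            rendered.append(f"dfw allow {r.allow_from} -> {workload.name}:{r.port}/{r.protocol}")
        elif environment == "azure":
            # e.g., a network security group rule in the public cloud
            rendered.append(f"nsg allow {r.allow_from} -> {workload.name}:{r.port}/{r.protocol}")
    return rendered


billing = Workload("billing-vm", [FirewallRule("web", "tcp", 443, "10.0.0.0/16")])
print(render(billing, "vsphere"))
print(render(billing, "azure"))  # same restrictions, different enforcement point
```

The restrictions are defined once against the workload; only the enforcement point changes when the VM moves.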

During the keynote, CEO Pat Gelsinger announced an extension of VMware’s strategic partnership with IBM, giving IBM’s public cloud platform equal footing with Amazon AWS, Microsoft Azure, Google Cloud, and VMware’s own vCloud Air in Cloud Foundation’s list of support options. Ironically, this also put IBM in the position of finding a place for VMware amid the many technologies its cloud platforms support, a list which, of course, includes OpenStack.


“As in any market, you want to support multiple technologies,” said IBM Cloud Senior Vice President Robert LeBlanc [pictured at top right with Gelsinger], during Monday’s press conference. “We support OpenStack, we support VMware, [and] we do support Docker containers natively. And we want to be able to give the client choice to what matches to their environment, to the skills that they have, and what they’re trying to achieve. We’ll continue to support where the market is.”


LeBlanc went on to explain how the company’s Bluemix PaaS platform lets developers target their apps to VMware, OpenStack, and Cloud Foundry, among other hosts, as part of IBM’s mission to respect customers’ choices.

“Part of the reason that we’ve seen such a rapid uptake in the customer base is, VMware has such a huge footprint inside of enterprise customers,” said Gelsinger. “We have very high market share, and OpenStack has very small market share. That’s been an affinity… In many cases, you have 80, 90, 95 percent of enterprises’ workloads running on a VMware stack. And now I [the customer] can have [a] trusted, at-scale enterprise cloud partner like IBM. That’s a marriage that makes sense, to so many customers… Even though choice, as Robert said, is important, this is a strike zone offering for the bulk of enterprise customers.”

The Wherever Firewall

VMware’s new strategy with Cloud Foundation borrows some of IBM’s brand reputation to give customers an incentive to extend their NSX virtualized network footprint beyond their own premises and into the public cloud. The basic definition of NSX is being altered a bit. Rather than the network virtualization counterpart to a hypervisor such as ESXi, the new NSX in Cloud Foundation is being positioned as a service: literally, a management portal for defining the virtual network infrastructure for a workload, container-based or otherwise, wherever it may exist.

“Today, in the enterprise, I would say pretty much 100 percent of customers that I know are running containers inside of virtual machines,” said VMware Chief Technology Strategy Officer Guido Appenzeller, during a session Monday afternoon to general attendees. “When I saw that for the first time, it really surprised me. Because I was thinking, ‘Aren’t containers replacing VMs? Why would you want to stack them?’”

Appenzeller went on to explain that containers typically rely on the kernel to provide the separation that gives them their basic security. “The attack surface of the kernel is much, much higher than the attack surface of a hypervisor,” he said. Attacking a hypervisor would require passing parameters through a homemade driver, since a hypervisor is not open to outside functions, he said. But a Linux kernel may expose thousands of addressable functions, driving up its potential for exploitability.
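
To get a rough sense of the disparity Appenzeller is describing, one can count just the system-call entry points a Linux kernel exposes to every process, which is only one slice of the surface he refers to (ioctls and the /proc and /sys interfaces add more). The sketch below assumes a Linux x86-64 host with kernel headers installed; the header location varies by distribution, and the hypervisor figure in the closing comment is an order-of-magnitude comparison rather than an exact count.

```python
# Rough illustration of the attack-surface comparison. Assumes a Linux x86-64
# host with kernel headers installed; the header path varies by distribution.
import re
from pathlib import Path

candidates = [
    Path("/usr/include/asm/unistd_64.h"),
    Path("/usr/include/x86_64-linux-gnu/asm/unistd_64.h"),
]
header = next((p for p in candidates if p.exists()), None)

if header:
    # Each "#define __NR_<name>" line is one syscall the kernel will accept
    # from any userspace process -- several hundred entry points on x86-64,
    # before counting ioctls and the /proc and /sys interfaces.
    syscalls = re.findall(r"#define\s+__NR_\w+", header.read_text())
    print(f"Syscall entry points exposed to userspace: {len(syscalls)}")
else:
    print("Kernel headers not found; install them to count syscall entry points.")

# A hypervisor's guest-facing interface (its hypercall table) is on the order
# of a few dozen calls -- the far smaller surface Appenzeller is contrasting.
```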

It’s an argument we’ve heard before, including from Appenzeller himself, but now it’s been extended. NSX under Cloud Foundation, as Appenzeller and other VMware engineers explained Monday, compels an organization to rethink the concept of “firewall.” In another era, the firewall represented not only the arbiter of access rules but the perimeter of a network’s assets. But as an arbiter of the same rules in the public cloud as on private infrastructure, the extended NSX obliterates the notion of perimeter, and perhaps to the same extent the conventional definition of “endpoint.”

The Whatsoever Strategy

Surprisingly, VMware’s Raghuram made the case Monday that this alteration of conventional definitions may be necessary to minimize the impact of evolutionary changes in the underlying platform on VMware’s existing customer base, specifically the company’s core users.

“One of the reasons we took the course that we did is, we looked at our existing customers and what tools they use,” the EVP said. “Existing customers — operators of IT — need to be able to go on this evolutionary journey without a radically different experience… You’ll see, once we get into the nitty-gritty of how they will actually do it, how they will operate it, the tools are very familiar to our existing markets.

“Having said that, the public cloud is a very different beast,” Raghuram continued. “There is a great learning opportunity there.”

Yes, there may be a need for changes in companies’ organizational models, in the responsibilities individuals are granted and the skill sets they’ll be called upon to use, he went on, evidently in deference to the needs of DevOps. However, those changes should not affect the way VMware tools are interpreted and put to use by those same people.

That would explain the company’s strategy for putting NSX to use in the public cloud. It wants to avoid changes to the operating model that might disillusion IT operators. If that means blurring the existing distinctions between the firewall and the outside world, or over what constitutes a service when it’s deployed into a multi-cloud environment, then so be it. Definitions are fragile things; terminology is temporary. Skill is a resource worth preserving.

On Tuesday, the company is expected to formally introduce the final, commercial version of vSphere Integrated Containers. These are not containers in the Docker tradition, but rather a way of taking standard containers and wrapping them in a special coating that makes them digestible to vSphere, which treats them as virtual machines.

What we expect to learn Tuesday is the extent to which VMware is willing to make its integrated containers the preferred option for containerization. Put another way, will enterprise customers be required to adopt this kind of hybrid container if they intend to use other key features of Cloud Foundation? More on this question, and much more from VMware’s Guido Appenzeller, as The New Stack’s coverage of VMworld 2016 continues this week from Las Vegas.
