VMworld 2018: Who’s Responsible for Programmable Infrastructure?
Supposing a great many attendees at last week’s VMworld 2018 conference in Las Vegas could jointly articulate the pre-eminent question working its wayward way towards the tip of their tongues, it would sound like this: If the future truly does include a paved route for infrastructure-as-code, then are our jobs as network operators and administrators in jeopardy?
That uncertainty appears to center on a lack of specificity about what these operators’ future roles will be, once their network infrastructure has evolved to the extent that it incorporates network behavior monitoring, machine learning, and an adaptive service mesh. When infrastructure becomes truly programmable, who — or what — will be responsible for it? More to the point, will it be someone other than the people in charge of it now?
The subdued mood first came to light as the company revealed what should have been a momentous update to one of the core technologies of VMware’s programmable infrastructure. Described by some as “microsegmentation 2.0,” it’s a revision to the idea of applying network and security policies to portions of the network that have been designated according to their function, as opposed to their address. With microsegmentation 1.0 — which The New Stack was one of the first publications to cover in detail three years ago — a layer of abstraction allowed more nebulous objects, such as containers, to be treated to explicit policies.
That model demanded changes for the microservices era, where containers have more and greater reasons to be both plural and ephemeral.
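The idea of attaching policy to function rather than address can be sketched in a few lines. This is an illustrative toy model only — the class and rule names below are hypothetical and do not come from NSX or any VMware API — but it shows why a role-keyed policy survives churn that an IP-keyed one would not:

```python
# Hypothetical sketch of function-based (rather than address-based)
# segmentation: policy keys off a workload's role, so the rule holds
# even as ephemeral containers come and go with new addresses.
# All names here are illustrative, not drawn from NSX.

from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    ip: str    # changes as containers are rescheduled...
    role: str  # ...but the policy never looks at the address

# Which roles may initiate flows to which, regardless of IP
ALLOWED_FLOWS = {
    ("web", "app"),
    ("app", "db"),
}

def flow_permitted(src: Workload, dst: Workload) -> bool:
    """Evaluate a flow against role-based policy, ignoring addresses."""
    return (src.role, dst.role) in ALLOWED_FLOWS

web = Workload("10.0.1.5", "web")
app = Workload("10.0.3.3", "app")
db = Workload("10.0.2.9", "db")
```

Under this model, replacing the web container with one at a new address changes nothing: `flow_permitted` still allows web-to-app traffic and still blocks web-to-db traffic.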
“We’re enabling organizations to secure subnetworks, and separate sensitive applications and services as never before,” explained VMware CEO Pat Gelsinger during his event keynotes. “This idea isn’t new; it just wasn’t practical before NSX,” he continued, referring to his company’s principal platform today, its network virtualization layer.
“But we’re not standing still. Our teams are innovating to leap beyond to what’s next, beyond microsegmentation,” Gelsinger went on. “Imagine a system that can look into the applications and understand their behavior and how they should operate. We’re using machine learning and AI, instead of chasing malware, to be able to ensure ‘good-ware.’ That system can then lock down its behavior so the system consistently operates that way.”
During a press conference, VMware Chief Technology Officer for the Americas Cameron Haight went into further detail. “Microsegmentation provided us the ability to have a zero-trust environment between virtual machines,” he told reporters, “to protect those communications between VMs. Now we’re extending that zero-trust environment… into the actual application itself. We’re combining the capabilities of NSX and AppDefense to provide that capability.”
AppDefense is the company’s application behavior monitoring tool, which it announced last week would be incorporated for the first time as a built-in part of the Platinum SKU of its vSphere environment. In a demonstration during the Day 1 keynotes, Vijay Ganti, who heads the company’s machine learning and AI research for SaaS security, showed AppDefense’s approach to microsegmentation, which will involve training it to evaluate the behavior of executable components inside VMs.
“For the first time — think about this, for the first time — you’re looking at the infrastructure through the lens of an application,” said Ganti. Although he described the behavior monitoring process in the context of VMs, one can “imagine,” to borrow Gelsinger’s word, a scenario where each of the IP addresses being monitored resolves to a container or to a pod instead.
Responding to a question from Gelsinger about how AppDefense would know which behaviors to classify as good and which as bad, Ganti said AppDefense would institute a method by which good behaviors would be certified, based on the modeling of “millions of instances of good-ware.” Though he did not say so explicitly, Ganti suggested that these instances would not necessarily be contained within the same customer network — that AppDefense would be collecting behavior patterns from all of its customers, and leveraging their combined histories to determine what “good” behavior looks like.
“We reduce the attack surface in two distinct ways,” said Tom Gillis, VMware’s new general manager for networking and security, formerly CEO of cloud operations firm Bracket Computing, and before that, a vice president for security technology at Cisco. “The first is with our AppDefense technology. We’ve integrated this into vSphere itself, and we focus on identifying what we call the ‘known good.’ That’s the list of things that should never change, and the list of things that should never happen.”
As one example, Gillis went on, a binary file should not be written to. Even though the operating system can mark such a file as read-only, once a malicious payload has escalated its privileges to root level, no other root-level process can halt it. “At the hypervisor layer, we are like effectively ‘super-root,’” he continued. “We can enforce these known good policies, these almost common-sense policies: ‘Don’t modify your binaries.’ Web servers don’t spawn root shells. Basic, common-sense, runtime integrity-type things. We don’t know anything about the exploit. We don’t care. But we know these are behaviors that should never be allowed, and therefore, that reduces the possibility of an attack.”
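The “known good” approach Gillis describes amounts to a default-deny allowlist: rather than recognizing exploits, the monitor flags any behavior outside the short list of things a workload is supposed to do. The following is a toy sketch of that logic — the role and behavior names are hypothetical, and this is not AppDefense’s actual rule engine:

```python
# Toy model of a default-deny "known good" allowlist, in the spirit
# of the approach Gillis describes. Role and behavior names are
# illustrative only; this is not AppDefense's rule engine.

KNOWN_GOOD = {
    # A web server's entire legitimate repertoire, per policy
    "web-server": {"accept_connection", "read_static_file", "write_log"},
}

def behavior_allowed(role: str, behavior: str) -> bool:
    """Default-deny: anything not explicitly on the list is a violation."""
    return behavior in KNOWN_GOOD.get(role, set())
```

Note that the check never needs a signature for any particular exploit: spawning a root shell or rewriting a binary is flagged simply because it isn’t on the list.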
Gillis made a conscious effort to divide software-defined networking from networking, and to point out that the two efforts take place at different levels of a distributed computing system. While VMware’s marketing may portray the entire network as being transformed by the company’s next wave of virtualization, Gillis tried to bring the future out of the clouds and back down to earth.
“When a customer adopts SDN,” Gillis said at one point, “networking doesn’t go away. The bits need to move from A to B. So don’t ask us to do your fabric management. That’s not what we do. We’re the wrong tool for that. Cisco, Arista, Juniper — they do a really, really good job of delivering high-performance, high-reliability, highly available, highly scalable network infrastructure.
“So networking — it’s not like they don’t have a job, don’t have a say, don’t have interesting things to do,” he continued, along a stream of consciousness that could very well have crossed over some knife edges and over a cliff. “It’s just more about, with the right tool for the right job, where do I live?”
Getting Past the First Question
One analyst pointed out that, in his experience, organizations that had adopted NSX and successfully spun up their VXLANs were afraid, from there, to make changes to the setup. Those in charge of vSphere would object to touching the network, while those in charge of networking perceived NSX as “not hardware,” and thus outside their purview. As a result, these organizations had built their own de facto silos.
To respond to such questions, Gillis had brought on stage Julio Arevalo, Jr., a senior systems manager with Chicago-based Alliant Credit Union. Arevalo acknowledged that similar divisions crept into his own organization, at least at first.
But then, departments realized they had a mutual dependency upon one another. The infosec team, for instance, could craft security policy, but they needed the systems team to implement it. “When I first started, it wasn’t that easy,” said Arevalo, “but now I can say that we all trust each other now, we understand what our goal is, and we work together to make sure we’re doing what’s best for our members.”
It was here where Gillis interjected that his new company was doing the best it could to provide products that were relevant to both the virtual infrastructure (VI) and network teams. That’s a slightly different message from the one that appeared to say, there are network hardware companies in the world, and hey, why don’t you check out Cisco?
Gillis’ and Arevalo’s session encapsulated the point of contention that organizations are facing today, some of them only just now. Yes, the human jobs of networking and systems management should not be limited to the extent of the platforms and tools they use. But to the extent that people’s skills are judged with respect to those tools, it’s impossible not to evaluate their jobs, and thus to forecast the relevance of those jobs down the road, in the context of those tools. Certainly, NSX is not the same as low-level network infrastructure, but it absolutely was designed not just to decouple applications from that infrastructure, but to assume the support role that was once attributed to that infrastructure.
Inescapably, these tools and platforms are the substrates of these information workers’ livelihoods. It’s one thing for VMware to say it doesn’t do fabric management, but there’s no escaping the fact that the decoupling effect has ripped the seams of people’s careers. Mending those seams was a successful task for Arevalo’s credit union. But that success won’t be guaranteed for everyone.
First, they’ve got to get together and talk. That means asking the questions that are aching to escape their shuttered mouths.
VMware is a sponsor of The New Stack.
Photos by Scott M. Fulton III.