The question is as simple as this: Where do workloads belong? Yes, I know what the vendor’s response is: It depends on the customer. But when it’s the customer asking the question, a response like, “That depends on the customer,” is like asking your spouse whether you look out of shape, and being told that beauty is in the eye of the beholder.
At its Discover 2016 user conference, HPE offered no fewer than three choices for the deployment of workloads. One was, naturally, “The cloud,” which refers to a centralization of workloads among a group of servers in a network, but a distribution of functions among those servers, such that services can easily be provisioned for clients from anywhere. Another option, “The Edge,” by stark contrast, is a model which HPE has been promoting for its efficiency and speed. It has to do with stationing workloads as close as possible to the clients who use them, decentralizing the functions in a network to reduce latency.
And then there’s “The Machine.” HPE’s “Machine” has been a very nebulous idea, at least insofar as the public has seen it. If cloud dynamics can effectively pool memory and storage resources together, the thinking goes, then a “computer” could be reinvented such that anything that could be joined by a trace on a motherboard can instead be joined by a network cable.
As Manish Goel, HPE’s senior vice president and general manager for storage, described it during a briefing at Discover, the original “stack” came about as a result of the disaggregation of the basic functions of mainframe computing — compute, storage, networking — into layers. The layers we have today, Goel said, arose from the decomposition of the mainframe model into the basic functions of the PC, enabling PC processors, the PC service bus, and PC networking to handle a growing and evolving set of tasks.

“The technology drivers were that we had a client/server architecture that became relevant,” said Goel. “We had, therefore, sorts of shared-everything storage that needed to become the source of truth across multiple x86 server architectures. That gave rise to a networking layer because you had to put many, many endpoint servers and integrate them with a data services layer. All of that led to disaggregation at each of the layers of the stack. Storage became its own layer; network became its own layer; compute became its own layer.”
But that particular separation of layers may no longer be reasonable for any reason other than maintaining legacy, he continued, as memory evolves to become the dominant form of storage. It’s triggering what Goel describes as “the Collapse of the Stack” (thankfully, for us, he means the oldest stack there is). And since the execution of threads by a processor involves the fetching of data from memory, the elimination of the boundaries that had separated memory from storage in the past calls into question the very nature of how computing — the execution of “compute” workloads — works.
“Persistent memory is going to become more and more relevant,” Goel went on. “And at some point, servers with enough persistent memory that can be managed in a scale-out fashion may become a perfectly valid infrastructure building block.”
Ric Lewis, HPE’s senior vice president and general manager for data center infrastructure, elaborated on this: “The goal isn’t that The Machine’s going to be some big box or something that we roll out. The Machine is really a collection of technologies, and a drive to re-evaluate computer architecture based on those technologies. When you have massive pools of persistent memory, you don’t really need a bunch of drives hanging off of it.”
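The boundary Lewis describes is already porous today. A file-backed memory mapping lets a program treat durable bytes as ordinary memory: writes happen through memory operations, yet the data survives without the program ever issuing a conventional read or write against a drive. The sketch below is purely illustrative (the filename is an assumption, and a plain file stands in for true persistent memory), but it shows the programming model in miniature:

```python
import mmap
import os

PATH = "pmem_demo.bin"  # illustrative file standing in for persistent memory

# Create a small backing file, then map it into the process's address space.
with open(PATH, "wb") as f:
    f.write(b"\x00" * 4096)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"   # an ordinary memory write...
    mem.flush()           # ...made durable, with no separate storage call
    mem.close()

# Reopen the file: the bytes persisted, so what we call "storage"
# was being addressed as memory all along.
with open(PATH, "rb") as f:
    recovered = f.read(5)
os.remove(PATH)
print(recovered)  # b'hello'
```

Scale that model out across a fabric of nodes with genuinely persistent memory, and the case for “a bunch of drives hanging off of it” starts to erode.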
By “drives,” Lewis may here also mean “volumes.” When we refer to storage volumes, including in the context of containerized environments and microservices, we’re addressing devices on the network using some kind of mapping that probably, at some level, substitutes for “C:\”. If we don’t need drives or volumes, and the world is full of object storage, then indeed The Machine can be the only machine there is.
Except, that is, on the other side of the HPE campus, where you’ll find The Edge.
“In a world where we’re constantly told, ‘Take data from the [Internet of Things] and send it to the cloud,’ and we’re only given one choice,” asked HPE’s legendary engineer and general manager for the Moonshot line, Tom Bradicich, “why would we ever not send it to the cloud, but rather compute at the edge?”
It was a quiz Bradicich gave his audience, having faith that they had enough experience to provide a set of knowledgeable answers. There were seven that he would accept. First, cloud communication consumes bandwidth. It introduces security issues. It interjects latency. It creates new centers of cost. It duplicates data across volumes, perhaps redundantly. It introduces feedback loops into the communications scheme. And it creates new pain points for compliance.
What was phrased as an argument against the cloud for every workload, could be rephrased as an argument against The Machine for any workload.
“In these mission-critical applications, when you, for example, need one millisecond turnaround time,” explained Bradicich, “you can’t go to a cloud more than 10, 20, 30 miles away. And how many clouds are that close to the edge?”
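Bradicich’s distances are not rhetorical. A back-of-the-envelope calculation (assuming signals in optical fiber travel at roughly two-thirds the speed of light, about 200 km per millisecond, and ignoring switching, queuing, and processing time, which only make matters worse) shows how quickly a one-millisecond budget evaporates:

```python
# Rough propagation-delay sketch for Bradicich's 10/20/30-mile scenarios.
# Assumption: light in fiber covers ~200 km per millisecond (about 2/3 c).
FIBER_KM_PER_MS = 200.0

def round_trip_ms(miles: float) -> float:
    """Round-trip propagation delay in milliseconds, fiber-distance only."""
    km = miles * 1.609
    return 2 * km / FIBER_KM_PER_MS

for miles in (10, 20, 30):
    print(f"{miles} miles: {round_trip_ms(miles):.3f} ms propagation alone")
```

At 30 miles, propagation alone consumes nearly half a millisecond of the round trip, before a single packet is switched or a single instruction executed at the far end. Real-world cloud round trips, of course, run far longer than this idealized floor.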
Bradicich has responsibility for a line of HPE servers called Edgeline. Although they were introduced last year to address Internet of Things use cases, at Discover, he and his colleagues re-introduced them as what he described as an entirely new product category: “converged IoT systems.” One use case he perceives for these edge systems combines machine learning and predictive analytics in real-time situations, such as primary healthcare services (a use case we’ll discuss in further detail in a story to come in The New Stack).
The course Bradicich is plotting for edge computing runs in the opposite direction of architectural evolution from The Machine, with the cloud as its point of origin. And yet all three are being described as one kind of convergence or another. When I asked Ric Lewis and Manish Goel which of these directions represents the true evolutionary path of computing, Lewis responded first by saying that all of these directions are equally valid — as you might expect, depending upon the customer.
“We think all of those are valid,” Lewis said. “We think none of them reign supreme. We think that vendors who try to paint a picture of, ‘One of those is the answer…’ are just missing it. It’s a big, huge business. There’s massive, explosive growth in data that is going to saturate more needs in each of those areas than we can even provide. So I don’t see it as an either/or; I see it as, an explosion of data, and new architectures needed to deal with that explosion of data.”
Then I followed up by stating that three directions can’t be converging if they’re diverging. Why aren’t we calling it “choice”?
“They’re not necessarily in conflict with each other,” responded Goel. “It could be that that workload uses that technology and that consumption model for delivery. However, there may be another workload which requires a different technology, or a different consumption model.” For example, an IoT application could use all edge-based processing, stored in a “The Machine-like” architecture, delivered by a cloud service provider.
The deeper problem that all these architectures are attempting to solve is the proper placement of workloads in the data center. When virtualization first enabled cloud computing, the first cloud platform providers were making the case that every workload would eventually be relocated to the cloud, including those applications executing in real-time. Bandwidth and latency were becoming, we were told, non-issues.
Except they are issues. Which of an organization’s IT workloads were suitable for cloud migration, and which were best left closer to the customer, was a matter best left to the customer to determine for herself, we’ve been told in the past. Except that if storage exists in ubiquitous volumes, then proximity becomes a non-factor, and latency a constant.
The more we’ve attacked the problem of latency in an attempt to eliminate it as an issue, the more of an issue it has become, particularly with respect to real-time workloads. That fact has given rise to edge computing as a valid contender.
There may yet be an eighth factor to consider, to be added to Bradicich’s list: Modern workloads must always co-exist with legacy software. Already, we struggle with the problem of making hypervisor-hosted virtual machines co-exist with containers. If we were ever to migrate to a “The Machine-like” platform, it could only be because we had first resolved the problem of how to make The Machine execute workloads like the cloud or the edge. Put another way, we’d have to make it run OpenStack, Docker, vSphere, and maybe COBOL first.
This is the extent of convergence: not all our platforms fusing together after the stack, as Manish Goel predicts, collapses. All our workloads and software must co-exist. It converges the way three divergent concepts of computing architecture converge at a conference. It takes layers of abstraction to unbind workloads from the machines that execute them (capital “M” or not). Which is why, if and when the stack collapses, inevitably, some platform will pick it all up again.
HPE is a sponsor of The New Stack
Images from Scott M. Fulton III.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Unit, Docker.