
HPE Orchestrates an Internet of Things at ‘the Edge’

Jul 14th, 2016 7:07am by

We often talk about hybrid cloud computing as though every server can be homogenized, every workload can be disaggregated, and whether a task is staged in Okinawa or Okmulgee has no impact on its performance or even its outcome. We talk about orchestration at scale as though a magnified data center does not become more susceptible to the laws of physics.

But as we saw at Intel Cloud Day last March, there are organizations whose workloads are more susceptible to the latencies introduced by continually scaling workloads and variably sized processor caches. They can’t process their workloads on ordinary cloud platforms.

We sometimes characterize these organizations as exceptions to the rule, or minority use cases. But as IBM, Intel, and now HPE are demonstrating for us, these are not outliers. They’re organizations with at least a couple of zeroes beside their names on the Fortune list, and maybe just one.

“We all hear about IoT [the Internet of Things], and a lot of times, we think about it in a consumer sense. I’ve got one of those watches on that will track how many steps that I take.  And that’s a kind of consumer view,” explained Ron Neyland, HPE’s director of software and solutions, during a session at the recent HPE Discover conference in Las Vegas. “We’re focused on things out at the edge, in the industrial world. What we’re trying to do is help our customers take information that they have previously not acquired and not collected, and actually bring that in and gain new insights that help them add value to their business.”

Neyland is describing what HPE now calls an edge computing scenario: a use case where, the company proposes, it makes much more sense to have processors and application workloads closer to the client than in some far-off data center or public cloud. It’s the “smart client” argument, reborn.

More than Hamburger

“In a world where we’re constantly told, ‘Take data from the IoT and send it to the cloud’ — which is not bad, don’t get me wrong, but when we’re only given one choice — why would we ever not send it to the cloud, but rather compute at the edge?” asked Tom Bradicich, HPE’s vice president and general manager for a server line called Edgeline.

“Why would you send everything back to a data center, which could be miles away, across borders, exposing you to security threats, corruption, take a lot of time, be costly, and burn up bandwidth? Why would you do that all the time, when you can take all that big data coming from the things, and compute right here at the edge?”

I gave Bradicich the counter-argument, as it has frequently been presented to me by experts in the containerization and orchestration space. They believe in what has occasionally been dubbed a “FedEx logistics” system, figuratively speaking, where all packets are sent to a central source (a “Memphis,” if you will) before being redistributed to their final destinations. If you centralize your workloads on a single backbone, they say, you cut out the middleman, you eliminate much of the excess middleware, you reduce bandwidth consumption as a whole, and it’s easier to secure because the number of steps in the process is reduced.

Besides, they argue, IoT sensors often don’t operate in real-time because they’re low-power, so they send their data in periodic batches rather than in streams. I then asked Bradicich, why are they wrong?

Tom Bradicich, HPE VP/GM for the Edgeline server line

“That answer is like me talking about cuisine, and explaining to you a hamburger, and then stopping,” the veteran of the old HP and IBM responded. “What about Italian food? What about soup? The IoT is huge. And to be so naïve as to say there’s no real-time response required! Whoever says that, probably sells that. And they only sell hamburgers, so they say you gotta have hamburger all three meals.”

In Internet routing, “the edge” refers to servers stationed closest to the client. A content delivery network, such as the kind operated by Akamai, relocates high-volume content to more accessible locations. This is almost what Dr. Bradicich is referring to here. Imagine the borderline in the Internet that serves as a demarcation point between the service providers’ domain and your client domain – if it helps, picture a river between the two. The edge of a CDN is on the opposite side of the river. Bradicich’s edge is on your side.

Granted, HPE is in the business of selling hardware. Edgeline is a range of servers geared toward IoT applications. Its value proposition is based on the idea that x86 servers make better hubs for distributed sensors and embedded devices in an IoT application than some dedicated appliance – or, as is more often the case these days, a virtual IoT appliance in the cloud.

But Bradicich’s use case arguably makes some sense. In an IoT app, data is periodically dispatched from sensor devices (or, as may be the case with smartphones or tablets, devices with sensors), often to servers running Hadoop big data engines or Apache Spark analytics engines, but sometimes to data warehouses. There, the data is continually evaluated by an analytics engine — and in a growing number of deployments, by machine learning systems.
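To make the shape of that pipeline a bit more concrete, here is a minimal, hypothetical sketch in Python of the edge-side half of such an application: a gateway that takes one periodic batch of sensor readings, computes a compact summary locally, and forwards only that summary to a central analytics endpoint rather than shipping every raw reading upstream. The endpoint URL, field names, and alert threshold are illustrative assumptions, not anything HPE or its partners described.

```python
# Hypothetical edge-gateway sketch: summarize a sensor batch locally and
# forward only the summary upstream. Endpoint, fields, and threshold are
# illustrative assumptions, not HPE's design.
import json
import statistics
import urllib.request

ANALYTICS_ENDPOINT = "https://analytics.example.com/ingest"  # assumed URL
ALERT_THRESHOLD_C = 85.0                                     # assumed limit


def summarize_batch(readings):
    """Reduce a batch of raw temperature readings to a compact summary."""
    temps = [r["temp_c"] for r in readings]
    return {
        "count": len(temps),
        "mean_c": statistics.mean(temps),
        "max_c": max(temps),
        "alert": max(temps) > ALERT_THRESHOLD_C,  # decided at the edge, no round trip
    }


def forward_summary(summary):
    """Ship the compact summary upstream instead of every raw reading."""
    payload = json.dumps(summary).encode("utf-8")
    req = urllib.request.Request(
        ANALYTICS_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # One periodic batch of synthetic readings from ten sensors.
    batch = [{"sensor": i, "temp_c": 20.0 + i} for i in range(10)]
    print(summarize_batch(batch))  # in production, forward_summary() would send this
```

The point of the sketch is only the division of labor: the decision (the alert) is made next to the sensors, and the cloud or data center sees a handful of bytes per batch instead of the whole stream.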

An IoT in the Desert

When you consider the amount of data that needs to be addressable as a single, contiguous volume by multiple clients simultaneously, you realize that an asynchronous network scheme — which is what the Internet is — may not always work to your advantage.

One case in point emerged from an engineer in the audience: the Atacama Large Millimeter / submillimeter Array – a massive radio telescope array stationed on the Chajnantor Plateau in the Andes mountains of northern Chile. It’s an Internet of Things application, but it’s not obvious what the “things” are until you look closely.

For the radio telescope to remain fixed on a target signal, about 800 mirrors, called octagonals, need to move in a coordinated fashion. Their movements are so definitive and their positioning so precise that air temperature plays a role. Real-time acquisition and control are absolutely required. And there are a number of factors preventing the control programs from being stationed in a data center at an altitude of 16,000 feet in the middle of the Atacama Desert.
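A rough back-of-the-envelope calculation, using my own illustrative distances rather than ALMA’s actual topology, shows why proximity matters for a control loop like this: light in optical fiber covers roughly 200 kilometers per millisecond, so routing the loop through a data center 1,000 kilometers away costs about 10 milliseconds per round trip in propagation delay alone, before any queuing or processing, while a controller sitting beside the antennas pays effectively nothing.

```python
# Back-of-the-envelope round-trip propagation delay. Distances are illustrative
# assumptions, not ALMA's actual network layout.
FIBER_KM_PER_MS = 200.0  # light travels roughly 200 km per millisecond in fiber


def round_trip_ms(one_way_km):
    """Propagation delay alone, ignoring queuing, routing, and processing."""
    return 2 * one_way_km / FIBER_KM_PER_MS


print(round_trip_ms(1000))  # ~10.0 ms to a data center 1,000 km away
print(round_trip_ms(0.1))   # ~0.001 ms to a controller 100 meters away
```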

It’s a use case that caught the attention of James Truchard, Dr. Bradicich’s long-time colleague, and the founder and CEO of National Instruments. That’s a manufacturer of data acquisition and measurement systems, and a partner of HPE in the implementation of a distributed process architecture that Bradicich calls “deep compute.”

James Truchard, founder and CEO of National Instruments

“Vision is the killer app,” Truchard told me, “that will provide processing the bandwidth that’s being used because you’ll put vision everywhere. There’s no reason not to.  It’s cheap, the cameras are cheap, but you’ll need the processing somewhere. If you’re building a house, you’ll put cameras everywhere so you can watch it being built. So cameras will be on machines — anything you want to monitor.  And you may want to monitor autonomously, too.”

Whose Cuisine Reigns Supreme?

From a technical standpoint, Bradicich confirmed, there’s nothing stopping an organization from deploying a private cloud platform such as OpenStack on edge computing systems. In fact, there may be every reason to do so. So this is not a dispute about restricting the reach of cloud dynamics.

Nevertheless, HPE now has three models of computing that it is promoting simultaneously. There’s “the cloud,” which it envisions as a centralization of resources within a fluid data center whose size and location are variables; there’s “The Machine,” the company’s model for re-envisioning the purpose of memory and storage, whose explanation was recently presented by a group of Starfleet officers; and now there’s “the edge,” in which CPUs and GPUs absorb distributed workloads away from centralized data centers.

At a gathering of HPE senior vice presidents, I asked them where one model ends and the other begins.

Left to right: HPE SVP/GM for Storage Manish Goel; HPE SVP/GM for Data Center Infrastructure, Ric Lewis

“We think all of those are valid,” responded Ric Lewis, HPE’s senior vice president and general manager for data center infrastructure. “We think none of them reign supreme, and we think that vendors who try to paint a picture of, one of those is the answer, or ‘the cloud’ is the answer to everything, or IoT is the answer — they’re just missing it. It’s a big, huge business. There’s massive, explosive growth in data that is going to saturate more needs in each of those areas than we can even provide. So I don’t see it as an either/or; I see it as an explosion of data and new architectures needed to deal with that explosion.”

“IoT is a workload,” Manish Goel, HPE’s senior vice president and general manager for storage, responded to my follow-up, redrawing the three models as interlocking segments. “Machine is a technology or a product. Cloud is a delivery model or a consumption model. They’re not necessarily in conflict with each other; it could be that that workload uses that technology and that consumption model, to be delivered. However, there may be another workload which may require a different technology or a different consumption model.

“It could very well be that the IoT uses on-edge processing,” he continued, “which is stored on persistent memory, Machine-like architecture, delivered by a service provider — whether it’s a public or a private cloud — for what happens to that data, ten petabytes per cloud per day.”

The point these HPE engineers and executives have just made is that there is no homogeneous data center architecture that fits all use cases. Of course, that’s good news for anyone who produces a variety of different hardware types. But it’s a compelling counter-argument to the proposition that every operational model conceivable can be organically transferred to the same platform. So if you see certain pieces of the cloud being deployed just on the other side of the proverbial river from you, now you know why.

HPE is a sponsor of The New Stack

Feature image: The Atacama Large Millimeter/submillimeter Array (ALMA), courtesy of the European Southern Observatory, in the public domain.
