
HPE’s Developer Story: Built on Hardware, Defined by Software

11 Dec 2015 10:33am

Open source is not a developer story in itself. Neither, for that matter, is an investment in OpenStack or in containerization. You can’t just be transparent and expect automatically to be interesting; otherwise, the United Nations would be more riveting than “Breaking Bad.” A company that has a developer story, the way we see it, builds a platform for developers to work on, and gives them the tools and the encouragement they need to take full advantage of it.

We came to HPE Discover 2015 in London last week in search of Hewlett Packard Enterprise’s developer story. Can a hardware company born from the loins of … well, another hardware company, which intends to embrace the principle of software-defined infrastructure, open itself up to the needs and wants of the people who would produce the software upon which the world’s server platforms, including HPE’s, would rely?

Last week, we began our coverage of this conference by putting forth the four questions we intended to ask most pointedly of HPE and its partners. Here is what we found:

1. The HPE developer story and its place in the open source ecosystem. The most telling aspect of the responses we received to the question of open source came from how easily the underlying concept got reversed: Open source, we were told, has a very important place in the HPE ecosystem.

There’s an element of scale there that bears noting, one reminiscent of the many years leading up to Windows Server 2008, when open source was said, by folks like Steve Ballmer, to be critical to the Microsoft ecosystem.

HP SVP for Cloud Bill Hilf

“Docker is an amazing technology, but in and of itself, it is not an application platform,” said Bill Hilf, HPE’s senior vice president and general manager for cloud platforms. “It’s a container technology. We use Docker deeply inside our Cloud Foundry product called Helion Dev Platform. You can use it with Kubernetes, Marathon, lots of different ways that you can skin that.” Hilf continued:

“For the general-purpose enterprise developer, which is really our target, we’re not going after the science shop that’s trying to do all their own application platform.”

“The general-purpose enterprise developer who wants .NET support, they want Java support, they want something consistent, they want it heavily tested, they want a long-term support model against it — all of those characteristics of the enterprise buying patterns — for that developer, they will use Cloud Foundry,” Hilf said. “They will use Docker, I think, within a larger construct of an application platform.”

The picture Hilf drew appears perfectly conventional: A general-purpose developer is skilled with a language or a set of languages. Such a developer seeks “consistency,” to borrow Hilf’s term — something that doesn’t upset the order of things, or demand too many new skills. Cloud Foundry appeals to that need for consistency, and from this vantage point, Docker and containerization are merely a deployment model. This is how HPE’s stack is stacked.

“It’s about your right mix,” stated Paul Miller, HPE’s vice president of strategic marketing, in a session with reporters. “And your right mix can be a Linux workload, an Azure workload. Of course, we’re a big believer in OpenStack, we’re a big investor in OpenStack. But it will not be an OpenStack-only world or a Linux-only world or an AWS-only world or a Eucalyptus world … it’s going to be everything. So clearly Azure is one of the major players in that space; we need to be able to ensure those workloads work great on our on-prem infrastructure, as well as the ability to grow and manage them off-prem.

“It’s all about choice,” Miller continued, “but doing it in a way that makes that customer’s experience simpler and more predictable.”

2. HPE’s commitment to technologies that benefit heterogeneous deployment scenarios. There’s a very clever answer to this key question, one which reminds us that the “Hewlett” and the “Packard” in the new company’s name are not there for mere adornment.

As we noted before the conference, HPE is in a leadership position in the server market, and can afford to be bold. It has it within its power to produce a line of servers that creates a dependency within data centers upon HPE administrative tools. The company is clearly attempting to do that by building a line of Synergy servers whose administrative console, HPE OneView, is fused into its firmware.

But does this fusion extend to the application environment, forcing containers and Cloud Foundry apps to conform to HPE’s rules of the assembly line? The straight answer is no, and the clever answer is why.

As HPE engineers demonstrated to The New Stack last week, Synergy’s so-called “composable infrastructure” produces compartmentalized units of compute, storage, memory, and bandwidth resources out of the bare metal components of pooled-together servers. But from the perspective of both the application and the platform on which the application is hosted, these units look like servers — not virtual servers, physical ones. These compartmentalized units are not VMs. Indeed, a first-generation virtualization environment like VMware vSphere would provision VMs from these apparently physical resources.
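To make that concrete, here is a toy model of the composability idea, a Python sketch of our own devising rather than anything HPE ships: resources live in enclosure-wide pools, and a “server” is simply a slice drawn from each pool on demand.

from dataclasses import dataclass

@dataclass
class Pools:
    # Resources pooled across the enclosure, not locked inside any one box.
    cpus: int = 512
    memory_gb: int = 4096
    storage_tb: int = 200

@dataclass
class ComposedServer:
    # To an OS, hypervisor, or container host, this looks like a physical machine.
    cpus: int
    memory_gb: int
    storage_tb: int

def compose(pools: Pools, cpus: int, memory_gb: int, storage_tb: int) -> ComposedServer:
    # Carve a 'server' out of the shared pools; fail if any pool runs dry.
    if cpus > pools.cpus or memory_gb > pools.memory_gb or storage_tb > pools.storage_tb:
        raise RuntimeError("insufficient pooled resources")
    pools.cpus -= cpus
    pools.memory_gb -= memory_gb
    pools.storage_tb -= storage_tb
    return ComposedServer(cpus, memory_gb, storage_tb)

rack = Pools()
web_server = compose(rack, cpus=8, memory_gb=64, storage_tb=1)
print(web_server)  # software above this line treats the result as bare metal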

A Docker environment, meanwhile, integrates seamlessly with the physical platform by way of a Docker plug-in created by HPE in consultation with Docker Inc. So, as engineers assured us, any orchestrator, including those that Bill Hilf listed, will integrate with container environments in Synergy just as if the servers hosting them were bare metal.
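HPE did not detail the plug-in’s interface in our briefings, so what follows is only a sketch of the generic consumption pattern, written with the Docker SDK for Python; the driver name “hpe-synergy” and its options are placeholders of our own, not HPE’s actual plug-in identifier.

import docker

client = docker.from_env()

# A vendor storage plug-in registers itself with the Docker daemon as a
# volume driver; the name below is hypothetical.
volume = client.volumes.create(
    name="app-data",
    driver="hpe-synergy",
    driver_opts={"size": "10GiB"},  # option keys are defined by the plug-in, not Docker
)

# The container, and any orchestrator above it, mounts the volume as
# ordinary storage; the plug-in maps it onto the hardware underneath.
client.containers.run(
    "nginx:latest",
    detach=True,
    volumes={volume.name: {"bind": "/data", "mode": "rw"}},
)

That division of labor is the point: the orchestrator sees nothing but standard Docker primitives, which is why Synergy can look like bare metal from above.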

“The next innovation — composability — is really an architecture where you bring together the hardware and the software. It’s software-defined everything,” explained Chris Cosgrave, HPE’s worldwide chief technologist, during a Wednesday session. “It’s unlimited scalability there, and really it’s the first time where we’ve started to treat infrastructure purely as code.”

HPE’s Chris Cosgrave and Alastair Winner

Cosgrave was outlining the historical progression from a data center where the resource capabilities are all defined by hardware, to one where they are all declared in software.

“Composable infrastructure is like a Rubik’s Cube,” he said, “where you can twizzle around any combination of storage, compute, and fabric to support the particular needs that you require there. So what’s our vision here? Complexity is driven by the physical infrastructure. So what we’re trying to do is provide a cloud-like experience for IT, so that they can really create and deliver value continuously and instantly.”
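In practice, “infrastructure purely as code” means a server is requested declaratively through a management API rather than racked and cabled. The Python sketch below is loosely modeled on the style of OneView’s REST interface, but the hostname, credentials, endpoint paths, and field names are illustrative, not copied from HPE’s documentation.

import requests

ONEVIEW = "https://oneview.example.com"  # hypothetical appliance address
session = requests.Session()

# Authenticate once; the appliance returns a session token for later calls.
auth = session.post(ONEVIEW + "/rest/login-sessions",
                    json={"userName": "admin", "password": "secret"}).json()
session.headers["Auth"] = auth["sessionID"]

# Describe the server we want as data: compute matched to a hardware type,
# wired to a network fabric, all composed from the pool.
profile = {
    "name": "web-tier-01",
    "serverHardwareTypeUri": "/rest/server-hardware-types/example",
    "connections": [
        {"portId": "Auto", "networkUri": "/rest/ethernet-networks/prod"},
    ],
}
resp = session.post(ONEVIEW + "/rest/server-profiles", json=profile)
resp.raise_for_status()  # the appliance now assembles the unit we described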

We asked Cosgrave and his colleague, HPE vice president Alastair Winner, whether the OneView environment would take on the management of containers itself, or whether it would be adapted over time to co-exist with orchestrators like Kubernetes.

“OneView is about the bare metal, in terms of managing the physical infrastructure there,” Cosgrave responded. “Certainly we worked extensively with Docker to build the connections between OneView and their technology,” Winner added, before stopping short of giving away the announcement that CTO Martin Fink was about to make during his Day 2 keynote.

Up to this point, HPE engineers had presented us with a picture of a bare metal environment and an application environment, decoupled at a very low level by a layer of abstraction — a decoupling which should please microservices architects. But later that afternoon, Fink went on to discuss a strange unification of the two layers, one which confused us anyway: a unification made possible, as Fink described it, by the elimination of all that pesky isolation that makes security on container platforms so messy.

It would not be the first time that a number of chefs within an organization came into the kitchen, each with his own recipe for the same dish.

3. The Future of Helion and Stackato under the new brand. The very first thing we learned from Discover was that HPE and Microsoft would extend their existing partnership to make Azure, not Helion, the new preferred public cloud for HPE server platforms. “Preferred” in this context means exactly what it used to mean back when Microsoft doled out prime icon locations on the default Windows desktop: Unless you make an explicit choice of Azure, Google Cloud, or whatever you care to bring to the table yourself, OneView and other HPE tools will select Azure by default.

Helion becomes the company’s brand for developing services on hybrid cloud platforms that involve HPE’s hardware. As Bill Hilf explained it, Helion’s main job will be to “take the complexity out of an OpenStack deployment.” Telcos and Internet service providers requested that multiple data center deployments of Helion OpenStack be automated, Hilf noted, and HPE responded last month with a new version that addresses those requests.

“We use Docker deeply inside our Helion Dev Platform,” Hilf added, “so every application that you build inside that development platform instantiates as a Docker container. And we added a new technology to this release called the Helion Code Engine, which one customer told me is like ‘CI/CD for Dummies.’ When a developer does a merge inside a git repository, we have a Code Engine that does an automated build, test, and deploy around that code, making it very, very easy for enterprise customers to bring to their developers the power of DevOps with a true CI/CD toolchain embedded inside that development platform.”
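Hilf did not describe the Code Engine’s internals, so the sketch below is a generic rendering of the trigger pattern he outlined: a webhook fires when a merge lands in a git repository, and a pipeline runs build, test, and deploy in sequence. The payload shape and the make targets are hypothetical, not HPE’s.

import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class MergeHook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")

        # Act only on merges that land on the mainline branch.
        if event.get("ref") == "refs/heads/master":
            for step in (["make", "build"], ["make", "test"], ["make", "deploy"]):
                subprocess.run(step, check=True)  # halt the pipeline on any failure

        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), MergeHook).serve_forever()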

Helion is, more and more, the term used to describe HPE’s developer story. Yet Stackato is clearly fading into the background, contrary to what HP was promising last August when it acquired the PaaS platform from ActiveState. HPE’s Stackato page now bears the blazing headline, “HPE embraces Docker as the engine for HPE Helion Development Platform.” (This from the company whose cloud chief said Docker was not an application platform.) Yet that headline links to a blog post from HPE cloud evangelist Stephen Spector, explaining how Docker is being leveraged to produce a managed Cloud Foundry service, not a managed Stackato service.

Stackato’s documentation still exists, hosted by HPE, but the service is beginning to look a lot like a deprecated one. The name was barely mentioned anywhere on the Discover show floor.

4. HPE’s moves in SDN and how they impact the architecture of open systems. It was clear, with respect to the topic of containerized workloads, that not every HPE executive we spoke with completely understood the message that the company’s own engineers were articulating: Decoupling of resources is the key to the success of composable infrastructure. It enables data centers to continue hosting what they’re hosting while simultaneously enabling IT to provision exactly the servers needed, when they’re needed.

So it should be no surprise that this equally important question, with respect to software-defined networking, barely got a few moments of floor time. The real answer to this question lies in some of the documentation HPE published last week, one piece of which describes the critical Composer component of Synergy — the hardware module that handles service provisioning through OneView in firmware.

One piece of documentation [PDF] contains a phrase for SDN services that is so new that a Google search for just the first three words of the phrase turned up only this document. It’s the telling phrase, “Fabric disaggregation optimizations to match resources to workloads.”

There are two possible things this could mean. One, HPE could have written all its favorite words down on cubes, shaken them up in a cup, spilled the cup, and written down all the words that landed face-up.

It could also refer to the concept of disaggregation as applied to fabric and other hardware resources, as proposed in this 2013 UC Berkeley research paper [PDF]. Entitled “Network Support for Resource Disaggregation in Next-Generation Datacenters,” the paper actually does such a good job of explaining the goals of HPE’s composable infrastructure initiative that HPE may want to give UC Berkeley a phone call.

The current datacenter usage model is heavily based on the server-centric architecture. While physical servers in datacenters have evolved to server virtualization or other comparable technologies, they are still all centered around the concept of “server,” which aggregates slices of hardware resources within a server. The operators/schedulers plan virtual machines (VMs) to meet the computational demands and place jobs across the VMs.

In contrast, the usage model of a disaggregated datacenter does not necessarily follow the same approach; since computation, storage, and I/O functions can be completely disseminated across the datacenter, we do not need to restrict our usage model within the VM-oriented architecture. However, we note that the VM model can be still useful, as in this way we can leverage the existing software infrastructure, such as hypervisors, operating systems, datacenter middleware, and applications with little or no modification. Thus in this paper we assume that computational resources are still utilized by aggregating them to form VMs, while each resource is now physically disaggregated across the datacenter.

This is exactly the message that HPE engineers worked to give us: The new stack, to the extent that it refers to software, does not have to change to be better facilitated by disaggregated hardware. Think of disaggregation of hardware resources as analogous to decoupling of software resources. As the Berkeley team points out, this creates a situation (perhaps a bit dangerous from HPE’s perspective) where customers are free to choose disaggregated components from any of multiple vendors. This approach can apply to SDN just as easily as it applies to containerization.

Channel to the Tower Bridge

If there is any key message we learned from HPE Discover 2015 in London (besides the fact that London is a magnificent city), it is that hardware vendors are paying attention to software developers more than at any time this century. Now, for us to comprehend the changes taking place beneath the platforms that support our software, we had better pay closer attention to the hardware side of the stack.

Docker and HPE are sponsors of The New Stack.

Photos by Scott M. Fulton, III.
