Next week, the newly christened Hewlett Packard Enterprise will hold its first Discover conference under the HPE nameplate, in London. As its name indicates, HPE will focus purely on the enterprise, which is good news for its customers: it will allow the company to innovate and respond to the market more quickly. HPE is no longer the company that makes printers and laptops; that’s the consumer products company HP Inc.
But HPE is also, primarily, a hardware company, and so, as with any company with both hardware and software in its portfolio, the CIO must ask where the company’s interests lie, especially if said CIO is chiefly interested in the software side of things (as our readers no doubt are).
Last year, HP made huge investments in OpenStack, largely in order to produce the platform that supports its Helion cloud services. A fair proportion of what OpenStack has achieved over this year was made possible through HP’s generosity.
Yet HP has sold customers Helion — and, by extension, OpenStack — largely through its hardware, via a number of tactics, including designing and selling purpose-built server racks, selling hardware as though it were a service, custom-building servers around specific use cases or for major corporate customers, and, lastly, putting its own spin on Facebook’s Open Compute specifications.
Open Compute is laying down a new and, for server makers, a potentially troubling set of guidelines: Really big data centers that seek to approach Facebook’s or Google’s scale must consider server hardware to be ephemeral, to be cheap and commoditized. If you think Docker has made headway in helping developers to treat virtual servers like cattle rather than pets, Facebook could make headway by helping data center managers to treat physical servers like throwaway batteries.
To stay competitive, HPE needs to address the huge data centers as well as the enterprises. In searching for a way to do so, HP has tried everything it could: subdividing racks in ways they’ve never been divided before; clustering storage and memory with completely new cooling technologies; and working with Intel to devise radical new pipelines for processor interoperability.
When something does stick with a customer, that sale will bring along the software stack: OpenStack, OpenShift, probably Docker, maybe Kubernetes, and very likely microservices architectures. In this way, HPE is triggering major investments in the software we talk about every day by selling CTOs and CEOs on the hardware we talk about hardly at all.
So when The New Stack joins the new HPE in London next week, we’ll be asking four key questions:
1. What is the HPE developer story and how does that relate to its place in the open source ecosystem?
HPE has an investment in making its Helion platform powered by open source, said Omri Gazitt, HPE vice president of products and services for the Helion line, at DockerCon last week. The company views OpenStack as core to its infrastructure strategy and Cloud Foundry as the PaaS layer. That’s a play that speaks to the HPE belief that “infrastructure matters,” and that it needs to influence the software stack on top of it. As for containers, Gazitt said HPE associates their use with “cloud native” scenarios, a term companies use to describe service environments that are application-centric rather than machine-centric. Containers may also serve as a way for companies to preserve back-end systems on premises, where the data resides, with a front end that integrates, for example, with Amazon Web Services (AWS).
That approach raises a question: who will be building the apps that run on AWS? Will it be the developers who have embraced Docker and its ability to build apps without the need for an opinionated system, or those who are more comfortable with a more structured PaaS environment? Cloud Foundry, OpenStack and Docker all have unique ecosystems.
So, where does HP fit? Gazitt says it is in the overlap, as is evident in how it is now integrating Docker into Cloud Foundry. HP uses upstream Cloud Foundry with the Droplet Execution Agent (DEA). A user can bring their own Docker container and deploy it into a PaaS. That approach brings the Docker environment into Cloud Foundry, essentially marrying the different ecosystems.
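As a sketch of what that workflow looks like from the developer’s side, Cloud Foundry’s CLI can push a prebuilt Docker image directly, provided the deployment has Docker support enabled. The app and image names below are hypothetical, and exact flags vary by CLI version:

```console
# Assumes a Cloud Foundry deployment with Docker support switched on
# (an administrator runs: cf enable-feature-flag diego_docker).
# "my-app" and "myorg/my-service" are illustrative names only.
$ cf push my-app -o myorg/my-service:latest

# Check that the image is now running as an ordinary CF application
$ cf app my-app
```

The point of the integration is exactly this: the same `cf push` workflow that handles buildpack-based apps also accepts a container the developer built elsewhere.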
2. Is HPE committed to technologies that benefit heterogeneous deployment scenarios? Nobody expects HPE servers to suddenly revert to bare metal boxes with green rectangles stamped on them (the company’s new logo). The new company can be expected to distinguish its brand from competitors such as IBM, Lenovo, Dell, and Cisco, although performance cannot be the only metric HPE uses to do so.
According to IDC, in the second quarter of 2015, HP commanded more than one-quarter of the worldwide server market in revenue share, widening its gap against long-time competitor Dell. IBM officially lost share in the same quarter, but that’s mainly because it completed the sale of its x86 server business to Lenovo. That move instantly made Lenovo the #4 player in the market, just below IBM and tied with Cisco.
Why does this matter? The new HPE can afford to be bold. It is in a position to create technology for deploying services that makes data centers largely reliant upon HPE tools and techniques. Some quarters may frown on it for doing so, but if HPE’s corporate customers sign on, no one will be counting the frowns. To what extent will HPE cash in on its success? And will developers and DevOps professionals who work in enterprises with HPE servers on-premises be compelled to change their practices in HPE-suggested, or HPE-mandated, ways?
In other words, those who forget the Microsoft modus operandi of the 1990s may be doomed to repeat it.
3. What will Helion and Stackato become under the new brand? Helion started out as HP’s public cloud nameplate, based on OpenStack (IaaS) and Cloud Foundry (PaaS), and offered as an alternative to Amazon. When it looked like Amazon would continue to leave HP in the dust, HP recast Helion as its software stack for running private and hosted clouds. HPE is moving forward with building the stack portion of Helion into a full-scale development platform, with special support for containers and microservices.
But HPE has a special relationship with VMware. Large-scale deployments of Helion involve the use of VMware vSphere Distributed Switches, which bring vSphere workload management into the picture. And as we’ve reported here, vSphere’s idea of containerization is dramatically different from what we’ve come to know as microservices architecture.
Under Project Photon, VMware would wrap containers in protective coatings that make it appear, to vSphere, that they’re ordinary VMs — thus changing how containers would network with one another, making networking dependent upon vSphere’s overlays. With one hand, HPE may be enabling microservices; and with the other hand, VMware may be making interesting little tweaks to them.
And then there’s this item of note: We downloaded an Excel spreadsheet of the Discover 2015 program, complete with descriptions, and in 799 separate session items for next week, the word “Stackato” did not appear once. HP acquired Stackato from ActiveState last July. On November 1, HPE rebranded Stackato as “HPE Helion Stackato.” It continues to be supported and updated, but will it be HPE’s PaaS for OpenStack — the company’s counterpart to Red Hat’s OpenShift?
4. How will HPE’s moves in SDN impact the architecture of open systems? Though we didn’t see “Stackato” in the topic list, we did find some 15 occurrences of “SDN.” As network appliance leader Cisco discovered after its big push into servers, the software nature of SDN reduces data centers’ dependence upon hard wiring and Layer 2 switches.
Your basic Docker network architecture is not all that sophisticated, to be blunt. Each container has its own port, and a bridge on the host forwards packets between those ports, acting as a kind of Grand Central Station. That gets problematic for microservices in a hurry, which is why Docker Inc. acquired SocketPlane, and why Weaveworks has become so prominent so fast. In modern microservices architectures, containers have their own IP addresses.
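To sketch the contrast, Docker’s user-defined networks (new in Docker 1.9, which shipped just before this conference) give each container its own IP address on a shared subnet, rather than funneling traffic through host port mappings. The network and container names below are hypothetical:

```console
# Create a user-defined bridge network; containers attached to it
# each receive their own IP and can reach one another by name.
$ docker network create --driver bridge micro-net

# Run two services on that network (names are illustrative)
$ docker run -d --net=micro-net --name svc-a nginx
$ docker run -d --net=micro-net --name svc-b nginx

# Show svc-a's own IP address on micro-net
$ docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' svc-a
```

With per-container addresses, service-to-service traffic looks like ordinary IP networking, which is the assumption most microservices tooling makes.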
The layer that determines how this local subnet communicates may be run by the orchestrator. Or the underlying hardware of the servers on which these services run could play a role. Intel has been exploring how to make its processors expedite functions in open virtual switches (both Open vSwitch and, until recently, its own proprietary alternative). In the past few years, Intel has been opening more and more of its Xeon processor hardware to customization by its own major customers, HP among them. Intel’s aim is to enable software to dive deep below the operating system and the various services layers, and take direct advantage of functions embedded on its chips.
SDN is Intel’s “killer app” for these server chips. But those capabilities can’t be realized if they can’t be programmed; at some point, these hardware-assisted SDN superpowers need to be surfaced, and that takes vendors like HPE. Why does this matter to you? Because you may be building a microservices application right now that may change, or need to change, within the next year.
The New Stack editor-in-chief Alex Williams contributed to this story.
Docker, HPE, and Intel are sponsors of The New Stack.
Feature Image: London Eye by Luis Llerena, on NegativeSpace.co, under Creative Commons license.