Modern Hardware’s Role in a Software Driven Data Center
While Hewlett-Packard began its technology endeavors making audio test equipment, it engineered its first instrumentation computer in 1966. Sold to the Woods Hole Oceanographic Institution and used aboard research vessels for over a decade, the machine was designed to interface with over 20 HP instruments, making it essentially the first iteration of plug-and-play integration as we know it. This is all the more impressive given that the HP 2116A had a mere 4K of main memory and a 20MB hard drive.
In this episode of The New Stack Makers embedded below, we’ll learn more about recent HPE hardware, notably the latest Cloudline servers, as well as HPE’s involvement with the Open Compute Project and OpenStack, and what to consider for a more unified DevOps workflow. The New Stack founder Alex Williams sat down with Dave Peterson, HPE group manager for Cloudline Server Products, during the HPE Discover 2016 conference to hear more from him about these topics.
Peterson launched the discussion by addressing not only how software scales, but also the hardware it runs on. He explained that the evolution of today’s developer workflow demands a vastly different approach than the traditional data center. “What we’re seeing is an evolution from the traditional data center monolith, which is highly scaled, very unreliable in an area where hardware reliability is of the essence, with lots of engineering involved,” he said.
Peterson went on to note that developing software at scale requires a new approach to today’s infrastructure, alluding to the old ‘pets vs. cattle’ analogy. “You can’t be effective in applying the traditional ecosphere of the data center to a cloud world. You have to change the way you manage the systems and think about it in an entirely different way. Everything from automation of deployment, automation of application updates, how you serve products, and how you maintain servers.”
HPE is addressing these issues by not only creating new platforms and services, but also the new hardware to power them. One such piece of hardware is the HPE Cloudline series of servers. However, Peterson was clear that, contrary to popular belief, Cloudline is not a ProLiant server. “Cloudline is about a very different scale of infrastructure. It centers around cost, openness, and how you think about application management and application deployment. It builds the standard cookie cutters that a ProLiant server would have, and it does the job so you can devote more resources to application development, updating applications, and the whole DevOps cycle.”
#hpediscover: Dave Peterson of @HPE_Servers: “You have to change the way you manage your systems." pic.twitter.com/rp4UYOSWLH
— The New Stack (@thenewstack) June 8, 2016
Peterson also noted that HPE has a variety of servers built around the Helion OpenStack world, which dovetails well with its contributions to the Open Compute Project. The teams at Helion and Cloudline have continued to join forces to provide a better experience for developers, end users, and IT teams working these servers into their own architectures.
“We’re joining together to make sure we have the right architecture going forward. We drive a joint roadmap. We want to be the best when you put HPE with HPE. We keep working together so we can drive the hardware to match that,” Peterson said.
HPE is a sponsor of The New Stack.
Feature Image: The HP 2100, by ESO, CC BY 3.0.