Cloud Native

OpenStack Summit Austin: How Many Different OpenStacks Do We Need?

11 May 2016 8:25am

The New Stack frequently paints a picture of containerization as the new trend sweeping away the old system of virtualization. But recently published research tells us that, even though two developers in five believe a fully containerized infrastructure could replace the hypervisor-driven environments in their data centers, nearly nine in ten mount their container orchestration systems on hypervisor-based infrastructure. Both VMware’s vSphere and OpenStack share responsibility for why.

For a majority of data centers, hypervisors are the support structure for their workloads. While The New Stack has often framed the key evolutionary question around containerization as whether it will eventually supplant hypervisors, for many of the OpenStack customers we encountered at the OpenStack Austin Summit, that’s not the issue at all. Their developers run their container environments inside VMs anyway, asserting that the VM envelope gives containers a layer of security and access protection that containerization may natively lack.

At its core, the principle of OpenStack is to aggregate resources from disparate servers into single pools of compute, storage, memory, and network capacity. Those resources may then be delegated to services in a fashion similar to how customers work with Amazon Web Services. Ostensibly, the purpose of these resources is to support virtual machines, specifically the class of VMs designed to be hosted by hypervisors.
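The pooling idea can be sketched in a few lines: individual servers contribute spare capacity to one logical pool, and VM requests are placed wherever capacity remains. This is a toy illustration of the concept, not actual OpenStack code; the host names and sizes are invented.

```python
# Toy sketch of the resource-pooling principle behind OpenStack:
# disparate servers contribute CPU and RAM to one logical pool,
# and VM requests land wherever capacity remains.
# Purely illustrative -- not OpenStack's actual scheduler.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    vcpus_free: int
    ram_free_mb: int

def place_vm(hosts, vcpus, ram_mb):
    """Place a VM on the first host with enough spare capacity."""
    for host in hosts:
        if host.vcpus_free >= vcpus and host.ram_free_mb >= ram_mb:
            host.vcpus_free -= vcpus
            host.ram_free_mb -= ram_mb
            return host.name
    return None  # pool exhausted for this request size

pool = [Host("node1", 8, 16384), Host("node2", 16, 32768)]
print(place_vm(pool, 12, 8192))  # node2: node1 lacks the vCPUs
print(place_vm(pool, 4, 4096))   # node1
```

In the real system, of course, the Nova scheduler applies configurable filters and weights rather than first-fit placement, but the pooling abstraction is the same.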

“OpenStack sits at a very central position in the total stack,” said Lars Herrmann, Red Hat’s general manager for integrated solutions, in an interview with The New Stack. “If you look at the total solution that’s emerging… what we’re increasingly understanding and finding is, OpenStack is really the infrastructure foundation for a lot of good things that sit on top of it.”

[Photo: Lars Herrmann session, OpenStack Summit]

All For One. But One For All?

This is where we’re often told that there are many types of customers. And Red Hat is clearly aware that multiple customer segments do exist.

“You can think of customers as a pyramid,” explained Red Hat’s general manager for OpenStack, Radhesh Balakrishnan, in a separate interview with The New Stack. “At the top end of the pyramid are customers with enough in-house IT manpower and capacity to take on a journey. Next tier is customers who are comfortable with, ‘Hey, look, I can manage it, but somebody needs to stand it up for me.’ They work with system integrators — we work with Wipro, Accenture, Tech Mahindra.

“The next tier are customers who are looking for, ‘Hey, I need an appliance, or I need a managed service,’” Balakrishnan continued. “While there may be a degree of loss of control, the reality is that these different consumption models are increasing the broad-based approach of OpenStack. That’s a positive way to look at OpenStack, too: It’s going to reach the far corners of the IT universe, which actually wouldn’t have consumed OpenStack because of the friction — they don’t have the people, [or] the know-how. It’d take them years to get skilled.”

Still, Red Hat is aware of the dangers of fragmenting the market too much.

“We really believe, in order [for OpenStack] to sit in this central core, enable all these kinds of workloads, and all these huge hardware solutions around it, fragmenting this into two or multiple ‘spins,’ or ‘forks,’ or whatever, would really be counterproductive,” said Red Hat’s Herrmann. “The need is in creating this vast ecosystem of solutions for network, for compute, for storage, for management, for security, for continuous integration and delivery — all these things assemble around OpenStack as a foundation.

“If we would break this into multiple pieces — like, ‘Here’s your telco OpenStack,’ ‘Here’s your enterprise OpenStack,’ and God forbid, maybe another one for public cloud people — you would actually fragment that ecosystem,” he continued.

Inverted Pyramid

That there are multiple consumption models for a broad-based ecosystem is very difficult to dispute. Today, Balakrishnan says, the dividing lines between consumption models distinguish how OpenStack is serviced.

Thus, Red Hat’s theory is that OpenStack can be a single assembly of components, the proper arrangement of which, for any one customer, may be — to borrow a term used by high-end customers we heard from at OpenStack Summit — “cherry-picked.” Just who does the picking depends, once again, upon the expertise level of the customer: those who lack the expertise will work with integrators to one degree or another, all the way up to having integrators make the decisions entirely on their behalf.

In our talk with Balakrishnan, he characterized OpenStack’s top 1,000 accounts (the ones with spending power) as “having the in-house knowledge, the capacity, as well as the desire, to actually own their destiny.”

Elsewhere in the ecosystem, however, EMC (soon to be part of the rebranded Dell Technologies) has published a reference architecture for manufacturers of “out-of-the-box” OpenStack solutions, including software-defined storage appliances. At a panel session during Day 1 of OpenStack Summit, V.S. Joshi, the CEO of mobile apps lifecycle management company TrintMe who represents EMC at major conferences, spoke of major enterprises as the groups more likely to hire someone or something else to solve the OpenStack configuration problem for them. That portrayal appeared to have less to do with who these customers are than with what they do.

“If you have customers like CERN, AT&T, Yahoo, PayPal, eBay — for these customers, they’re not ever going to go with an appliance model,” Joshi admitted to attendees, “because they want tremendous customization — customization to an n‑th degree… But then, when you come to enterprises, all the enterprise folks don’t have the talent… that can do this thing.

[Photo: V.S. Joshi panel, OpenStack Summit]

“OpenStack, as such, is a very complex thing,” Joshi elaborated. “There are 20-plus projects, and there are 20 million-plus lines of code written over there. The person has to go through hundreds of decisions, as such. Assembling the stack is a problem. After you assemble the stack, maintaining the stack is a problem. The enterprise guy over there — he wants the whole damn thing to run. That’s what he wants. He has all the VM-related experts… What he doesn’t have is someone who can understand Python, or who has done something in Python. What he doesn’t have is… all the hardware expertise. So for an enterprise of certain size — a capacity of half-a-rack to 10 racks — having somebody already figured out all these pain points, already having a certain view of an upcoming solution, that is the best way to go for them.”

It was a statement that an infrastructure architect for a major airline, seated near the front row, immediately rose to challenge. His argument was that out-of-the-box solutions certainly must contain the features that an enterprise’s engineers require to expand them further outside the box; he listed Cloud Foundry and Docker as examples. But they must also contain the other tools these same engineers need to maintain PCI, SOX, and HIPAA compliance while these expansions take place. “Out-of-the-box” solutions tend to present enterprises with a box. The ecosystem, this engineer argued, must and will expand outside the box, or enterprises the size and stature of his will reject it.

The pre-packaged solution argument, however, is not a foreign one to this crowd. Rackspace, which founded the OpenStack Summit and is largely responsible for the platform’s creation, has repositioned itself as a management provider for OpenStack. Competing with Rackspace here are Cisco, with its Metapod service, and Platform9, whose systems engineer Cody Hill — formerly the lead cloud architect at GE — appeared on this same panel.

“The issue that we [GE] had with bringing in the appliance,” responded Hill to the airline systems engineer, “is that we had already validated that we used this vendor for this type of hardware. And if we changed that, we had to change all of our documentation. We had to use this hypervisor vendor because we had already made that decision, Compliance had signed off, and we’re done.”

It was one of the unspoken issues throughout the entire history of OpenStack: While indeed it may present a picture of offering choice and “embracing diversity,” too often the architectural choices are already made. OpenStack fits the bill because it can be adapted to those fixed choices. Compliance issues remove the topic of adaptability from many customers’ lists of options.

Reconnoiter

This realization points to a new and potentially unsettling reality: Because OpenStack can be both highly customizable and highly pre-configured, at opposite ends of the consumption pyramid (precisely which end is the fatter one remains to be seen), the amount of commonality it presents across the entirety of its ecosystem is shrinking. As Red Hat acknowledges, OpenStack is now perfectly capable as a deployment system for a container engine on bare metal, without any VMs involved. And as AT&T is proving, OpenStack can be sliced and diced into a system that enables network functions virtualization for the staging of workloads in the control plane, as well as the orchestration of traffic in the data plane — leaving behind OpenStack’s native network overlay scheme.

As a result, as OpenStack implementations scale up among enterprises, some are saying their individual architectures become workload-specific. Paul Murray, the technical lead for OpenStack’s Nova core compute component and a lead engineer with HPE, acknowledged as much during an HPE forum at OpenStack Summit.


The threshold that implementations may cross to enter this narrow channel, Murray said, concerns continuous integration. He told a story of how he and his fellow HPE engineers re-staged a perfectly functional workload from a traditional OpenStack cloud into a CI/CD-oriented environment, to better automate it as it scaled up. The amount of data being exchanged between services in a single VM, he said, spiked dramatically. So the team ran a number of logged tests.

“The consequence of that was, the storage area network wasn’t specced for that quantity of data being shipped around,” he told attendees. “And it got saturated, and the results were that everything seems to be working fine, but within the VMs, they start to behave as if their disks are full. They’re getting read errors and write errors, and all sorts of things start to fall apart.

[Photo: Paul Murray panel, OpenStack Summit]

“The point of that is, how you spec out your equipment to match the amount of I/O that’s going on, the amount of memory, the amount of disk that’s going to be required, network bandwidth, is all very critical. And you can get things skewed the wrong way and your system’s out of balance. And then you’re going to hit something that’s not going to work.”
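Murray’s capacity-planning point can be reduced to back-of-envelope arithmetic: aggregate the per-VM I/O and compare it to what the storage network was specced for. All the numbers below are hypothetical, chosen only to illustrate the kind of imbalance he describes.

```python
# Back-of-envelope check of the imbalance Murray describes:
# can the storage network absorb the aggregate I/O of all VMs?
# Every figure here is hypothetical, purely for illustration.

def san_utilization(vm_count, per_vm_mb_s, san_capacity_mb_s):
    """Fraction of SAN bandwidth consumed by aggregate VM traffic."""
    return (vm_count * per_vm_mb_s) / san_capacity_mb_s

# 200 VMs at a modest 10 MB/s each against a link specced
# for roughly 1000 MB/s of sustained throughput:
util = san_utilization(vm_count=200, per_vm_mb_s=10,
                       san_capacity_mb_s=1000)
print(f"SAN utilization: {util:.0%}")  # 200% -- saturated
```

Anything over 100% means the SAN is saturated, producing exactly the symptoms Murray recounted: VMs that behave as if their disks are full, throwing read and write errors while the infrastructure itself reports nothing wrong.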

Red Hat swears it will refrain from ever forking OpenStack into customer tier-centered versions. Instead, the company says, it will continue to partner with various providers who each address their customer segments.

But to the airline systems engineer — who admitted to the crowd that executives above his grade at companies everywhere have no idea what OpenStack even is, even if they happen to run OpenStack clouds — precisely how OpenStack is channeled may not amount to a hill of beans. In the end, there may be a small business service that runs virtual machines on KVM hypervisors, a research facility that runs Docker on Kubernetes on bare metal, an airline that runs resource orchestration through a network overlay, and a telecom provider that runs service orchestration through network functions virtualization. The greatest common element between them may be that OpenStack serves as their deployment bootstrap.

At some point, the narrow creek needs to rejoin the raging river. As both Cody Hill’s and Paul Murray’s experiences attest, once it comes time to scale up and evolve, it will be up to the customer to navigate a way out of its narrow channel. That job will only be more difficult if the customer isn’t sure how it got into that channel to begin with.

Unification sounds good on paper, but it’s meaningless without a plan to bring people together into the unified system without them having to chart the course themselves.

Cisco, Cloud Foundry, Docker, HPE and VMware are sponsors of The New Stack.
