Now that OpenStack appears to have ascended to the role of the world’s principal platform for virtualized cloud infrastructure, can the same stack also orchestrate traffic for major data networks?
In a wide-ranging interview at the twice-annual OpenStack Summit in Austin, Texas, Red Hat’s general manager for OpenStack, Radhesh Balakrishnan, acknowledged that the time may have come to consider the merits of an OpenStack infrastructure that addresses the specific needs of telecommunications providers and high-bandwidth data traffic sources.
“I want to acknowledge that the central focuses of these two constituents — I will call everybody outside of ‘telco’ as ‘enterprise’ — are slightly different now,” stated Balakrishnan.
Slightly different, but not entirely or even mostly so. That said, Red Hat’s OpenStack leader does believe that network functions virtualization (NFV), in which virtual infrastructure hosts the delivery functions of high-speed networks, will present use cases for the enterprise in a “trickle-down” effect (affirming a term we had used in our question to him).
“If you look in terms of the features and functions that the telcos are asking for,” said Balakrishnan, “they revolve around performance, latency, and determinism. Of these three, performance and determinism clearly are things that enterprises will demand as well. They may not need it as urgently as the telcos, but they will need it in future.”
The New Determinism
Determinism is the ability to predict reliably the performance of workloads, especially at scale. It has been a feature of microprocessor engineering since its inception. Late last month in San Francisco, at Intel’s Cloud Day 2016 event, we discovered how the NASDAQ stock market had intentionally been operating two data centers: one for low-priority workloads it could safely virtualize, and the other for high-priority workloads that had to be run on bare metal, in order to ensure maximum determinism.
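To make determinism concrete: one common way to quantify it is to compare latency jitter (variance) and tail latency between hosting modes. The following is a minimal sketch using invented latency figures purely for illustration; none of these numbers come from NASDAQ or Intel.

```python
import statistics

# Hypothetical per-request latencies in milliseconds for two hosting modes.
# The values are invented for illustration only.
bare_metal = [1.0, 1.1, 1.0, 1.2, 1.1, 1.0, 1.1, 1.0]
virtualized = [1.0, 3.5, 0.9, 6.2, 1.1, 4.8, 1.0, 2.9]

def jitter(samples):
    """Standard deviation of latency: a rough proxy for (non-)determinism."""
    return statistics.stdev(samples)

def p99(samples):
    """99th-percentile latency, the kind of tail figure telcos bound in SLAs."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return ordered[idx]

for name, samples in (("bare metal", bare_metal), ("virtualized", virtualized)):
    print(f"{name}: mean={statistics.mean(samples):.2f}ms "
          f"jitter={jitter(samples):.2f}ms p99={p99(samples):.2f}ms")
```

The means of the two series are comparable; it is the jitter and the tail that diverge, which is why a workload can look fine on average yet be unacceptable to an operator that must guarantee worst-case behavior.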
Virtualization renders the performance characteristics of larger workloads too variable to be reliably predicted. It’s the main reason why, up to now, telcos have been reluctant to consider OpenStack, or any other virtualization platform, including Docker, for any high-priority purpose. Why does the support of telcos matter, given that there are so few of them relative to, say, retail department stores? Because telco engineers are among the platform’s most important contributors, as evidenced Monday by AT&T’s receiving the platform’s fourth annual SuperUser award.
One of the key contributions such customers are making to the platform, said Balakrishnan, centers on operational management. “That is going to have an amazing value-add to the enterprise customers too,” he told us.
“NFV-equals-telco is not true either,” he added. “Anybody who’s got some decent amount of traffic handling to do will benefit from NFV: some of the large banks, as well as some of the public sector accounts which have their own three-letter agencies, and their own private, secure networks — they’re not calling it NFV. They’re just saying, ‘I want to bring cloud into managing my network infrastructure.’ But they’re realizing that the benefits of the NFV focus in OpenStack will help them too.”
Infrastructure Can’t Solve Everything
One reason, telecom engineers have told us, that OpenStack has not been considered for hosting high-bandwidth workloads is that the newer of its two networking components, Neutron, has not yet proven fast enough for their purposes. Balakrishnan acknowledged that OpenStack’s evolutionary path to date has centered on Neutron for Layer 2 networking: moving data across physical links (the “underlay”), as opposed to tunneling through the virtual overlays built on top of them.
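In Neutron terms, that distinction surfaces in the ML2 plugin’s type drivers, where “flat” and “vlan” map traffic directly onto physical links while “vxlan” builds tunneled overlays above them. A representative (not Red Hat-specific) `ml2_conf.ini` fragment illustrates the choice:

```ini
[ml2]
# Physical (underlay) network types alongside the VXLAN overlay type
type_drivers = flat,vlan,vxlan
# New tenant networks default to tunneled overlays
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
# VXLAN Network Identifier range allotted to tenant overlays
vni_ranges = 1:1000
```

Telco-grade deployments often lean on the provider (underlay) types precisely because tunneling adds encapsulation overhead and another source of latency variance.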
“Now, when you’re talking about more complex networking topologies,” he stated, “that’s an area where Neutron is going to get richer.” He added his belief that OpenDaylight was emerging as an open source complement to OpenStack for this purpose, not a rival, which is what prompted Red Hat to include OpenDaylight in its technology preview release of OpenStack.
“It’s not that the door is shut, and there’s no answer for what you’re looking for — progress is already being made to address those use cases,” he said. But he cited another distinct difference between the level of innovation upon which OpenStack development concentrates (the infrastructure layer) and the virtual network functions layer, which sits just above infrastructure on the telco stack. And the evolution of that higher layer may be on a separate track altogether.
“OpenMANO is a very rich area for disruption,” the Red Hat GM told The New Stack, “because if you look at the classical ETSI architecture, the lowest level is the infrastructure layer. That’s OpenStack. The VNF layer — the network function layer — that’s going to be a long cycle of re-innovation. Or it could stay proprietary, who knows? Then the layer above that is the management and orchestration layer, which becomes an interesting spot. I think that place is ripe right now for disruption to begin because that’s another lock-in point in the stack,” he said, using a phrase frequently used here to refer to certain single vendors’ proprietary objectives.
Balakrishnan’s comments open an interesting period of introspection at this week’s OpenStack Summit, from which The New Stack is reporting all week.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.