During the Tokyo OpenStack Summit, The New Stack hosted a Bento Box Luncheon to discuss “What Makes OpenStack Thrive in the Enterprise?” which we recorded for this edition of The New Stack Analysts podcast. Here we learned about Intel’s humongous OpenStack deployment, and how the company’s Clear Containers project came about.
Joining The New Stack’s Alex Williams were Ruchi Bhargava, director of Intel’s Datacenter and Cloud Software, Open Source Technology Center, and Allyson Klein, director of Initiative and Leadership Marketing for Intel’s Datacenter Group.
This podcast is also available on YouTube.
To meet its internal need for self-service, massively scalable infrastructure, Intel began deploying an OpenStack-based private cloud in 2012.
“The previous version of our private cloud was based on a proprietary solution,” Bhargava recalled. “Any changes which we needed took forever to implement. We were versions behind.”
Intel chose OpenStack from what was available in the market at that time. “It probably was the right decision,” Bhargava said, applauding OpenStack for the ease of putting together quick implementations for developers. “When we put together our first pilot version of infrastructure as a service based on OpenStack, people could consume it as soon as we put it out,” she said. “For us, the problem was capacity management.”
Another problem was how to integrate Intel’s existing brownfield infrastructure comprising 13,000 to 15,000 virtual machines. Also, their desire to create a hybrid cloud, despite the lack of federated identity in OpenStack at the time, led to some workarounds for Intel IT. “We sculpted by having multiple, non-federated implementations,” Bhargava explained.
Since then, federated identity has become available in OpenStack, and Bhargava cited other improvements that the OpenStack community has worked to bring about — enterprise essentials around what she called “the four pillars” — availability, manageability, maintainability and deployability.
When asked what changes she has seen in the community, Klein said there has been a shift, “from just technology innovation, to how to drive business from OpenStack.”
“A few years ago we were not thinking in terms of a billion-dollar-plus business growing to multi-billion-dollar business in 2017,” said Klein.
“A couple of years ago,” said Bhargava, “people were comparing OpenStack as an alternative to either Amazon or Google,” the primary use case being quick infrastructure for testing cloud-native apps. “But they never ran it for enterprise production apps.”
“Over the years, they have built a pretty strong solution for the enterprise. It took us how many years to go from five- to ten-percent virtualized infrastructure, to 95 percent in a lot of enterprises? For traditional applications which were not designed to run for the cloud?” Bhargava asked. “And what I mean by ‘not designed to run for the cloud’ is that they were not designed to run for failure. That is a big change. Now, a lot of enterprises are saying, ‘Any new app, let’s design it for failure.'”
The topic of high availability and quick deployment led to a discussion of containers, and of Intel’s Clear Containers, which Bhargava described as “the container use case with the security of a virtual machine” — that is, a highly optimized VM enabling fast deployments. Intel is also focusing its innovation efforts on the major gaps that remain for enterprise deployments of OpenStack, such as high availability, rolling upgrades and quick deployment tools, they said.
Klein referred to the recently launched collaboration with Rackspace, the OpenStack Innovation Center, which is open to the entire community. “We’re also doing some deep collaborations with Mirantis,” said Klein, “bringing more resiliency and more enterprise capabilities into the OpenStack platform.”
“In working with Mirantis, in working with Rackspace, we’ve put together a joint road map,” said Bhargava. “We should have these gaps plugged within the Mitaka release or, at the latest, by the N release.”
“Once we have these plugged, then we start looking at the little, next-level-down problems.”
Feature image via Pixabay.