Intel Partners with CoreOS, Mirantis to Build a ‘Universal Resource Scheduler’
Will containerization and OpenStack finally penetrate the realm of massive-scale, real-time workloads? In an announcement Thursday morning at a media and investors’ event in San Francisco, Intel unveiled the next stage of its partnership with containerization platform provider CoreOS and OpenStack platform maker Mirantis, with the objective of finally breaking that barrier.
“We are creating a new effort to make sure that we’re bringing together the best of several worlds,” said Jason Waxman, the corporate VP and general manager for Intel’s Cloud Platforms Group, “to create something that is a Universal Resource Scheduler that can support both VMs as well as containers.”
The Drive to Determinism
One of the biggest barriers to the adoption of highly scalable workloads by the world’s major data centers has been determinism: the predictability and manageability of the flow of those workloads in real time. Financial institutions, high-end health care providers, and telecommunications firms have held off on adopting containerization, and even ordinary virtualization, because the added layer of abstraction introduces a degree of unpredictability that, at massive scale, makes scaling real-time, multi-tenant workloads to thousands of servers impractical.
CoreOS CEO Alex Polvi told attendees of Intel’s Cloud Day event that he perceives customers as needing a broad, horizontal stack that encompasses all of workload deployment, including Kubernetes as the management layer, and OpenStack as the infrastructure-as-a-service provider. “This is for these companies that want one infrastructure to rule them all,” said Polvi, “to give them the benefits of both containers and virtual machines.”
The three members of this partnership did not disclose technical details of the scheduler at this early date. But based on what we’ve seen before with Intel’s Clear Container initiative, coupled with new Intel-specific announcements made today, it’s clear that Intel wants an inroad for its hardware-based virtualization technology, Intel VT. Originally designed to provide microcoded hardware resources to hypervisors, skipping over the operating system entirely, VT has an opportunity to provide resource scheduling benefits to containerization platforms as well, especially CoreOS’ own Tectonic.
Clear Containers were designed to make use of VT, but with the container industry standardizing around OCI, there’s a danger from Intel’s perspective that Clear Containers may always be perceived as an “alternative” container system. CoreOS knows what that position feels like quite well.
So this new partnership may lead to a way for Mirantis OpenStack to host Kubernetes and orchestrate all forms of containerized workloads, including CoreOS’ rkt and Intel Clear Containers, in a manner that’s, at least, copacetic with VM workloads on the same servers. A newly announced line of Intel Xeon server processors would include on-die features enabling orchestrators to regulate access to those processors’ cache memory. This would lead to better scalability of more deterministic workloads — the real-time class that customers like NASDAQ require.
The End of Jitter
“In the last ten years, it is quite apparent that the sensitivity of how quickly you can do work on these transactions has dramatically changed,” said NASDAQ principal technical architect Sandeep Rao. “When I started [19 years ago], people would expect two-second response times; now it is like tens of microseconds.
“One of the reasons why financial services actually had not moved into virtualization on their core platforms is because the hypervisor provides a layer of non-determinism,” said Rao.
When a workload is virtualized, its performance profile is subject to what engineers call jitter: the time expended for a process fails to stay stable over multiple iterations. So it becomes impossible for an operation like NASDAQ to reliably orchestrate a process that checks variations in market values over any arbitrary unit of time — for example, 100 or 1,000 cycles — because that cycle time becomes a coefficient of the jitter variable. Who knows what it is you may be multiplying?
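As a rough illustration of the concept (not from the article, and not anything NASDAQ actually runs), the sketch below times the same toy task over many iterations and reports the spread of its latencies as a coefficient of variation. On a noisy or virtualized host, that spread — the jitter — grows, which is exactly what makes fixed-cycle real-time work hard to schedule.

```python
import statistics
import time

def measure_jitter(task, iterations=1000):
    """Run `task` repeatedly and return (mean latency, jitter).

    Jitter is expressed here as the coefficient of variation:
    standard deviation of per-iteration latency divided by the mean.
    A perfectly deterministic workload would score near zero.
    """
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        task()
        latencies.append(time.perf_counter() - start)
    mean = statistics.mean(latencies)
    return mean, statistics.pstdev(latencies) / mean

# A toy stand-in for a market-value check (hypothetical workload).
mean, cv = measure_jitter(lambda: sum(range(1000)))
```

On bare metal the coefficient of variation for a trivial task like this tends to stay small; under a hypervisor competing for shared caches, it can swing widely between runs — the non-determinism Rao describes.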
As a result, Rao explained, NASDAQ would run its operations in separate data centers — one for less critical workloads such as everyday accounting, and another for highly critical workloads like market value assessment. This dualism was costing the exchange operator serious money, which Rao believes can now be saved thanks to its test deployment of Intel’s Xeon E5 v4 processors (version 4 of the company’s E5 product line).
These E5 v4 processors will implement a curious technology with which the company began experimenting for its Xeon E5 v3 line. Called Resource Director Technology, it allows orchestrators to issue calls to the processor to partition its caches, so that virtual machines — and now containerization platforms — don’t over-utilize them. Over-utilization triggers the cache misses that lead to non-deterministic workloads, the very phenomenon NASDAQ says it cannot allow in its data centers.
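On Linux, software can drive this cache partitioning through the `resctrl` filesystem: each class of service gets a `schemata` file holding a contiguous bitmask of cache ways, in a form like `L3:0=ff00`. As a hedged sketch, the helper below builds such a mask; the way counts and the choice to reserve the “high” ways for a latency-critical workload are illustrative assumptions, not a documented Intel or NASDAQ configuration.

```python
def cache_way_mask(total_ways: int, reserved_ways: int, high: bool = True) -> str:
    """Build a hex capacity bitmask (CBM) reserving `reserved_ways`
    of a cache's `total_ways` for one class of service, in the form
    written to /sys/fs/resctrl/<group>/schemata (e.g. "L3:0=ff00").

    Cache-allocation hardware requires the mask to be a contiguous
    run of set bits, which this construction guarantees.
    """
    if not 0 < reserved_ways <= total_ways:
        raise ValueError("reserved_ways must be between 1 and total_ways")
    mask = (1 << reserved_ways) - 1          # contiguous run of 1-bits
    if high:
        mask <<= total_ways - reserved_ways  # shift the run to the top ways
    return f"{mask:x}"

# Reserve the high 8 of 16 L3 ways for a critical workload:
# an orchestrator would write "L3:0=" + cache_way_mask(16, 8)
# into the group's schemata file.
```

The point of the partition is that a noisy neighbor confined to the low ways can no longer evict the critical workload’s cache lines, which is how cache-allocation features shrink the jitter described above.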
Rao told attendees that Xeon E5 v4 is stable enough at this point for NASDAQ to officially utilize the product line for a single, uniform data center capable of scheduling all types of workloads on one platform.
The Virtuous Cycle
Here is where the CoreOS / Mirantis / Intel partnership enters the picture. To accomplish what NASDAQ requires now on a massive scale with predictability, a uniform scheduler standard is required: one that bypasses any single vendor’s proprietary or preferred mechanism for deploying workloads at scale.
“One of the efforts that’s most concerning to me right now, and is most important to the efficiency of cloud computing, is a Universal Resource Scheduler,” said Intel’s Waxman.
“There’s a lot of interesting containers, and there are newer companies into virtual machines, and they need a scheduler that’s efficient to be able to place workloads and automate their clouds. And without an efficient, truly open scheduler, the industry’s not going to progress as fast as it needs to.”
In order for mass adoption of containerization on OpenStack to happen, said Mirantis CEO Alex Freedland, “it needs to be simple and easy for customers to consume. Ultimately, the value for customers for this [URS] introduction will be one platform that can run containers and bare metal and VMs on one platform, and scale without having to add resources to manage them.
“You have to make sure that it’s very simplified lifecycle management,” added Freedland, citing statistics from AT&T that some 50 percent of its data center spend is composed of lifecycle management.
On Thursday, Waxman cited an intriguing observation from Intel co-founder Andy Grove (who passed away earlier this month at age 79). Grove pointed out the virtuous cycle by which organizations’ contributions to the standards-creation process help drive people to work with the organizations that build those standards. But Waxman noted this process may need a jump-start every now and then.
“Even though open source is one of those great enablers of standards and a great enabler of this virtuous cycle,” said Waxman, “we still see that there are cases where people are holding back from open source, that they’re not putting out the full features and capabilities that really enable [standardization] to move forward. And at Intel, we’re not going to sit by and watch idly as this sort of thing happens. We have to be involved; we have to create new efforts.”