For as long as there have been data centers, they have been designed around the CPU. Now, thanks to speedy non-volatile flash storage, that topology is changing, and it may have major repercussions for the IT industry, warned an article in the Association for Computing Machinery’s flagship publication Queue.
“The arrival of high-speed, non-volatile storage devices, typically referred to as Storage Class Memories (SCM), is likely the most significant architectural change that data center and software designers will face in the foreseeable future,” wrote Mihir Nanavati, Malte Schwarzkopf, Jake Wires, and Andrew Warfield. “Piles of existing enterprise datacenter infrastructure—hardware and software—are about to become useless (or, at least, very inefficient).”
As a result, SCM will require a rethinking of data center architecture, as well as app development, from the ground up, the authors assert.
Historically, data center systems have been designed around the idea that CPUs are expensive and fast while storage is slow and cheap. So, keeping the CPUs as busy as possible was the way to maximize data center investment. Applications had to be small enough to be kept in a server’s RAM, where they could be accessed as quickly as possible. Distributed caching provides a bit more breathing room, but there are limits to even this approach.
But what if developers could build their apps to take up as much memory as they need? How would that change their designs?
Such decisions are already starting to be made. SCMs are fast, and they are getting faster. In fact, they are outstripping CPUs in performance improvements and are closing in on inverting the I/O gap, in which storage devices struggle to keep CPUs busy.
“Today’s PCIe-based SCMs represent an astounding three-order-of-magnitude performance change relative to spinning disks (~100K I/O operations per second versus ~100),” the authors state. “For computer scientists, it is rare that the performance assumptions that we make about an underlying hardware component change by 1,000x or more.”
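To put that three-orders-of-magnitude figure in concrete terms, here is a back-of-the-envelope sketch using the approximate IOPS numbers the authors cite. (Converting IOPS to per-operation service time by simple inversion assumes one outstanding request at a time; real devices overlap many requests, so treat this only as an illustration of scale.)

```python
# Rough per-operation service time, from the ~100 vs ~100K IOPS
# figures cited in the article (assumes a single outstanding request).
disk_iops = 100        # ~100 I/O operations per second for a spinning disk
scm_iops = 100_000     # ~100K IOPS for a PCIe-based SCM

disk_latency_ms = 1000 / disk_iops   # ~10 ms per operation
scm_latency_ms = 1000 / scm_iops     # ~0.01 ms, i.e. ~10 microseconds

print(f"Disk: {disk_latency_ms:.2f} ms per operation")
print(f"SCM:  {scm_latency_ms:.4f} ms per operation")
print(f"Gap:  {scm_iops // disk_iops}x")
```

That 1,000x gap is the assumption change the authors are pointing at: a wait that used to cost millions of CPU cycles now costs thousands.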
What does this mean? One day, the developer may no longer need to think about how to structure their programs to work in the small confines of memory in order to maintain high throughput.
SCMs will cause further disruption because of their expense, the article continued. SCMs cost about 25 times as much as the traditional rotating platters in use today. To fully optimize a data center, a designer must plan on spending more on SCMs than on CPUs, and then make sure those SCMs idle as little as possible. “Non-volatile memory is in the process of replacing the CPU as the economic center of the data center,” the authors wrote.
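A quick sketch shows how that 25x premium flips a server’s cost balance. The dollar figures below are entirely hypothetical, chosen only for illustration; the 25x multiplier is the one cited in the article.

```python
# Illustrative only: hypothetical prices showing how a 25x storage
# premium can make SCM, not the CPU, the dominant line item.
disk_cost_per_tb = 40                      # hypothetical $/TB, spinning disk
scm_cost_per_tb = disk_cost_per_tb * 25    # ~25x premium cited in the article
cpu_cost = 2_000                           # hypothetical server CPU price
capacity_tb = 10                           # hypothetical storage per server

storage_cost = scm_cost_per_tb * capacity_tb
print(f"SCM spend: ${storage_cost}, CPU spend: ${cpu_cost}")
```

Under these made-up numbers the SCM bill is several times the CPU bill, which is exactly why the authors argue the optimization target shifts from keeping CPUs busy to keeping SCMs busy.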
The paper goes on to describe some of the work that needs to be done to address this shift. Systems must be rebalanced. “With SCMs, the bottleneck can easily shift from disk to CPU,” they note.
Understandably, storage vendors, especially those specializing in solid-state storage, are quite enthused about the changes afoot.
“SCM is most certainly set to change how we view and use IT today. The increase in capacity and performance that will be available to a server will drastically change how we develop software,” agreed John Griffiths, who is a software engineer for flash storage purveyor SolidFire and the technical lead for the OpenStack Cinder (BlockStorage) project, in an e-mail. “This is both a challenge and an opportunity for storage architectures.”
Like SolidFire, IBM is also banking on this shift. “IBM has foreseen this change and started the transformation years ago,” Vincent Hsu, who is an IBM fellow and vice president and chief technology officer for storage, informed us in an e-mail.
Hsu noted that the company has been a big investor in SCM research, particularly around Phase-Change Memory, which looks to put today’s solid state memory speeds to shame.
Should be interesting times.
IBM is a sponsor of The New Stack.
Feature Image: Bushwick street art, New York.