
Pliops to Customize Fast Solid State Storage to AI/ML Workloads

Jan 31st, 2019 12:55pm

Machine learning (ML) and other advanced software development will require computing power that hardware makers are scrambling to deliver, especially ahead of forecast skyrocketing demand for ML applications.

To that end, a group of investors led by SoftBank, including chip giant Intel, has invested $30 million in Pliops, an Israel-based storage processor maker.

Developers are expected to be able to take advantage of Pliops' storage technology by the end of the year, when the device is slated to launch, the company said.

Among other things, the technology will support ML software storage stacks and will be offered in a number of ways, including as a service for cloud native developers and applications, said Steve Fingerhut, president and chief business officer at Pliops.

Organizations have recently run into bottlenecks while developing and running ML applications on standard CPUs and servers, and a growing gap has opened in accommodating the resulting explosion in data, especially in the cloud.

“There are many software elements involved in the data management in the cloud, and those two lines are diverging,” Fingerhut said. “It’s starting to really drive a lot of pain and sprawl in the data center. We’re not just offloading and running it on ARM — it’s a dedicated, purpose-built product to accelerate those higher layers of data management.”

A need has also emerged for hardware better suited to pairing the highly intensive computing that machine learning and other advanced applications require with storage, particularly flash memory. To that end, Pliops' technology is designed to remove abstraction layers that slow down the performance of flash memory (which stores the bits in an SSD), said Jim Handy, an analyst at Objective Analysis.

“Before we had SSDs nobody cared too much about how slow these layers were, because they contributed only a tiny fraction to the overall delays of accessing data on HDDs. Now that SSDs are about 1,000 times faster than an HDD, these layers contribute significantly to the overall delay,” Handy said. “By removing them, computing can perform faster at the same cost. So, in a nutshell, Pliops makes computers faster without much added cost.”
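Handy's point can be made concrete with a rough latency budget. The numbers below are illustrative order-of-magnitude assumptions (not figures from Pliops or Objective Analysis): a fixed software-stack overhead is a rounding error on top of a millisecond-scale HDD seek, but a large share of a microsecond-scale flash read.

```python
# Illustrative latency budget showing why software-layer overhead
# mattered little for HDDs but dominates with SSDs.
# All numbers are assumed, order-of-magnitude values.

HDD_MEDIA_US = 10_000   # ~10 ms average seek + rotation for a hard disk
SSD_MEDIA_US = 100      # ~100 us for a NAND flash read
STACK_US = 50           # assumed fixed software-stack cost per I/O

def stack_share(media_us, stack_us=STACK_US):
    """Fraction of total access time spent in the software layers."""
    return stack_us / (media_us + stack_us)

print(f"HDD: stack is {stack_share(HDD_MEDIA_US):.1%} of each access")
print(f"SSD: stack is {stack_share(SSD_MEDIA_US):.1%} of each access")
```

Under these assumptions the stack accounts for well under 1 percent of each HDD access but roughly a third of each SSD access, which is why stripping those layers out only became worthwhile once flash arrived.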

From the outset, Fingerhut said, Pliops’ R&D team looked at major portions of code and the functions being performed. “So, we said, ‘okay, they have APIs that handle the I/O and the data management. We’re going to provide those same APIs but we’re going to perform the functions in a completely different way,’” Fingerhut said. “[The functions will become] highly efficient, optimized for hardware where you have multiple layers and a software stack going all the way down to the drive — while eliminating all redundancies and implementing it in the most efficient way, from a clean-sheet-of-paper perspective.”
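The drop-in approach Fingerhut describes — keep the API the application already codes against, replace what happens underneath — can be sketched as follows. This is a hypothetical illustration; the interface, class names, and in-memory backends are invented for the example and are not a published Pliops API.

```python
# Hypothetical sketch of a drop-in storage API: the caller's code is
# unchanged while the implementation behind the API is swapped out.
from abc import ABC, abstractmethod
from typing import Optional

class StorageAPI(ABC):
    """The unchanged interface the application already uses."""
    @abstractmethod
    def put(self, key: bytes, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: bytes) -> Optional[bytes]: ...

class SoftwareStackStore(StorageAPI):
    """Stands in for the existing multi-layer software path."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class OffloadStore(StorageAPI):
    """Stands in for a hardware-accelerated path behind the same API."""
    def __init__(self):
        self._data = {}  # a real device would hand this to the accelerator
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def run(store: StorageAPI) -> Optional[bytes]:
    # Application code: identical regardless of which backend it gets.
    store.put(b"model/weights", b"\x00\x01")
    return store.get(b"model/weights")

assert run(SoftwareStackStore()) == run(OffloadStore())
```

The design point is that adoption cost stays low: because the API surface is preserved, the accelerated path can replace the software stack without rewriting the applications above it.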

The timing of the development is good, Torsten Volk, an analyst for Enterprise Management Associates (EMA), said.

“In a nutshell, NAND storage scales in a linear manner, while storage requirements for AI/ML are just at the beginning of an exponential growth period, fueled by AI/ML technologies becoming more and more available to mainstream users,” Volk said.

In many respects, Pliops and others are addressing a very real problem in software stack development, and in Pliops' case in storage, as AI and ML application development takes off.

“My personal estimate is that currently less than 1 percent of AI/ML projects get off the ground due to AI/ML technologies being too difficult to deploy and successfully operate,” Volk said. “As AI/ML begins to become available to ultimately everyone within an enterprise, instead of training a few AI/ML models every month, enterprises might train thousands of models, each one of them with significant storage requirements in terms of capacity and performance.”

Feature image via Pixabay.
