
How NVMe Will Propel Innovations in Artificial Intelligence

Jun 3rd, 2019 10:19am by Narayan Venkat

Narayan Venkat
Narayan Venkat is the Vice President of Marketing for the Data Center Systems (DCS) business unit at Western Digital Corporation, responsible for the planning, development and execution of a broad range of marketing activities, including product marketing, positioning, messaging, demand generation and customer training, covering platforms, IntelliFlash and ActiveScale systems. He joined Western Digital as part of the Tegile acquisition. Prior to joining Western Digital, Venkat held senior management positions at other technology companies. He received his MBA from the University of Chicago’s Booth School of Business and a Master of Science in Electrical Engineering from Utah State University.

As today’s innovators experiment with artificial intelligence (AI) in a myriad of applications such as intelligent virtual assistants, cybersecurity analysis, facial recognition, and market prediction, there’s no question AI is a data-intensive proposition. Most AI applications rely on technologies ranging from natural language processing (NLP) to more advanced machine learning (ML) and deep learning (DL). Using these technologies, computer systems can be trained to sift through massive amounts of data, identify and recognize patterns, and apply that learning for better outcomes. Enterprises are increasingly turning to NVMe (non-volatile memory express) as an essential infrastructure technology to accelerate their AI initiatives.

NVMe is a storage protocol designed from the ground up for flash and other non-volatile memory, attaching storage to the CPU directly over PCIe. Where legacy protocols such as SATA were built for spinning disks and queue commands through a single, shallow queue, NVMe capitalizes on parallel, low-latency data paths to the underlying media, supporting tens of thousands of queues with tens of thousands of commands each, similar to high-performance processor architectures. The result is that online transactions can be processed faster while real-time analysis runs on the same data to benefit the business. The faster the storage, the quicker the data processing.
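
To make that parallelism concrete, here is a minimal Python sketch that issues the same batch of small random reads with increasing thread counts. The file path is a placeholder, and because the operating system’s page cache can mask true device latency, a dedicated tool such as fio with direct I/O gives more rigorous numbers; still, on an NVMe-backed volume, throughput typically keeps scaling with concurrency because requests can land in many hardware queues instead of one.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

DATA_PATH = "/mnt/nvme/sample.bin"   # placeholder: any large file on NVMe storage
BLOCK_SIZE = 4096                    # one 4 KiB read per request
NUM_REQUESTS = 4096                  # total random reads per run

def read_block(fd, offset):
    # pread is positioned, so many threads can safely share one descriptor
    return os.pread(fd, BLOCK_SIZE, offset)

def run(num_threads):
    fd = os.open(DATA_PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    # deterministic pseudo-random offsets scattered across the file
    offsets = [(i * 7919 * BLOCK_SIZE) % (size - BLOCK_SIZE)
               for i in range(NUM_REQUESTS)]
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        list(pool.map(lambda off: read_block(fd, off), offsets))
    elapsed = time.perf_counter() - start
    os.close(fd)
    print(f"{num_threads:3d} threads: {NUM_REQUESTS / elapsed:10.0f} reads/sec")

if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        run(n)
```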

NVMe, offering high performance and low latency, can help accelerate the AI/ML data flow wherever responsiveness matters most. For example, NVMe can accelerate data ingestion and model training. As huge volumes of data are ingested and ultimately used to train AI models, a storage solution must support high sequential throughput during ingest, then handle random access once training begins and latency becomes important (see the sketch below). Similarly, in the inference stage, where responsiveness to data and decision-making are critical to the user experience, NVMe delivers low latency. Consider an application like a virtual digital assistant: it must respond quickly when asked a question.
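
As a rough illustration of those two access patterns, the sketch below streams a dataset sequentially, the way an ingest stage would, and then reads the same file in shuffled order, the way a training loop would. The file path, fixed record size, and batch size are assumptions for illustration only.

```python
import os
import random

DATA_PATH = "/mnt/nvme/train.records"  # placeholder training file
RECORD_SIZE = 16_384                   # assumed fixed-size serialized samples

def ingest(path):
    """Ingest stage: stream the file front to back (sequential I/O)."""
    with open(path, "rb") as f:
        while chunk := f.read(1 << 20):  # 1 MiB sequential reads
            yield chunk

def training_batches(path, batch_size=32, seed=0):
    """Training stage: shuffled epochs turn the same file into random reads."""
    num_records = os.path.getsize(path) // RECORD_SIZE
    indices = list(range(num_records))
    random.Random(seed).shuffle(indices)  # shuffle once per epoch
    fd = os.open(path, os.O_RDONLY)
    try:
        for i in range(0, num_records, batch_size):
            yield [os.pread(fd, RECORD_SIZE, idx * RECORD_SIZE)
                   for idx in indices[i:i + batch_size]]
    finally:
        os.close(fd)
```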

As organizations increase their adoption of AI/ML models and put more applications into production, the scale of data storage and processing grows exponentially. Data access patterns vary by the type of data, the stage in the pipeline, the number of users, and the number of models in production. The speed of data access and processing becomes more critical as data moves through the pipeline, and the choice of data storage matters at every stage of the AI/ML production pipeline. As scale increases with more models, latency and speed become paramount; the sketch that follows shows one way to quantify that.
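
Average throughput alone hides this effect; tail latency is what users actually feel. The hypothetical helper below samples per-read latency and reports the median and 99th percentile. The path and read size are placeholders, and a rigorous measurement would bypass the page cache with direct I/O.

```python
import os
import random
import time

def read_latency_percentiles(path, n=1000, block=4096):
    """Sample per-read latency in microseconds and report p50/p99.
    At scale, the p99 tail, not the average, sets the user experience."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    samples = []
    for _ in range(n):
        offset = random.randrange(0, size - block)
        t0 = time.perf_counter()
        os.pread(fd, block, offset)
        samples.append((time.perf_counter() - t0) * 1_000_000)
    os.close(fd)
    samples.sort()
    return samples[n // 2], samples[int(n * 0.99)]

if __name__ == "__main__":
    p50, p99 = read_latency_percentiles("/mnt/nvme/train.records")
    print(f"p50: {p50:.1f} us   p99: {p99:.1f} us")
```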

IT organizations putting AI applications into production will be better equipped to deliver a high-quality customer experience by deploying NVMe storage in the AI/ML data processing pipeline. For the best results, consider an all-flash array that combines the performance of NVMe, software-defined flash management, and rich data services to deliver high-performance shared storage that accelerates enterprise applications such as early AI implementations.

Feature image via Pixabay.
