
SingleStore Offers Fast Vector Processing for Real-Time AI

With an in-memory database, AI jobs that have taken 40 hours or more to complete now could be done in less than a minute, according to the company.
Oct 17th, 2023 9:25am
Featured image: SingleStore CEO Raj Verma

The SingleStore (formerly MemSQL) distributed relational SQL database management system will soon have a dedicated data type to do vector processing as well, setting the stage for real-time artificial intelligence-based analysis, according to the company.

The company is debuting the capabilities at SingleStore Now: The Real-Time AI Conference, being held Tuesday in San Francisco.

As a result of these new capabilities, AI jobs that have taken 40 hours or more to complete now can be done in less than a minute, said Madhukar Kumar, SingleStore's chief marketing officer, in an interview with The New Stack.

Uber, Comcast, Hulu, Siemens, Morgan Stanley, Goldman Sachs and others are already using SingleStore for AI-based analysis.

MemSQL was originally a row-oriented database designed to run entirely in-memory for fast response. Disk-based column storage was added as an option soon after, opening the possibility of using the database system for data analysis as well (hence the name SingleStore, which refers to handling both transactional and analytical workloads in a single store).

With the newly added dedicated vector processing data type, SingleStore is targeting large organizations that are doing early-stage research into creating their own large language models (LLMs) to respond dynamically to customer preferences and market dynamics. The biggest frustration for project leaders thus far has been the cumbersome, computationally expensive process of managing models, Kumar said.

Here, a scalable in-memory database with a SQL interface and vector capabilities could speed the AI process, the company argues.

Vector processing is valuable in contextual search, Kumar explained, where the similarity among related objects can be identified across thousands of dimensions of data.
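The idea behind this kind of contextual search can be sketched in a few lines: documents and queries are embedded as numeric vectors, and similarity is measured by the angle between them. The sketch below uses tiny four-dimensional toy vectors and cosine similarity; the document names and values are invented for illustration, and real embeddings span hundreds or thousands of dimensions, as Kumar notes.

```python
import math

def cosine_similarity(a, b):
    # Similarity of two equal-length vectors: closer to 1.0 means
    # the vectors point in nearly the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, embeddings):
    # Return the key of the stored embedding most similar to the query.
    return max(embeddings, key=lambda k: cosine_similarity(query, embeddings[k]))

# Toy 4-dimensional "embeddings" (hypothetical values, for illustration only).
docs = {
    "returns policy": [0.9, 0.1, 0.0, 0.2],
    "shipping times": [0.1, 0.8, 0.3, 0.0],
    "refund request": [0.8, 0.2, 0.1, 0.3],
}
print(nearest([0.85, 0.15, 0.05, 0.25], docs))  # prints "returns policy"
```

A brute-force scan like this is exact but scales linearly with the number of vectors, which is why production systems turn to approximate indexes such as HNSW.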

Today, much of the community does semantic search through Hierarchical Navigable Small World (HNSW) graphs, a capability provided in the Facebook AI Similarity Search (Faiss) library. HNSW is good for accuracy, but is computationally expensive.

SingleStore’s new data type, based on an inverted index structure, can compress even very large vectors so they fit into working memory. “So now that your data is much smaller, like orders of magnitude, you can learn it in memory, which makes it 900 times faster,” Kumar said. (Previously, vectors were stored in SingleStore as JSON blobs.)
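To see why compression matters for fitting vectors in memory, consider scalar quantization, one common technique (not necessarily the one SingleStore uses): each 4-byte float component is mapped to a single byte, shrinking storage roughly fourfold at the cost of a small rounding error. A minimal sketch, with invented example values:

```python
def quantize(vec, bits=8):
    # Map each float component to an integer code in [0, 2**bits - 1],
    # so one byte per dimension replaces four bytes of float32.
    lo, hi = min(vec), max(vec)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0
    codes = [round((v - lo) / scale) for v in vec]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    # Reconstruct approximate floats from the integer codes.
    return [lo + c * scale for c in codes]

v = [0.12, -0.48, 0.97, 0.05]          # hypothetical embedding components
codes, lo, scale = quantize(v)
approx = dequantize(codes, lo, scale)  # close to v, within one quantization step
```

Similarity search then runs over the compact codes, trading a little accuracy for data that is small enough to keep resident in memory.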

The company also has plans to release SingleStore Aura, a “layer of intelligence” that can run alongside the data to further speed processing, Kumar said.

Check back on this post throughout the day for more updates from the conference:

TNS owner Insight Partners is an investor in: Pragma, The New Stack, SingleStore, Real.