
How Large Language Models Fuel the Rise of Vector Databases

Relational databases have served us well for a long time, but they have limitations when it comes to handling unstructured data such as text, images, and voice, which form the majority of the data generated today.
Jun 16th, 2023 8:33am
Image by Paul Brennan from Pixabay.

Large language models (LLMs) like GPT-4 and LLaMA are playing a key role in shaping the future of data management, driving the adoption of a new breed of database: the vector database.

LLMs are being used to draw insights from massive data sets, and they are introducing a paradigm shift in how we store, manage, and retrieve data. This article, part of the generative AI series, explores the relationship between LLMs and vector databases and how they have become key components of the enterprise generative AI stack.

The Shift from Traditional Databases

Traditional databases like relational databases (RDBMS) have served us well for a long time, but they have limitations when it comes to handling unstructured data such as text, images, and voice, which form the majority of the data generated today. The need for efficient handling of high-dimensional data is causing a significant shift towards vector databases, a type of NoSQL database designed to handle large and complex data types effectively.

What Is a Vector Database?

Vector databases are specifically designed to store and manage high-dimensional data, like vectors. A vector is an array of numbers, and in a generative AI context, it can represent complex data types such as text, images, voice, and even structured data.

Vector databases have advanced indexing and search algorithms that make them particularly efficient for similarity searches, a technique of searching for items most similar to a given item. This is one of the key requirements for augmenting prompts through contextual data in generative AI.

Simply put, a vector is an ordered list of numerical values or variables. The elements of this list are called the components of the vector.

In a simple two-dimensional space, a vector could be represented as follows:

v = (x, y)

Here, “x” and “y” are the components of the vector “v.” The first component, “x”, represents the x-coordinate, and the second component, “y”, represents the y-coordinate in a two-dimensional Cartesian coordinate system.

We can easily extend this to a three-dimensional space:

v = (a, b, c)

In this case, “a”, “b”, and “c” represent the x, y, and z coordinates, respectively.

This idea can be generalized to n-dimensional space for n components.
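
As a concrete illustration, here is a minimal sketch using NumPy (an assumption; the article names no library) showing vectors of two, three, and n dimensions as ordered lists of numbers:

```python
import numpy as np

v2 = np.array([3.0, 4.0])       # v = (x, y) in two dimensions
v3 = np.array([1.0, 2.0, 3.0])  # v = (a, b, c) in three dimensions
vn = np.random.rand(300)        # a 300-dimensional vector, like a word embedding

print(v2[0], v2[1])  # the components x and y
print(vn.shape)      # (300,)
```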

When used in the context of data analysis or machine learning, a vector could represent a data point in n-dimensional space, where each component represents a specific feature or attribute of the data.

In natural language processing, a vector can represent a word or a piece of text. Here, each dimension could correspond to a particular context or semantic meaning captured by a language model.

For instance, the word “bank” might be represented as a 300-dimensional vector in a word embedding model, with each dimension capturing some aspect of the meaning or usage of “bank.” The vector’s dimensions help us perform a quick semantic search that can easily differentiate between the phrases “river bank” and “cash in the bank.”
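
To make the “river bank” versus “cash in the bank” distinction concrete, here is a hedged sketch assuming the sentence-transformers library and its all-MiniLM-L6-v2 model (neither is mentioned in the article); any embedding model would serve the same purpose:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

model = SentenceTransformer("all-MiniLM-L6-v2")  # produces 384-dimensional vectors
river, cash, query = model.encode([
    "I sat on the river bank watching the water",
    "I deposited cash in the bank this morning",
    "money stored at a financial institution",
])

# The query vector should land closer to the financial sense of "bank".
print(cosine_similarity(query, cash))   # expected: higher
print(cosine_similarity(query, river))  # expected: lower
```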

The Intersection of LLMs and Vector Databases

LLMs, like GPT-4, are proficient in understanding and generating human-like text. They turn text into high-dimensional vectors (also known as embeddings) that capture the semantic meaning of the text. This transformation makes it possible to perform complex operations on text, like finding similar words, sentences, or documents, which are integral to many applications such as chatbots, recommendation engines, and more.

The nature of these vector representations requires an efficient storage solution that can handle indexing and querying the embeddings, which is where vector databases come in. They store these high-dimensional vectors and allow for efficient similarity searches, making them an ideal choice for LLM-based applications.

Vector databases can measure the distance between two vectors, which defines their relationship. Small distances suggest high relatedness, while larger distances suggest low relatedness.
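
As a minimal sketch of distance-based retrieval, assuming the FAISS similarity-search library (not named in the article) and random stand-in embeddings:

```python
import faiss
import numpy as np

dim = 300                                              # embedding dimensionality
vectors = np.random.rand(1000, dim).astype("float32")  # stand-in embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 (Euclidean) distance index
index.add(vectors)

query = vectors[0:1]                     # query with a known vector
distances, ids = index.search(query, 3)  # 3 nearest neighbors
print(ids[0], distances[0])              # smallest distance = most related
                                         # (the vector itself comes back at distance 0)
```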

While traditional NLP models relied on techniques such as Word2Vec and Global Vectors for Word Representation (GloVe), transformer models like GPT-3 create contextualized word embeddings. This means that the embedding for a word can change based on the context in which it is used.

Each LLM uses a different mechanism to generate its embeddings. For example, OpenAI’s text-embedding-ada-002 model generates the embeddings used alongside the text-davinci-003 and gpt-3.5-turbo model variants. Similarly, Google’s PaLM 2 uses the embedding-gecko-001 model to generate embedding vectors.
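
For illustration, here is a sketch of generating an embedding with text-embedding-ada-002 via the OpenAI Python library as it existed around the time of writing (the v0.x API; later library versions changed the call shape):

```python
# Assumes the OPENAI_API_KEY environment variable is set.
import openai

response = openai.Embedding.create(
    model="text-embedding-ada-002",
    input="Vector databases store high-dimensional embeddings.",
)
embedding = response["data"][0]["embedding"]
print(len(embedding))  # text-embedding-ada-002 returns 1536-dimensional vectors
```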

How Vector Databases Enhance the Memory of LLMs

As discussed in the last article, LLMs tend to hallucinate when they lack context. Though context injection is an integral part of prompt augmentation, models accept only a limited number of tokens as input, which means we cannot embed large amounts of text in the prompt.
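
To see the token limit in practice, here is a sketch assuming the tiktoken tokenizer (not mentioned in the article), which counts how many tokens a document would consume in a prompt:

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
document = "some long report text " * 2000  # stand-in for a large document
tokens = enc.encode(document)
print(len(tokens))  # a count like this easily exceeds a model's context window
```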

By encoding data stored in unstructured formats such as PDFs and MS Word documents into embeddings and storing them in a vector database, we can perform a semantic search that retrieves just the data the prompt needs. Since querying a vector database is much faster than encoding a large document into embeddings on the fly, it significantly speeds up the process. The sketch below illustrates the workflow involved in using vector databases for context injection and prompt augmentation.
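
Here is a minimal end-to-end sketch of that workflow, again assuming sentence-transformers and FAISS as stand-ins for the embedding model and vector database (the documents are hypothetical examples):

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available 24/7 via chat and email.",
    "Enterprise plans include a dedicated account manager.",
]

# Encode the knowledge base once and index it in the vector store.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents).astype("float32")
index = faiss.IndexFlatL2(doc_vectors.shape[1])
index.add(doc_vectors)

# 1. Encode the user's question, 2. retrieve the closest chunk,
# 3. inject it into the prompt instead of the whole document set.
question = "How long do customers have to return a product?"
q_vec = model.encode([question]).astype("float32")
_, ids = index.search(q_vec, 1)
context = documents[int(ids[0][0])]

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this augmented prompt is what gets sent to the LLM
```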

In the next part of this series, we will explore the integration of vector databases with LLMs through a hands-on tutorial. Stay tuned.
