Freshen up LLMs with ‘Retrieval Augmented Generation’

Large language models like GPT are trained offline on large corpora of data. This leaves the models unaware of any data generated after they were trained. Here's how to update them.
Jul 14th, 2023 3:00am by
Feature image by Penny from Pixabay.

Foundation models, including large language models (LLMs) like GPT, are typically trained offline on large corpora of data. This leaves the models unaware of any data generated after they were trained.

Furthermore, because foundation models are trained on general-purpose, publicly available corpora, they are less effective for domain-specific tasks. Retrieval Augmented Generation (RAG) is a technique that retrieves data from outside the foundation model and augments the prompt by injecting the relevant retrieved data into the context.

RAG is more cost-effective and efficient than pre-training or fine-tuning foundation models. It is one of the techniques used to “ground” an LLM in use-case-specific, relevant information, ensuring the quality and accuracy of its responses. This is critical to reducing hallucinations in LLMs.

In this article, we will take a closer look at implementing RAG with LLMs to bring in domain-specific knowledge.

Why Implement RAG?

Let’s consider a simple scenario where you ask ChatGPT a question about the 95th Academy Awards. Since the announcements were made in March 2023 and the training cutoff date for ChatGPT was September 2021, you get a typical apologetic response.

However, if you give ChatGPT some context before asking the same question, it will be able to respond with a meaningful answer.

Let’s copy and paste the blurb from the Good Morning America website related to the 95th Academy Awards, which “injects” additional context into the prompt.

Our prompt now includes that blurb as context, followed by the original question about the awards.

As we can see, that simple blurb made a significant difference in the LLM’s response to the question. Depending on the supported context length, we can feed additional information to the LLM to make it knowledgeable about a specific topic.

Even though we copied and pasted the context manually, we essentially implemented a rudimentary RAG mechanism to get what we wanted from ChatGPT.
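To make that manual workflow concrete, here is a minimal sketch of the same rudimentary RAG in Python. The blurb and question are placeholders rather than the exact prompt used above, and the 0.x version of the openai library is assumed.

import openai

openai.api_key = "YOUR_API_KEY"

# Context retrieved by hand, e.g., a blurb copied from a news article (placeholder text).
context = "<blurb about the 95th Academy Awards copied from the web>"
question = "Who won Best Picture at the 95th Academy Awards?"

# Inject the retrieved context into the prompt ahead of the question.
augmented_prompt = (
    "Answer the question using the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": augmented_prompt}],
)
print(response["choices"][0]["message"]["content"])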

In an enterprise environment, LLMs may require information to be retrieved from various unstructured and structured data sources. As a result, copying and pasting context to supplement a prompt is not a viable option. This is where RAG provides a framework and a blueprint to build domain-specific, production-grade LLM applications.

The RAG Framework

External data used to augment the prompts in RAG can come from diverse sources, including document repositories, databases, and APIs.

Step 1: The Prompt
The user provides the initial prompt to the chatbot in this interaction. The prompt may contain a brief description of what the user expects in the output.

Step 2: Contextual Search
This is the most crucial step, where the prompt is augmented with the help of an external program responsible for searching and retrieving contextual information from external data sources. This may include querying a relational database, searching a set of indexed documents based on a keyword, or invoking an API to retrieve data from remote or external data sources.
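As an illustration of this step, the sketch below implements a toy keyword-based retriever over an in-memory list of documents; in a real system this would be a database query, a search-index lookup or an API call. The document list and scoring are hypothetical.

# Toy document store; in practice this would be a database, search index or external API.
documents = [
    "The 95th Academy Awards ceremony took place in March 2023.",
    "Everything Everywhere All at Once won seven awards, including Best Picture.",
    "The ceremony was hosted by Jimmy Kimmel.",
]

def retrieve_context(query, top_k=2):
    # Score each document by how many query words it shares, then keep the best matches.
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

context_passages = retrieve_context("Who won Best Picture at the 95th Academy Awards?")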

Step 3: Prompt Augmentation
Once the context is retrieved, it gets injected into the original prompt to augment it. The user's query now carries additional information that contains factual data.
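Continuing the sketch from the previous step, the retrieved passages can be folded into a simple prompt template. The template wording here is an assumption, not a prescribed format.

def augment_prompt(question, context_passages):
    # Combine the retrieved passages and the user's question into one prompt.
    context = "\n".join(context_passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

augmented_prompt = augment_prompt(
    "Who won Best Picture at the 95th Academy Awards?",
    context_passages,  # from the retrieval sketch above
)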

Step 4: Inference
The LLM receives a rich prompt with the additional context and the original query sent by the user. This significantly increases the model’s accuracy because it can access factual data.
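A minimal sketch of this step, again assuming the 0.x openai library and reusing the augmented prompt built above:

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": augmented_prompt},
    ],
)
answer = response["choices"][0]["message"]["content"]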

Step 5: Response
The LLM sends the response back to the chatbot with factually correct information.

The Role of Word Embeddings and Vector Databases in RAG

While the above framework explains the high-level approach to implementing RAG, it doesn’t discuss the implementation details.

One of the crucial steps in search and retrieval is performing a semantic search on the input query to find words and sentences with similar meanings. To do this, we must leverage a word embedding model, which converts text into a set of vectors. When the source data and the prompt are vectorized with the same word embedding model, we can perform a semantic search to match sentences and phrases with similar meanings.
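As a concrete sketch, the snippet below embeds a query and a few passages with OpenAI's text-embedding-ada-002 model and ranks the passages by cosine similarity. Any embedding model would work here; the 0.x openai library is assumed.

import numpy as np
import openai

def embed(text):
    # Convert a piece of text into an embedding vector.
    result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(result["data"][0]["embedding"])

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vector = embed("Who won Best Picture at the 95th Academy Awards?")
passages = [
    "Everything Everywhere All at Once won Best Picture.",
    "Kafka Streams is a library for stream processing.",
]
# Passages whose meanings are closest to the query rank highest.
ranked = sorted(passages, key=lambda p: cosine_similarity(query_vector, embed(p)), reverse=True)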

Since converting a large corpus of data into embedding vectors for every query is expensive, it is a good idea to generate the vectors once and store them in a database. Vector databases are a new database category that stores vectors and performs similarity searches. As new documents and databases are added to the pipeline, they can be converted into vectors and stored in the vector database.
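The sketch below shows the idea with FAISS, an open source similarity-search library, as one stand-in for a vector database; the article does not prescribe a specific product. It reuses the embed() helper and documents list from the earlier sketches.

import faiss
import numpy as np

dimension = 1536  # length of text-embedding-ada-002 vectors

# Build the index once: embed every document and store the normalized vectors.
index = faiss.IndexFlatIP(dimension)  # inner product over normalized vectors = cosine similarity
corpus_vectors = np.array([embed(doc) for doc in documents], dtype="float32")
faiss.normalize_L2(corpus_vectors)
index.add(corpus_vectors)

# At query time, embed only the query and look up the nearest stored vectors.
query_vector = np.array([embed("Who won Best Picture at the 95th Academy Awards?")], dtype="float32")
faiss.normalize_L2(query_vector)
scores, ids = index.search(query_vector, 2)
top_matches = [documents[i] for i in ids[0]]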

In the next part of this series, we will implement RAG to augment prompts sent to OpenAI. Stay tuned.
