Semantic Search with Amazon OpenSearch Serverless and Titan

This two-part tutorial series will walk you through the steps of implementing Retrieval Augmented Generation (RAG) based on Amazon Bedrock, Amazon Titan and Amazon OpenSearch Serverless.
Dec 15th, 2023 6:00am
Feature image by Mirko Fabian from Pixabay.

This two-part tutorial series introduces the workflow and APIs needed to build a Q&A system; in this example, the application is based on the Academy Awards dataset from Kaggle. You will need an active Amazon Web Services (AWS) account to follow these steps.

We’ll be using Amazon Bedrock and the Amazon Titan foundation models (FMs); Amazon recently announced the general availability of the Titan embeddings model. With these models, we can implement text generation, summarization, sentiment analysis, question answering and more.

Specifically, we will implement Retrieval Augmented Generation (RAG) based on Amazon Bedrock, Amazon Titan and Amazon OpenSearch Serverless. RAG is a technique used for “grounding” a large language model (LLM) with information that is use-case specific and relevant.

You can find part two of the series here.

In the first part, we will set up the environment, convert the custom dataset into text embeddings, and ingest the embeddings into Amazon OpenSearch Serverless Vector DB, enabling us to implement RAG with Amazon Titan FMs.

Step 1: Configure the Environment

Let’s start by configuring the Python virtual environment with the required dependencies and modules.
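A minimal sketch of this step, assuming Python 3 on Linux or macOS:

```bash
# Create and activate an isolated virtual environment for the tutorial
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
```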


Then, we will install the latest versions of Boto3 and the AWS CLI, which ship with support for Amazon Bedrock.
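Assuming the standard PyPI releases (Bedrock support is included in Boto3 as of the service’s general availability), this is a single command:

```bash
# Upgrade to Bedrock-aware releases of the SDK and the CLI
pip install --upgrade boto3 awscli
```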


Finally, let’s install other dependencies related to OpenSearch, Jupyter, and Pandas.
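Using pip, for instance:

```bash
# OpenSearch client library, plus the notebook and data-wrangling tools
pip install opensearch-py jupyter pandas
```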

Step 2: Create the Amazon OpenSearch Serverless Collection and Index

In this step, we will provision the vector database for storing and searching the embeddings.

  1. Open the Amazon OpenSearch Service console at https://console.aws.amazon.com/aos/home.
  2. Choose Collections in the left navigation pane and choose Create collection.
  3. Name the collection oscars-collection.
  4. For collection type, choose Vector search.
  5. Under Security, select Easy create to streamline your security configuration.
  6. Choose Next.
  7. Review your collection settings and choose Submit.

It will take several minutes for the collection to be ready. You can track the status on the AWS Management Console.

Once the collection is active, create an index programmatically through the opensearch-py client, authenticated with Boto3 credentials. For this, we need the endpoint associated with the collection, which is available in the console.

Launch a new Jupyter Notebook to run the code shown in this tutorial.

Initialize the OpenSearch client and create the index. Replace HOST with the correct value shown in the console.
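A sketch of the client initialization and index creation, assuming the opensearch-py client, a 1536-dimensional vector (the output size of the amazon.titan-embed-text-v1 model) and a hypothetical index name of oscars-index:

```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

HOST = "xxxxxxxxxxxx.us-east-1.aoss.amazonaws.com"  # replace with your collection endpoint
REGION = "us-east-1"  # assumed region; use your collection's region

credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, REGION, "aoss")  # "aoss" signs requests for OpenSearch Serverless

client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "nominee_text": {"type": "text"},
            "nominee_vector": {
                "type": "knn_vector",
                "dimension": 1536,  # matches the Titan embeddings output size
                "method": {
                    "name": "hnsw",
                    "engine": "nmslib",
                    "space_type": "cosinesimil",  # cosine similarity
                },
            },
        }
    },
}

client.indices.create(index="oscars-index", body=index_body)
```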


We created an index that uses the k-NN search algorithm with cosine similarity. It has two properties, nominee_text and nominee_vector, which store the text and the corresponding embeddings.

Step 3: Pre-process the Dataset

Download the Oscar Award dataset from Kaggle and move the CSV file to a subdirectory named data. The dataset has all the categories, nominations and winners of the Academy Awards from 1927 to 2023. I renamed the CSV file to oscars.csv. Start by importing the Pandas library and loading the dataset:
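Assuming the file was saved as data/oscars.csv, loading it is straightforward:

```python
import pandas as pd

# Load the Academy Awards dataset downloaded from Kaggle
df = pd.read_csv("./data/oscars.csv")
df.head()
```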


The dataset is well-structured, with column headers and rows that represent the details of each category, including the name of the actor/technician, the film, and whether the nomination was won or lost.

Since we are most interested in the 2023 awards, let's filter them and create a new Pandas data frame. At the same time, we will convert the category to lowercase and drop the rows where the film value is blank.
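A sketch of this filtering, assuming the Kaggle dataset’s year_ceremony, category and film column names:

```python
# Keep only the 2023 ceremony, drop rows with no film, and normalize the category
df = df.loc[df["year_ceremony"] == 2023]
df = df.dropna(subset=["film"])
df["category"] = df["category"].str.lower()
```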


With the filtered and cleansed dataset, let’s add a new column to the data frame that has an entire sentence representing a nomination. This complete sentence will be used to generate the text embeddings later.
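One way to build the sentence, assuming the dataset’s name and boolean winner columns, is a small helper applied row by row:

```python
def nomination_to_text(row):
    # "winner" is assumed to be a boolean flag in the Kaggle dataset
    outcome = "and won" if row["winner"] else "but did not win"
    return (
        f"{row['name']} got nominated under the category, "
        f"{row['category']}, for the film {row['film']} {outcome}"
    )

# Store the full sentence in a new "text" column
df["text"] = df.apply(nomination_to_text, axis=1)
```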


Notice how we combine the values to generate a complete sentence. For example, the “text” column in the first two rows of the data frame has the following values:

Austin Butler got nominated under the category, actor in a leading role, for the film Elvis but did not win

Colin Farrell got nominated under the category, actor in a leading role, for the film The Banshees of Inisherin but did not win

Step 4: Generate Embeddings with Titan

Now that we have the text constructed from the dataset, let's convert it into text embeddings. This is a crucial step, as the vectors generated by the embedding model will help us perform a semantic search to retrieve the sentences from the dataset with similar meanings.

Let’s define a function to convert the input text into an embedding.
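A minimal version of the function, using the Bedrock runtime client and the amazon.titan-embed-text-v1 model (region assumed):

```python
import json

import boto3

# Bedrock exposes a dedicated runtime client for model invocation
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def text_to_embedding(text):
    body = json.dumps({"inputText": text})
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=body,
        accept="application/json",
        contentType="application/json",
    )
    # Titan returns the vector under the "embedding" key
    return json.loads(response["body"].read())["embedding"]
```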


We can invoke this per row of the data frame and create a new column to store the embeddings.
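With Pandas, this is a one-liner:

```python
# Generate one embedding per nomination sentence (one Bedrock call per row)
df["embedding"] = df["text"].apply(text_to_embedding)
```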


We are ready to ingest the text and the corresponding embeddings into Amazon OpenSearch Serverless.

Step 5: Insert the Embeddings and Perform a Similarity Search

Let’s create a function to insert each row of the data frame into the vector database.
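A sketch of the ingestion helper, reusing the client and the hypothetical oscars-index name from step 2:

```python
def index_document(row):
    document = {
        "nominee_text": row["text"],
        "nominee_vector": row["embedding"],
    }
    # OpenSearch Serverless assigns document IDs automatically
    client.index(index="oscars-index", body=document)
```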


We can easily invoke this function for every row by calling the apply method on the data frame.
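```python
# axis=1 passes each row (rather than each column) to the function
df.apply(index_document, axis=1)
```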


Once the data is ingested, we are ready to perform the search. Let’s create a function that accepts a vector and returns text that’s similar to the input.
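One way to express this as a k-NN query, with k as a tunable parameter:

```python
def find_similar(vector, k=5):
    query = {
        "size": k,
        "_source": {"excludes": ["nominee_vector"]},  # omit raw vectors from the results
        "query": {
            "knn": {
                "nominee_vector": {"vector": vector, "k": k}
            }
        },
    }
    return client.search(index="oscars-index", body=query)
```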


The line "_source": {"excludes": ["nominee_vector"]} excludes the vector from the results, which saves bandwidth.

Let’s now run a query against the vector database. Before that, we need to convert our query into an embedding with the Titan model.
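Reusing the text_to_embedding helper from step 4, with a hypothetical question about music and sound:

```python
question = "Which nominations are related to music or sound?"
question_vector = text_to_embedding(question)
```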


Now, let’s pass the embedding to the search function and see the results.
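For example:

```python
response = find_similar(question_vector)
for hit in response["hits"]["hits"]:
    # Each hit carries a similarity score and the matching sentence
    print(hit["_score"], hit["_source"]["nominee_text"])
```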


As you can see, we got all the nominations related to sound and music from the vector database, along with the similarity score. By extracting the contents of the nominee_text element, we can construct the context that will be used to augment the prompt sent to the LLM.

In the next part of this tutorial, we will explore how to leverage the output to increase the accuracy of the Titan model’s response. Stay tuned.
