
LlamaIndex and the New World of LLM Orchestration Frameworks

We take a look at LlamaIndex, which allows you to combine your own custom data with an LLM — without using fine-tuning or overly long prompts.
Jul 6th, 2023 6:47am

What if you could combine your own private data store with a large language model (LLM) like OpenAI’s GPT, and query it programmatically? That’s the promise of LlamaIndex, a new framework that helps developers avoid fine-tuning and overly long prompts. It’s part of an emerging category of LLM application tools that some are calling “orchestration frameworks” — or even more simply, “programming frameworks” for LLMs.

In a recent blog post, the venture capital firm Andreessen Horowitz (a16z) makes the case that both LlamaIndex and LangChain are orchestration frameworks. a16z positions both projects firmly in the center of its “emerging LLM app stack”:



According to a16z, orchestration frameworks like LangChain and LlamaIndex “abstract away many of the details of prompt chaining,” which means querying and managing data between an application and the LLM(s). Included in this orchestration process is interfacing with external APIs, retrieving contextual data from vector databases, and maintaining memory across multiple LLM calls.

LangChain is the leader among orchestration frameworks, says a16z. So what does LlamaIndex offer? Let’s take a look.

How LlamaIndex Works

The key to LlamaIndex is that it allows you to combine your own custom data with an LLM, without using fine-tuning (training the LLM itself) or adding the custom data to your prompt (known as “in-context learning”).

LlamaIndex refers to itself as a data framework. It’s a “simple, flexible data framework for connecting custom data sources to large language models.” It appears to cover just about any type of data too, according to this diagram on its homepage:


As with LangChain, LlamaIndex is still a new and not entirely finished framework. Just this week (on Independence Day, in fact), the project released its 0.7.0 version. According to LlamaIndex creator Jerry Liu, 0.7.0 “continues the theme of improving modularity/customizability at the lower level to enable bottoms-up development of LLM applications over your data.”

Like LangChain, LlamaIndex is almost shockingly new on the scene. It was launched by Liu as an open source project called GPT Index in November 2022. Earlier this year, the project was renamed LlamaIndex. Then, again similar to LangChain, Liu spun the project into a venture-funded company (also named LlamaIndex). This happened just last month, when Liu noted that the company aimed to “offer a toolkit to help set up the data architecture for LLM apps.”

The key to getting started in LlamaIndex is LlamaHub, which is where data is ingested. Ravi Theja provided this useful diagram in a recent presentation:


LlamaHub is a library of data loaders and readers. Interestingly, it’s not limited to use with LlamaIndex — it can also be used with LangChain. There are loaders “to parse Google Docs, SQL Databases, PDF files, PowerPoints, Notion, Slack, Obsidian, and many more.”

After the data ingestion stage, there is a typical workflow that users of LlamaIndex follow:

  1. Parse the Documents into Nodes
  2. Construct Index (from Nodes or Documents)
  3. [Optional, Advanced] Build indices on top of other indices
  4. Query the index

The querying part is done by an LLM. Or as the documentation puts it, “a ‘query’ is simply an input to an LLM.” This is where it can get complex, but here’s how the documentation outlines the “query” process:

Querying an index or a graph involves three main components:

  • Retrievers: A retriever class retrieves a set of Nodes from an index given a query.
  • Response Synthesizer: This class takes in a set of Nodes and synthesizes an answer given a query.
  • Query Engine: This class takes in a query and returns a Response object. It can make use of Retrievers and Response Synthesizer modules under the hood.


The simplest explanation I’ve found for the query process is by Owen Fraser-Green, who said that LlamaIndex basically allows you to “feed relevant information into the prompt of an LLM,” only instead of feeding the LLM all of your custom data, “you try to pick out the bits useful to each query.”

There are multiple ways to do this. You can use good old ChatGPT, as this tutorial demonstrates. But you can also use LangChain. LlamaIndex allows you to use any data loader as a LangChain Tool, as well as providing “Tool abstractions so that you can use a LlamaIndex query engine along with a Langchain agent.”

One of the tutorials offered by LlamaIndex shows how to build a “context-augmented chatbot” using both LangChain and LlamaIndex. “We use Langchain for the underlying Agent/Chatbot abstractions, and we use LlamaIndex for the data retrieval/lookup/querying,” the documentation explains.


It’s clear that LlamaIndex is more of a data management framework than the all-purpose framework that LangChain provides. But the beauty of LlamaIndex is that it can be used with LangChain. They’re compatible with each other, not competitive.

Whether or not a16z’s term “orchestration framework” sticks, one thing is for sure: both LlamaIndex and LangChain are tools that developers should have in their back pocket when working with LLMs.
