
LangChain: The Trendiest Web Framework of 2023, Thanks to AI

We look at what JavaScript developers need to know about LangChain, the fast-rising LLM application framework created by Harrison Chase.
Jun 1st, 2023 10:57am

LangChain is a programming framework for using large language models (LLMs) in applications. Like everything else in generative AI, the project has moved incredibly fast. It started out as a Python tool in October 2022, then added TypeScript support in February. By April, it supported multiple JavaScript environments, including Node.js, browsers, Cloudflare Workers, Vercel/Next.js, Deno, and Supabase Edge Functions.

So what do JavaScript developers (in particular) need to know about LangChain — and indeed about working with LLMs in general? In this post, we aim to answer that question by analyzing two recent presentations by LangChain creator Harrison Chase.

LangChain began as an open source project, but once the GitHub stars began piling up it was promptly spun into a startup. It’s been a meteoric rise for Harrison Chase, who was studying at Harvard University as recently as 2017, but is now CEO of one of the hottest startups in Silicon Valley. Earlier this month, Microsoft Chief Technology Officer Kevin Scott gave Chase a personal shout-out during his Build keynote.

Chat Apps All the Rage

Unsurprisingly, the main use case for LangChain currently is to build chat-based applications on top of LLMs (especially ChatGPT). As Tyler McGinnis of the popular Bytes newsletter wryly remarked about LangChain, “one can never have enough chat interfaces.”

In an interview with Charles Frye earlier this year, Chase said that the best use case right now is “chat over your documents.” LangChain offers other functionality to enhance the chat experience for apps, such as streaming — which in an LLM context means returning the output of the LLM token by token, instead of all at once.
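Token-by-token streaming can be pictured with a plain generator. The sketch below is a hypothetical stand-in for a real LLM stream (the function and the whitespace tokenization are illustrative, not LangChain’s API):

```typescript
// Hypothetical stand-in for an LLM response stream: instead of
// returning the full completion at once, yield it token by token.
function* streamTokens(completion: string): Generator<string> {
  for (const token of completion.split(" ")) {
    yield token + " ";
  }
}

// A chat UI would append each token as it arrives, so the user
// watches the answer build up instead of waiting for the whole response.
let rendered = "";
for (const token of streamTokens("LangChain supports streaming output")) {
  rendered += token; // in a real app: update the DOM / write to stdout
}
```

The same idea holds in the real library, except the tokens arrive asynchronously over the network rather than from an in-memory string.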

However, Chase indicated that other interfaces will quickly evolve.

“Long term, there’s probably better UX’s than chat,” he said. “But I think at the moment that’s the immediate thing that you can stand up super-easily, without a lot of extra work. In six months, do I expect chat to be the best UX? Probably not. But I think right now, what’s the thing that you can build at the moment to deliver value, it’s probably that [i.e. chat].”

Given that developing applications with LLMs is such a new thing, startups like LangChain have been scrambling to come up with tools to help navigate some of the issues with LLMs. With prompt engineering, for example, Chase indicated that it still mostly comes down to the developer’s intuition on which prompts work better. But LangChain has introduced features like “tracing” this year to help with that.


One of LangChain’s more recent features is “custom agents,” which Chase talked about at the Full Stack LLM Bootcamp, held in April in San Francisco. He defined agents as a method of “using the language model as a reasoning engine,” to determine how to interact with the outside world based on user input.

Slide: “Why use agents” — Harrison Chase at the LLM Bootcamp.

He gave an example of interacting with a SQL database, explaining that typically you have a natural language query and a language model will convert that to a SQL query. You can execute that query and pass the result back to the language model, ask it to synthesize it with respect to the original question, and you end up with what Chase called “this natural language wrapper around a SQL database.”
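The flow Chase describes — natural language in, SQL out, result synthesized back into natural language — can be sketched with mock functions standing in for the LLM and the database (both entirely hypothetical here):

```typescript
// Mock LLM call: in reality this would hit a model such as GPT-4.
function llm(prompt: string): string {
  if (prompt.startsWith("Write a SQL query")) {
    return "SELECT COUNT(*) FROM users;";
  }
  // Synthesis step: phrase the raw result as a plain-English answer.
  return "There are 42 users.";
}

// Mock database execution.
function runSql(query: string): string {
  return query.includes("COUNT(*)") ? "42" : "";
}

// The "natural language wrapper around a SQL database":
// NL question -> LLM writes SQL -> execute -> LLM synthesizes the answer.
function askDatabase(question: string): string {
  const sql = llm(`Write a SQL query for: ${question}`);
  const result = runSql(sql);
  return llm(`Question: ${question}\nResult: ${result}\nAnswer in plain English.`);
}

const answer = askDatabase("How many users do we have?");
```

The two LLM calls play different roles: the first is a translator (English to SQL), the second a summarizer (query result back to English).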

Where agents come in is handling what Chase termed “the edge cases,” which could be (for instance) an LLM hallucinating part of its output at any time during the above example.

“You use the LLM that’s the agent to choose a tool to use, and also the input to that tool,” he explained. “You then […] take that action, you get back an observation, and then you feed that back into the language model. And you kind of continue doing this until a stopping condition is met.”

Slide: “Typical implementation” — implementing agents.
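The loop Chase describes — choose a tool, run it, feed the observation back, repeat until a stopping condition is met — reduces to something like the sketch below. The mock `decide` function and the toy calculator tool are illustrative assumptions, not LangChain’s actual interfaces:

```typescript
type Tool = (input: string) => string;

const tools: Record<string, Tool> = {
  // Toy calculator tool; a real agent might expose search, SQL, etc.
  calculator: (input) => {
    const [a, , b] = input.split(" "); // expects e.g. "2 + 3"
    return String(Number(a) + Number(b));
  },
};

// Mock "reasoning engine": decides which tool to call, or finishes.
// A real agent would ask the LLM to make this decision at every step.
function decide(scratchpad: string[]): { tool?: string; input?: string; final?: string } {
  if (scratchpad.length === 0) return { tool: "calculator", input: "2 + 3" };
  return { final: `The answer is ${scratchpad[scratchpad.length - 1]}` };
}

function runAgent(): string {
  const scratchpad: string[] = [];
  for (let step = 0; step < 5; step++) {      // cap steps as a safety stop
    const action = decide(scratchpad);
    if (action.final) return action.final;    // stopping condition met
    const observation = tools[action.tool!](action.input!);
    scratchpad.push(observation);             // feed the observation back in
  }
  return "No answer within step limit.";
}
```

The step cap matters in practice: without it, a hallucinating model can loop indefinitely, which is one of the “edge cases” agents exist to handle.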

One popular approach to agents is called “ReAct.” This has nothing to do with the similarly named JavaScript framework; this “ReAct” stands for Reason + Act. Chase said the approach yields “higher quality, more reliable results” than other forms of prompt engineering.


ReAct (not React)
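In a ReAct-style loop, the model’s output interleaves reasoning with tool calls, conventionally labeled `Thought:`, `Action:`, and `Action Input:`. A minimal parser for one such step might look like this (the trace format follows the common ReAct convention; the function itself is a hypothetical sketch):

```typescript
// Pull the tool call out of a ReAct-style block of model output, e.g.:
//   Thought: I need to look up the population.
//   Action: search
//   Action Input: population of France
function parseReActStep(output: string): { thought: string; action: string; input: string } {
  const get = (label: string) => {
    const match = output.match(new RegExp(`${label}: (.*)`));
    return match ? match[1].trim() : "";
  };
  return { thought: get("Thought"), action: get("Action"), input: get("Action Input") };
}

const step = parseReActStep(
  "Thought: I need to look up the population.\nAction: search\nAction Input: population of France"
);
```

Parsing is where agent frameworks earn their keep: the model emits free text, and the framework must reliably turn it into a structured tool invocation.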

Chase admitted that “there are a lot of challenges” with agents, and that “most agents are not amazingly production ready at the moment.”

The Memory Problem

Some of the issues he listed seem like basic computer concepts, but they are more challenging in the context of LLMs. For instance, LLMs usually don’t have long-term memory. As noted in a Pinecone tutorial, “by default, LLMs are stateless — meaning each incoming query is processed independently of other interactions.”

This is one area where LangChain aims to help developers, by adding components like memory into the process of dealing with LLMs. Indeed, in JavaScript and TypeScript, LangChain has two methods related to memory: loadMemoryVariables and saveContext. According to the documentation, the first method “is used to retrieve data from memory (optionally using the current input values), and the second method is used to store data in memory.”
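A toy in-memory class with the same two-method shape gives a feel for how those documented methods fit together. Note this is a simplified sketch: in LangChain.js itself both methods are async and return Promises, and the real classes take richer options.

```typescript
// Toy conversation memory mirroring the two-method shape LangChain.js
// documents: loadMemoryVariables retrieves stored context, saveContext
// stores a conversational turn. (The real methods are async.)
class BufferMemorySketch {
  private history: string[] = [];

  loadMemoryVariables(): { history: string } {
    return { history: this.history.join("\n") };
  }

  saveContext(input: { input: string }, output: { output: string }): void {
    this.history.push(`Human: ${input.input}`);
    this.history.push(`AI: ${output.output}`);
  }
}

const memory = new BufferMemorySketch();
memory.saveContext({ input: "Hi, I'm Ada." }, { output: "Hello Ada!" });
// Prior turns can now be injected into the next prompt, giving the
// otherwise stateless LLM the appearance of remembering the conversation:
const vars = memory.loadMemoryVariables();
```

The retrieved `history` string is typically spliced into the prompt template for the next LLM call, which is how statelessness is worked around.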

Another form of agent that Chase talked about is Auto-GPT, a software program that allows you to configure and deploy autonomous AI agents.

“One of the things that Auto-GPT introduced is this idea of long-term memory between the agent and tools interactions — and using a retriever vector store for that,” he said, referring to vector databases.
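The retriever-vector-store idea can be sketched in a few lines: embed past interactions as vectors, then fetch the most similar one by cosine similarity. Real systems embed text with a model and store vectors in a database like Pinecone; here the “embeddings” are hand-written number arrays so the retrieval math is visible:

```typescript
// Toy long-term memory backed by a vector store.
type Doc = { text: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const store: Doc[] = [
  { text: "User prefers metric units", vector: [1, 0, 0] },
  { text: "User's dog is named Rex", vector: [0, 1, 0] },
];

// Retrieve the stored memory most similar to the query vector.
function retrieve(queryVector: number[]): string {
  let best = store[0];
  for (const doc of store) {
    if (cosine(doc.vector, queryVector) > cosine(best.vector, queryVector)) best = doc;
  }
  return best.text;
}

const memoryHit = retrieve([0, 0.9, 0.1]);
```

Because retrieval is by similarity rather than exact match, the agent can surface a relevant past interaction even when the new query is phrased differently.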

The New LAMP Stack?

Clearly, there’s a lot of figuring out yet to do when it comes to building applications with LLMs. In its Build keynotes, Microsoft classified LangChain as part of the “orchestration” layer in its “Copilot technology stack” for developers. In Microsoft’s system, orchestration includes prompt engineering and what it calls “metaprompts.”

Microsoft has its own tool, Semantic Kernel, that does a similar thing to LangChain. It also announced a new tool called Prompt Flow, which Microsoft CTO Kevin Scott said was “another orchestration mechanism that actually unifies LangChain and Semantic Kernel.”

It’s also worth noting the word “chain” in LangChain’s name, which indicates that it can interoperate with other tools — not just various LLMs, but other dev frameworks too. In May, Cloudflare announced LangChain support for its Workers framework.

There’s even been a new acronym coined involving LangChain: OPL, which stands for OpenAI, Pinecone, and LangChain. The inspiration for that was likely the LAMP stack (Linux, Apache, MySQL, PHP/Perl/Python), a mainstay of 1990s web development that helped enable the emergence of Web 2.0. Who knows if OPL will stick as a term — and of course, its components aren’t all open source — but regardless, it’s a good indication that LangChain is already an important part of many developers’ personal stacks.

TNS owner Insight Partners is an investor in: Deno.