How AI21’s New Tool Reduces LLM Hallucinations

No one really knows why artificial intelligence built on large language models hallucinates, but it does.
This flaw has led some organizations to ban the use of AI. In a KPMG survey of 225 executives at U.S. companies with annual revenues in excess of $1 billion, a majority of leaders expressed concern about risks related to functional accuracy. The survey found that 90% had “moderate to highly significant” concerns about the risks of using generative AI and doubts about how to mitigate them, Forbes reported in April. Sixty percent also said they were probably still two years away from their first generative AI solution.
AI21 Labs is hoping to mitigate those concerns with a new engine called Contextual Answers. Released Wednesday, the solution significantly reduces hallucinations, according to Tal Delbari, who led the AI21 team that created the tool.
“We don’t really understand all the internal mechanisms of these large language models. It’s almost like magic,” Delbari said. “But what we do know is that when we train these models — and it’s true for AI21 Labs’ large language models, OpenAI and Anthropic and all of these players — the main part of the model training is not about making sure that the answers or the outputs of the models are correct.”
It’s a language model, not a knowledge model, as ethicist, author and philosophy professor Reid Blackman said at Rev4. It’s trained to predict the next word in a sentence, Delbari explained, so it will try to generate a sentence that looks right structurally and grammatically, but the AI doesn’t understand the concept of factuality.
“If a customer asked about the return policy of a specific website, the company wants the model to generate an answer that is grounded, truthful and correct based on the specific policy and not based on the general case,” Delbari said. “This is the reason that we started this technology.”
How AI21’s Engine Handles Hallucinations
Contextual Answers deals with hallucinations in two ways: First, it operates on Jurassic II, AI21’s large language model. Delbari’s group trained a specific variant of Jurassic II on business domains such as finance, medicine, insurance, pharmaceuticals and retail.
“We train the model with triplets of documents, questions about the documents and answers that came specifically from the documents, and this is part of the technique that we use to make sure that the model learns that in this specific task, it should only retrieve information from a single document or a library of documents,” he said.
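AI21 has not published the exact format of that training data, but the idea of a document-question-answer triplet can be pictured with a short sketch. Everything below (the field names, the prompt layout and the JSONL output) is assumed for illustration rather than drawn from AI21's pipeline.

```python
# Illustrative only: AI21 has not published its training format.
# Each record pairs a source document with a question and an answer
# that must be drawn from that document alone.
import json

triplets = [
    {
        "document": "Returns are accepted within 30 days of delivery "
                    "with the original receipt. Refunds are issued to "
                    "the original payment method within 5 business days.",
        "question": "How long do customers have to return an item?",
        "answer": "Customers can return an item within 30 days of delivery.",
    },
]

def to_training_example(triplet: dict) -> dict:
    """Serialize one triplet into a prompt/completion pair (assumed layout)."""
    prompt = (
        f"Document:\n{triplet['document']}\n\n"
        f"Question: {triplet['question']}\n"
        "Answer using only the document above:"
    )
    return {"prompt": prompt, "completion": triplet["answer"]}

with open("triplets.jsonl", "w") as f:
    for t in triplets:
        f.write(json.dumps(to_training_example(t)) + "\n")
```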
Some organizations have attempted a similar process with models already generally trained on the internet, he said, but that approach is flawed. For one thing, when a model is trained with new information, it tends to forget what it had learned previously, he explained. Organizations using open source projects such as LangChain also have tried similar approaches, he said. These approaches “are not great in fighting hallucinations and still the organization needs to invest in AI practitioners, NLP (natural language processing) experts, engineers,” Delbari added. “With our solution, it’s just plug and play. You don’t need to do any engineering work to implement this architecture.”
Another key difference between Contextual Answers and internet-trained large language models like GPT is that, in many cases, the context window for inputting information is 8,000 to 32,000 tokens, which roughly equates to a comparable number of words. That can be a problem for organizations, which may have a single document of 50 pages or more. Contextual Answers supports documents of any length and any number of documents, Delbari said.
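To make that limitation concrete, here is a rough back-of-the-envelope sketch, assuming an 8,000-token window and a crude words-to-tokens ratio (real tokenizers vary), of how a 50-page document would overflow the window and need to be split into chunks.

```python
# Rough illustration of the context-window problem. The 0.75 words-per-token
# ratio and the 8,000-token window are assumptions; real tokenizers differ.
WINDOW_TOKENS = 8_000
WORDS_PER_TOKEN = 0.75  # ~1.33 tokens per word, a common rule of thumb

def estimated_tokens(text: str) -> int:
    return int(len(text.split()) / WORDS_PER_TOKEN)

def split_into_chunks(text: str, max_tokens: int = WINDOW_TOKENS) -> list[str]:
    """Greedily pack whole words into chunks that fit the token budget."""
    words = text.split()
    words_per_chunk = int(max_tokens * WORDS_PER_TOKEN)
    return [
        " ".join(words[i : i + words_per_chunk])
        for i in range(0, len(words), words_per_chunk)
    ]

fifty_page_doc = "policy " * 25_000  # ~25,000 words, roughly 50 pages
print(estimated_tokens(fifty_page_doc))        # ~33,000 tokens: too large
print(len(split_into_chunks(fifty_page_doc)))  # number of chunks needed
```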
With Contextual Answers, organizations train the model on their own document library via AI21’s website or an API, he said.
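A minimal API call might look like the sketch below. The endpoint path, request fields and response shape are assumptions made for illustration; AI21's current documentation is the authority on the real interface.

```python
# Hedged sketch of querying Contextual Answers over HTTP.
# The endpoint path, field names, and response shape are assumptions;
# consult AI21's current documentation for the real interface.
import os
import requests

API_KEY = os.environ["AI21_API_KEY"]

context = (
    "Items may be returned within 30 days of delivery with the original "
    "receipt. Refunds are issued to the original payment method."
)

response = requests.post(
    "https://api.ai21.com/studio/v1/answer",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "context": context,
        "question": "What is the return policy?",
    },
    timeout=30,
)
response.raise_for_status()
result = response.json()

# The point of grounding: if the context does not contain the answer,
# the service should say so rather than hallucinate one.
print(result.get("answer"))
```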
A Backup Plan
Second, AI21 added filters and guardrails that detect hallucinations and either remove them or ask the main model to regenerate its output until no hallucinations remain.
“From a single document or from millions of internal documents, whatever corpus of information you load to the model, we put an entire architecture of models that are making sure that the model doesn’t go off [the] rails,” he said.
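AI21 has not detailed how those filter models work internally, but the general pattern of verifying an answer against its source and regenerating on failure can be sketched like this, with generate_answer and is_grounded standing in as hypothetical placeholders for the generator and the verifier.

```python
# Illustrative guardrail loop, not AI21's actual architecture.
# `generate_answer` and `is_grounded` are hypothetical placeholders for a
# generator model and a verifier model/filter, respectively.
from typing import Callable, Optional

def answer_with_guardrails(
    question: str,
    context: str,
    generate_answer: Callable[[str, str], str],
    is_grounded: Callable[[str, str], bool],
    max_attempts: int = 3,
) -> Optional[str]:
    """Regenerate until the verifier accepts the answer, or give up."""
    for _ in range(max_attempts):
        candidate = generate_answer(question, context)
        if is_grounded(candidate, context):
            return candidate
    # Better to return nothing than an unsupported answer.
    return None
```

Returning nothing when verification keeps failing reflects the design goal Delbari describes: an answer that cannot be traced back to the loaded documents should be withheld rather than shown.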
It is still possible to trigger hallucinations, but a user has to work pretty hard to do so, he added.
“Our models are much more reliable, truthful to the real data, and it’s very rare that they’re answering something that is not true,” Delbari said. When companies see that hallucinations are almost zero, it emboldens them to bring these technologies into production, he added.
There are two ways organizations can implement Contextual Answers: It can run as a SaaS on AWS, or it can run on the organization’s own virtual private cloud. The latter ensures the data will never leave the virtual walls of the organization, he said.
Coding and Contextual Answers
But what about coding? Contextual Answers can be asked about code, but the solution is not yet optimized to help with writing it. The roadmap does include a plan to train it so that it will be able to write code, Delbari said.
For those interested, AI21 Labs partner Lab Lab AI has published a tutorial on how to create Contextual Answers apps. It was written for a hackathon while the tool was still in beta and limited the size of documents that could be submitted.