Stopping AI Hallucinations for Enterprise Is Key for Vectara

There is a rush of companies trying to bring large language models (LLMs) and generative AI to the enterprise, and Vectara is one of them. To find out how Vectara is pitching its product to enterprise customers, and how it is trying to solve AI hallucinations, I spoke with founder/CEO Amr Awadallah.
Awadallah has a track record of success as a tech founder, having co-founded Cloudera in 2008 and seen it become a public company in 2017. More recently, he was a VP at Google before launching Vectara at the start of 2022.
How to Solve Hallucinations
Although LLMs have proven to be very successful at deductive reasoning, there’s also a lot of concern in the tech community about their tendency to “hallucinate” facts. ChatGPT and similar services do not access the web in real time when they come up with answers to human prompts, so in some cases the reasoning goes haywire and they simply make stuff up. I asked Awadallah for his thoughts on this issue.
“Humans hallucinate as well, right,” he quipped. He went on to say that the solution to this problem in organizations is to have fact-checkers, and he thinks this is needed in the age of AI too.
“So that’s exactly how we are solving that problem at Vectara,” he said. “Our goal is to enable ChatGPT for your own business data.”
According to Awadallah, there are three ways you can add ChatGPT-like functionality to your business. The first is fine-tuning an LLM: continuing to train it with your own data.
“That does make it more capable [of] speaking about your data,” he said, “but it does not prevent hallucination. It will still make up stuff.”
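To make that first option concrete, here is a heavily simplified fine-tuning sketch using the Hugging Face Transformers library. This is an illustration of the general technique, not anything Vectara builds or recommends; the small base model and the company_docs.txt file are placeholders.

```python
# Minimal causal-LM fine-tuning sketch (illustrative only).
# Assumptions: "company_docs.txt" is a plain-text dump of your business data,
# and "gpt2" stands in for whatever base model you would actually use.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "company_docs.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the model now "speaks" your data better, but can still hallucinate
```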
He added that this method is also expensive and slow, which leads to the second way to ChatGPT-ize your business data: prompt engineering. This approach involves figuring out “how can we, in the prompts, try and provide some of the additional elements that constrain the large language model from hallucinating too much,” he said.
Prompt engineering can reduce hallucination, he continued, but “it doesn’t increase the model’s awareness of your own content.”
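Here is a minimal sketch of the idea (my illustration, not Vectara’s): extra instructions in the prompt try to keep the model from guessing, but nothing in it teaches the model anything about your own content. The llm_generate() call is a hypothetical stand-in for whichever LLM API you use.

```python
# Prompt engineering sketch (illustrative): guardrail instructions wrapped
# around the user's question, with no business data involved.
def build_guarded_prompt(question: str) -> str:
    """Wrap a question in instructions that discourage made-up answers."""
    return (
        "You are a careful assistant. Answer concisely and factually. "
        "If you are not confident in the answer, reply 'I don't know' "
        "rather than guessing.\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_guarded_prompt("What was our Q3 churn rate?")
# answer = llm_generate(prompt)  # hypothetical call to your LLM of choice
```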
The third approach, which is the one that Vectara uses, is “retrieval-augmented generation” (RAG). According to Awadallah, this is the approach advocated by AI pioneer Andrew Ng.
This approach involves the use of two neural networks. The first, a retrieval engine, is “focused on retrieving the most relevant facts that can address the prompt or the query or the question that is coming into the system,” he explained. Once you have these facts, you create a new prompt for the second neural network, instructing it to only respond using the data in those facts. This second LLM is a summarization engine.
So it’s about refining the information you get from the AI systems until you have an answer that closely corresponds with the business-specific data the system has been fed.
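Here is a rough sketch of that two-stage flow. The retrieve() and summarize() functions are hypothetical stand-ins for the two neural networks he describes; this is my illustration of the pattern, not Vectara’s implementation.

```python
# Retrieval-augmented generation sketch (illustrative).
def answer_with_rag(question: str, retrieve, summarize, top_k: int = 5) -> str:
    # Stage 1: the retrieval engine pulls the most relevant facts
    # for the question out of your own business data.
    facts = retrieve(question, top_k=top_k)

    # Stage 2: a new prompt instructs the summarization LLM to answer
    # using only those retrieved facts.
    numbered = "\n".join(f"[{i + 1}] {fact}" for i, fact in enumerate(facts))
    prompt = (
        "Answer the question using only the facts below. "
        "Cite the fact numbers you used.\n\n"
        f"Facts:\n{numbered}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return summarize(prompt)
```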
Even with this approach, there’s a chance the LLM may hallucinate something, so Vectara adds a final step: the output from the second neural network is fact-checked, said Awadallah, “to see how close it is to the original facts.”
He credits Microsoft with coming up with the term “grounding,” which he said nicely describes the above process. “You’re grounding the large language model by constraining it to facts,” he said.
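As a simplified illustration of what such a grounding check can look like (not Vectara’s actual fact-checking model), you could score how closely the generated answer stays to the retrieved facts. The embed() function is a hypothetical stand-in for a sentence-embedding model, and the 0.7 threshold is arbitrary.

```python
# Grounding-check sketch (illustrative): cosine similarity between the answer
# and the retrieved facts, using a hypothetical embed() function.
import numpy as np

def grounding_score(answer: str, facts: list[str], embed) -> float:
    """Return the best cosine similarity between the answer and any source fact."""
    a = embed(answer)                        # vector for the generated answer
    f = np.stack([embed(x) for x in facts])  # one vector per source fact
    sims = f @ a / (np.linalg.norm(f, axis=1) * np.linalg.norm(a))
    return float(sims.max())

# if grounding_score(answer, facts, embed) < 0.7:
#     answer = "I couldn't find support for that in the provided documents."
```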
Types of LLMs Used
I asked whether Vectara is using its own proprietary LLMs, third-party LLMs (like those from OpenAI or Google), or open source models.
The retrieval engine, he replied, is their own proprietary one. “We built that model — that’s our specialty, that’s our core, that’s our essence.”
The summarization model is mT5, an open source model created elsewhere. “We are not tied to any given model,” he said, regarding summarization. “We pick the best model that we can find to do what we need to do.”
In addition to the main two LLMs, Vectara also uses a “cross-attentional re-ranker” model, which it also developed. This, said Awadallah, re-sorts the data from the retrieval model, “so that the most relevant things are higher up in the list of facts, and the less relevant things are lower down.” The re-ranked facts are then put through the summarizer model.
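Vectara’s re-ranker is proprietary, but as a rough illustration of the general idea, here is how you might re-rank retrieved facts with a publicly available cross-encoder from the sentence-transformers library: the query and each candidate fact are scored together in a single forward pass, then the facts are re-sorted by that score.

```python
# Cross-encoder re-ranking sketch (illustrative, not Vectara's model).
from sentence_transformers import CrossEncoder

def rerank(query: str, facts: list[str]) -> list[str]:
    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # example model
    scores = model.predict([(query, fact) for fact in facts])
    ranked = sorted(zip(scores, facts), key=lambda pair: pair[0], reverse=True)
    return [fact for _, fact in ranked]

# The re-ranked facts are then handed to the summarization model, as described above.
```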
How Vectara Differs from Cohere and OpenAI
When I spoke to Cohere, which is also in the business of bringing ChatGPT-like functionality to the enterprise, the company talked about customers adding their own data to Cohere’s base models and using reinforcement learning on those custom models.
Vectara does it differently. Awadallah called it the “IKEA developer model,” which he described as “very prescriptive” for the customer. By contrast, he characterized the Cohere and OpenAI approaches as the “Home Depot developer model,” where customers are given the toolset and create the solution from those tools.
“This is the prescription for how you’re going to get this done,” he said, about Vectara’s way of doing generative AI. “And a very simple API that does the job for them.”
The iPhone Moment for AI
Lastly, given Awadallah’s experience with Cloudera and then Google, I asked whether he sees the current AI landscape as being similar to cloud computing in its early days — in other words, are generative AI and LLMs about to change everything (again) for enterprise IT? It’s looking like AI technology will be embedded in all enterprise IT systems in the near future, I added.
“Absolutely,” he replied. “I believe [that] in five years, every single application — whether that be on the consumer side or in the business/enterprise side — will be re-architected in a way that is a lot more human in nature, in terms of how we express what we are trying to achieve and what we’re trying to do.”
By “human in nature,” he means that we will converse with applications. He thinks this is a sea change in user experience, comparable to the touchscreen interface of the iPhone when it was first introduced.
Of course, it’s all very well talking with AI, but the real challenge is to stop it from telling lies.