
Pivot! AI Devs Move to Switch LLMs, Reduce OpenAI Dependency

AI engineers and AI companies are looking to reduce — or even remove entirely — their dependency on OpenAI's API after the recent drama.
Nov 22nd, 2023 7:14am
Image via TBS on YouTube.

Regardless of the outcome of the drama around OpenAI over the past several days, one thing is clear: startups that had built on top of OpenAI’s API are now rethinking their strategy. As Shawn “swyx” Wang noted soon after the drama spilled into the news, “99% of AI Engineer work begins and probably ends with OpenAI models.” But now, warns Wang, the “days of OpenAI hegemony are over.”

The expectation is that OpenAI competitors, such as Anthropic and Google, will benefit; as will open source LLMs such as Meta’s Llama 2. But the upheaval will also filter through to third-party tools. For instance, Swyx thinks “there will be relatively more value in model-agnostic tooling like LangChain and LlamaIndex, as well as model routers and gateways.”

Pros and Cons of Using OpenAI

Ultimately, the biggest lesson is a familiar one: don’t let your work project or startup rely on another company’s technology. It’s something that Twitter developers learned the hard way as far back as 2012 (and then relearned a decade later).

Up until last week, the prevailing opinion among AI engineers was that OpenAI’s LLMs were superior to all the other LLMs. There has been talk this year that open source models are catching up. Meta’s Llama 2, announced in July, currently tops Stanford’s HELM (Holistic Evaluation of Language Models) benchmarking leaderboard. However, OpenAI’s latest models (GPT-4 onwards) have not been evaluated by HELM — and the feeling is that GPT is still the best.

The OpenAI developer experience is also hard to beat, mostly because you don’t have to train or fine-tune the LLM yourself. You simply use OpenAI’s API and then do prompt engineering on it, with the help of tools like LangChain.
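
In practice, that workflow amounts to an API call wrapped around a prompt template. A minimal sketch, assuming the v1.x openai Python client (the model name and prompt template here are illustrative, not taken from the article):

```python
# Minimal sketch of "just call the hosted API and iterate on the prompt".
# Assumes the openai v1.x client; OPENAI_API_KEY is read from the environment.
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = (
    "You are a support assistant.\n"
    "Summarize the following customer message in one sentence:\n\n{message}"
)

def summarize(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # no training or fine-tuning on your side; the model is hosted
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(message=message)}],
    )
    return response.choices[0].message.content

print(summarize("My order arrived late and the box was damaged."))
```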

Overall, using OpenAI’s API has been viewed as the most cost-efficient and simple approach to AI engineering. However, the antics of the past several days have starkly illustrated the risks of relying on one company’s API. So perhaps many AI startups will now decide that having direct access to the LLMs — especially if they are open source — is the better option.

Evaluating the Alternatives

Already, non-OpenAI vendors are stepping forward to help startups test alternative approaches. Robert Nishihara from Anyscale recently wrote on X (formerly Twitter),

“If you want to compare OpenAI side by side with open models (Llama 2, Mistral, Zephyr, …), check out Anyscale Endpoints. We provide an OpenAI-compatible API (for inference and fine-tuning).”
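
Because these services expose an OpenAI-compatible API, the mechanical part of a switch can be as small as changing the client’s base URL and model name. A rough sketch — the endpoint URL and model id below are assumptions to check against the provider’s documentation:

```python
# Sketch: pointing the same openai client at an OpenAI-compatible provider.
# The base URL and model id are assumptions; confirm them in the provider's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",  # provider's OpenAI-compatible endpoint
    api_key=os.environ["ANYSCALE_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/Llama-2-70b-chat-hf",  # an open model served behind the same API shape
    messages=[{"role": "user", "content": "Explain vector databases in two sentences."}],
)
print(response.choices[0].message.content)
```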

Even if startups decide to continue using OpenAI’s market-leading (for now) GPT models, they may decide to serve them from OpenAI’s much, much more stable partner: Microsoft. Soups Ranjan, founder and CEO of an AI startup called Sardine, commented on X that “many companies probably have moved their model serving directly to Microsoft’s Azure AI APIs.” Indeed, Ranjan confirmed that his company has done exactly that.
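
For teams taking that route, the openai v1.x library ships an Azure-specific client; the endpoint, API version and deployment name below are placeholders for values from your own Azure resource:

```python
# Sketch: calling the same GPT models through Azure's OpenAI service instead of
# api.openai.com. Endpoint, API version and deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # hypothetical resource URL
    api_version="2023-05-15",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # Azure routes by deployment name, not raw model id
    messages=[{"role": "user", "content": "Hello from Azure."}],
)
print(response.choices[0].message.content)
```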

Ranjan also suggested that AI startups should diversify their LLMs, by “orchestrating across multiple models — like Google’s PaLM, Anthropic’s Claude2 or the open source model Llama.”
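
What “orchestrating across multiple models” looks like in code varies, but its simplest form is a router that tries providers in order and falls back on failure. Here is a sketch with placeholder provider functions — the real versions would wrap the respective SDK calls:

```python
# Sketch of a simple multi-model router: try providers in order, fall back on
# failure. The provider functions are placeholders for real SDK calls.
from typing import Callable, List

def call_openai(prompt: str) -> str:
    raise NotImplementedError("wrap the OpenAI client here")

def call_claude(prompt: str) -> str:
    raise NotImplementedError("wrap the Anthropic client here")

def call_llama(prompt: str) -> str:
    raise NotImplementedError("wrap a hosted or self-served Llama endpoint here")

def complete_with_fallback(prompt: str, providers: List[Callable[[str], str]]) -> str:
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # rate limits, outages, incompatible responses...
            last_error = exc
    raise RuntimeError("All providers failed") from last_error

# Usage: complete_with_fallback("Summarize this ticket...", [call_openai, call_claude, call_llama])
```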

However, Ranjan warned that open source isn’t the easy option and that you’ll need a solid backend to make it work. “Don’t underestimate the super power of OpenAI or Azure or Google Cloud — they have world-class serving infra that can host these large language models that require huge amounts of RAM or custom GPU chips like Nvidia A100s or H100s, which are in acute shortage,” he wrote on X.

It might be worth it in the end, though. “Control your models, control your destiny,” concluded Ranjan.

Executing the AI Pivot

Over on LinkedIn, AI entrepreneur Aishwarya (AG) Goel wrote a guide to transitioning your startup off OpenAI and onto open source tooling from Hugging Face. She outlined how to find models on the platform, test them using Hugging Face’s Inference API, do a cost analysis, and consider “serverless deployment options” (which her own company offers).
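
The “test them using Hugging Face’s Inference API” step is straightforward with the huggingface_hub client; the model id below is one plausible candidate, not a recommendation from the guide:

```python
# Sketch: smoke-testing a candidate open model via Hugging Face's Inference API.
# The model id is illustrative; any hosted text-generation model can be swapped in.
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_...")  # your Hugging Face access token

output = client.text_generation(
    "Summarize: the customer reports a damaged package and a late delivery.",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    max_new_tokens=64,
)
print(output)
```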

But beware, changing your LLM provider has hidden dangers. LangChain, a key partner of OpenAI and probably the most-used AI engineering tool other than OpenAI itself, wrote in a tweet that “different LLMs often require different prompting strategies.”

“Switching the API endpoint is often the easy part,” LangChain added. “The hard part is getting one LLM to behave similarly as another. It’s hard enough to get a single LLM to perform well!”

The company said there are “no amazing options” to do this, but it recommended developers use its own LangSmith Prompt Hub to test out “examples of prompts that work well for the model you are working with.”
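
In practice, that means pulling a prompt known to work for the target model rather than hand-porting your existing one. A minimal sketch, assuming the langchain and langchainhub packages are installed (“rlm/rag-prompt” is just a public example handle):

```python
# Sketch: pulling a model-appropriate prompt from the LangSmith / LangChain hub
# instead of hand-porting one. "rlm/rag-prompt" is a public example handle;
# substitute a prompt tuned for the model you are switching to.
from langchain import hub  # requires the langchainhub package

prompt = hub.pull("rlm/rag-prompt")
print(prompt)
```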

Time to Branch Out

At the time of writing (early Wednesday morning PT), the latest news from the OpenAI drama is that Sam Altman will return as CEO and not join Microsoft after all. If that does indeed happen, many AI engineers and AI startups will breathe a sigh of relief. But they shouldn’t forget the underlying lesson here: don’t depend on one company to run your product!

One engineer who has begun to test out alternatives to OpenAI is Salvatore Sanfilippo (a.k.a. @antirez), the creator of Redis. After admitting in a tweet that he loves ChatGPT, he wrote that “if you are using the OpenAI API for your product and you didn’t try if the task can be handled by fine-tuned (by LoRa or other means) Mistral 7B…Well in this case you are really missing something.”
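
Running that kind of test locally typically means loading the base model plus a LoRA adapter. A rough sketch with transformers and peft — the adapter repo name is hypothetical, and you’ll need a GPU with enough memory:

```python
# Sketch: loading a LoRA-fine-tuned Mistral 7B with transformers + peft.
# The adapter repo name is a hypothetical placeholder for your own fine-tune.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "your-org/mistral-7b-lora-adapter")  # hypothetical adapter

inputs = tokenizer("Classify the sentiment: 'the box arrived crushed'", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```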

If the creator of Redis is looking at other options — in his case a new open source LLM called Mistral 7B — then perhaps you should too.
