Large Language Models: Open Source LLMs in 2023
While it’s becoming clear that generative AI tools have plenty of lucrative potential, many smaller businesses and independent researchers in the wider AI community remain cautious about adopting closed source LLMs. Beyond operational cost and heavy computational requirements, their concerns include data ownership, privacy and the models’ unnerving tendency to sometimes “hallucinate” false information.
So it’s little wonder that open source alternatives have also gained traction over the past year. As some surveys have pointed out, while open source LLMs are still generally not as powerful as their closed source cousins, open source options can be fine-tuned to outperform proprietary models on specific tasks.
With the AI field growing more diverse as open source alternatives crop up, here are some of the contenders making the biggest impact in 2023.
1. LLaMA and LLaMA 2
In February, Meta released the first version of LLaMA, its family of large language models, whose 13-billion-parameter member was reported to outperform the 175-billion-parameter GPT-3 on most benchmarks. This first version was released as an open source package that developers could request access to under a noncommercial license; however, the model and its weights were soon leaked online, making it effectively open for anyone to use.
In July, Meta followed up with the release of LLaMA 2, which the company says is trained on 40 percent more data than the original version, along with fine-tuned variants like LLaMA 2-Chat, which is optimized for human-like conversations, and later Code Llama, which is tailored for generating code.
While there is some dispute as to whether LLaMA 2 is truly open source, Meta has since relaxed usage restrictions on these models to allow commercial use as well, spurring a wave of LLaMA-based derivatives and tools, including Alpaca, Alpaca-LoRA, Koala, QLoRA, llama.cpp, Vicuna, Giraffe and StableBeluga.
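One reason tools like llama.cpp resonate with smaller teams is the cost math. As a rough back-of-the-envelope sketch (weights only; activations, KV cache and runtime overhead are ignored, and the byte counts are simplified assumptions), here is roughly what it takes just to hold a 7-billion-parameter model in memory:

```python
# Back-of-the-envelope estimate of memory needed to hold model weights.
# Rough figures only: activations, KV cache and runtime overhead are ignored.

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (decimal GB)."""
    return num_params * bytes_per_param / 1e9

params_7b = 7e9  # a 7-billion-parameter model

fp16 = weight_memory_gb(params_7b, 2.0)  # 16-bit floats: 2 bytes per weight
q4 = weight_memory_gb(params_7b, 0.5)    # 4-bit quantization: 0.5 bytes per weight

print(f"7B model, fp16 weights: ~{fp16:.0f} GB")   # ~14 GB
print(f"7B model, 4-bit weights: ~{q4:.1f} GB")    # ~3.5 GB
```

Quantized 4-bit weights bring a 7B model within reach of an ordinary laptop, which goes a long way toward explaining the popularity of local inference runtimes.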
2. Pythia
Released back in April by the nonprofit lab EleutherAI, Pythia is a suite of LLMs of varying sizes that are trained on public data. Pythia is intended as an interpretability tool for researchers looking to better understand the training process behind LLMs and the results they produce.
3. MPT
MosaicML launched its MPT series of large language models in May with an initial 7-billion-parameter model, followed by a 30-billion-parameter version in June, which the company claims outperforms both LLaMA and Falcon, particularly in use cases that require longer text prompts.
MPT incorporates some of the latest techniques in the evolving field of LLMs, such as ALiBi for context length extrapolation and FlashAttention for efficiency, along with architectural choices aimed at improving training stability by reducing loss spikes.
4. Falcon
This family of state-of-the-art language models was launched at the beginning of June by the Abu Dhabi-based Technology Innovation Institute, under the Apache 2.0 license. The first model, with 40 billion parameters, was an instant hit with developers and researchers in the field, in large part because it was released with its weights.
In September, an even larger Falcon model with 180 billion parameters was announced, making it one of the largest open source LLMs available. The team behind Falcon maintains that while the 180-billion-parameter version lags slightly behind closed-source models like OpenAI’s GPT-4, it nevertheless surpasses Meta’s LLaMA 2 and stands shoulder to shoulder with Google’s PaLM 2 Large.
5. BLOOM
Another model making big waves is BLOOM (short for BigScience Large Open-science Open-access Multilingual Language Model). Though it was actually released in July 2022, it makes our list because it was developed through the collaboration of over 1,000 AI researchers from 60 countries and 250 institutions, under the coordination of Hugging Face and France’s GENCI (Grand Equipement National de Calcul Intensif) and IDRIS (Institute for Development and Resources in Intensive Scientific Computing).
Intended to facilitate public research on large language models, the largest of the BLOOM models boasts 176 billion parameters and is trained on multilingual data spanning 46 natural languages and 13 programming languages, making it the largest open source massively multilingual model to date.
6. Mistral
Founded by researchers formerly associated with Meta and Google, Paris-based Mistral first released a 7-billion-parameter LLM in September. According to the startup, Mistral 7B outperforms other open source LLMs like LLaMA 2 on many metrics. Just this month, the team released a newer model called Mixtral 8x7B via a simple torrent link, generating enough buzz to outshine the over-rehearsed publicity around releases from bigger tech companies.
With the field of open source LLMs continuing to expand, many developers are looking to reduce their dependency on OpenAI’s API by pivoting to open source alternatives that are more cost-effective, transparent and tunable.
Proprietary models may still have a slight edge for now, but open source models are catching up quickly, with some open LLMs already outperforming their larger-parameter counterparts, showing that the quality of the training data can matter more than size. The past year has shown some very exciting developments with open LLMs, making it clear that they will continue to play an important role as the landscape for large language models evolves.