New AI Dev Platform Allows You to Customize Open Source LLMs
Gradient is the latest large language model (LLM) application development platform to launch this year. This one caught my attention for a couple of reasons: firstly, it’s founded by people who used to run AI teams at Netflix, Google and Splunk; and secondly, Gradient promises to help developers build custom AI systems based on open source LLMs.
The latter is a sign that the LLM ecosystem is starting to mature (a growing open source presence generally signals an expanding software market) and becoming less reliant on proprietary systems from the likes of OpenAI and Google.
Chang explained that Gradient’s developer platform enables users to train open source LLMs on a company’s own data sets, including being able to run inference and fine-tune the models. So far, only three open source LLMs are supported (Llama-2, Bloom-560M, and Nous-Hermes-Llama-2), but the company says “we are actively working to host more open-source models for you to fine-tune.”
Gradient calls its approach “mixture of experts,” or “multi-model systems,” according to Chang. Note that he said multi-model, not multi-modal — the latter involves different forms of media (such as images and videos), whereas multi-model simply means using more than one LLM. Chang added that “all of this is done via an API interface,” so it’s “very developer focused, developer friendly, and really intended to simplify the infrastructure that’s required.”
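To make the multi-model idea concrete, here is a toy sketch of a router that dispatches each query to one of several specialist models. Everything here is invented for illustration: the model functions are stubs, and a real “mixture of experts” system would use a learned router rather than keyword matching.

```python
# Toy multi-model setup: a router picks which of several specialist
# LLMs should answer a query. Model functions are stand-ins; in a real
# system each would call a separate hosted model.

def medical_model(prompt: str) -> str:
    return f"[medical model] answer to: {prompt}"

def finance_model(prompt: str) -> str:
    return f"[finance model] answer to: {prompt}"

def general_model(prompt: str) -> str:
    return f"[general model] answer to: {prompt}"

def route(prompt: str) -> str:
    """Send the prompt to the most appropriate model (naive keyword heuristic)."""
    lowered = prompt.lower()
    if any(w in lowered for w in ("diagnosis", "patient", "symptom")):
        return medical_model(prompt)
    if any(w in lowered for w in ("invest", "portfolio", "regulation")):
        return finance_model(prompt)
    return general_model(prompt)

print(route("Summarize this patient history"))  # handled by the medical model
```

The point of the sketch is the distinction Chang draws: nothing here involves images or video (multi-modal); it simply coordinates more than one text model (multi-model).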
The Development Tools
There are two ways that developers can use Gradient. The first is via the company’s web interface — basically a software-as-a-service platform. Alternatively, devs can use Gradient’s APIs through its SDK (software development kit).
For the web UI, Chang said that you’d typically “upload your training dataset, select the model that you’re trying to train with, and then […] kick off the fine tuning job.”
Chang says that the web UI is designed for “users who are not as developer savvy — maybe an analyst or data scientist who’s more used to massaging the data, versus developing on platforms.”
For professional developers, he continued, “the SDK and CLI allow you to programmatically develop on these models as well.”
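Whichever route a developer takes, the workflow starts with a training dataset. The sketch below shows one common shape for such a dataset, a JSONL file of input/response pairs, with a basic validation pass before upload. The field names are illustrative assumptions; the exact schema a given platform expects will differ.

```python
import json

# Sketch of preparing a fine-tuning dataset as JSONL: one JSON record
# per line. Field names ("inputs", "response") are illustrative; check
# the target platform's docs for its actual schema.

samples = [
    {"inputs": "What is our refund window?", "response": "30 days from delivery."},
    {"inputs": "Do you ship internationally?", "response": "Yes, to 40+ countries."},
]

def to_jsonl(records):
    """Serialize records to JSONL, rejecting incomplete rows up front."""
    lines = []
    for r in records:
        if not r.get("inputs") or not r.get("response"):
            raise ValueError(f"incomplete training sample: {r!r}")
        lines.append(json.dumps(r))
    return "\n".join(lines)

jsonl = to_jsonl(samples)
print(jsonl.splitlines()[0])
```

Validating rows before kicking off a fine-tuning job is cheap insurance: a malformed sample discovered mid-job wastes far more time than a check like this.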
The final noteworthy thing for developers is that Gradient hosts embeddings, although fine-tuning them is an enterprise-only feature for now.
“We power the embeddings,” said Chang. “So not only do we have the LLM hosted on our service that you can fine-tune, we have embeddings as well. Fine-tuning for embeddings is currently an enterprise-only feature, but we’re working to port that over to our self-service product as well.”
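Why hosted embeddings matter is easiest to see in miniature: once text is mapped to vectors, semantic search reduces to comparing those vectors. The three-dimensional vectors below are made up for the example; a real embedding model returns hundreds or thousands of dimensions.

```python
import math

# Minimal semantic-search sketch over pretend embeddings. A hosted
# embedding service would produce these vectors; here they are hand-made.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

documents = {
    "billing policy": [0.9, 0.1, 0.0],
    "surgery checklist": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of "how do I get invoiced?"

best = max(documents, key=lambda name: cosine_similarity(query, documents[name]))
print(best)  # the billing document is the closest match
```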
It’s Not All Open: Gradient’s Industry-Specific LLMs
Although open source models are emphasized on its website, Gradient has also developed its own proprietary LLMs. These are industry-specific models for the healthcare and financial services industries, and are available on its enterprise cloud platform.
In the healthcare market, Gradient recently launched Nightingale, described as “an LLM with a medical degree.” As well as being trained on medical data, Nightingale, Gradient claims, “understands best practices in logical reasoning” (presumably to help it make diagnoses). In the financial domain, Gradient has released Albatross, an LLM that “has been trained on investment, regulatory, educational, and proprietary financial records data.”
For the healthcare solution, Chang said there are a few main use cases so far. Firstly, medical-specific, structured data processing — for example, to organize a patient’s medical and billing history. Secondly, what he labeled “user-facing experiences,” which includes chatbots. The third main use case is knowledge management; “building a repository for the administrative teams to be able to access institutional knowledge,” as Chang put it.
Challenges for Enterprises with AI Development
I asked what pain points Gradient’s initial enterprise customers are running into when it comes to developing applications using LLMs.
“I think the most interesting area is the fundamental misconceptions around fine-tuning, retrieval-augmented generation [RAG], prompt engineering,” Chang replied. “When we talk about customizing an AI solution, it’s not really one or the other — it’s a synthesis in utilization of all three.”
He said that, a lot of the time, people try to rely on just one of those three techniques.
“There’s a lot of choice, there’s a lot of ability to shoot yourself in the foot in many ways with these different techniques,” he said. “So we try to provide a little bit more guidance there.”
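Chang’s “synthesis of all three” can be sketched as a single pipeline: retrieval supplies context (RAG), a template turns that context into a structured request (prompt engineering), and a fine-tuned model generates the answer. The knowledge base, retriever, and model below are all stubs invented for the example; in practice the retriever would be a vector store and the model a fine-tuned LLM.

```python
# Sketch combining fine-tuning, RAG, and prompt engineering in one flow,
# rather than relying on any single technique alone.

KNOWLEDGE_BASE = {
    "returns": "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> str:
    """RAG step: naive keyword lookup standing in for vector search."""
    for topic, passage in KNOWLEDGE_BASE.items():
        if topic in question.lower():
            return passage
    return ""

def build_prompt(question: str, context: str) -> str:
    """Prompt-engineering step: instructions + retrieved context + question."""
    return (
        "Answer using only the context below. If unsure, say so.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

def fine_tuned_model(prompt: str) -> str:
    """Stub for a fine-tuned LLM; simply echoes the context it was given."""
    return prompt.split("Context: ")[1].split("\n")[0]

question = "What is your returns policy?"
answer = fine_tuned_model(build_prompt(question, retrieve(question)))
print(answer)  # "Items may be returned within 30 days with a receipt."
```

Each stage covers a different failure mode: retrieval keeps answers grounded in current data, the prompt constrains the model's behavior, and fine-tuning adapts its style and domain knowledge. Leaning on only one of them is the foot-gun Chang describes.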
He also noted that it’s still “incredibly complex” to do those three tasks on an LLM. Setting up the infrastructure for fine-tuning and figuring out how to make it run with existing frameworks can be very challenging, he cautioned.
Gradient is targeting developers with its product, so it offers to handle all of the backend complexity itself. Chang just wants devs to use its API. “Call it, and we make sure that it runs reliably every single time,” he said.
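Even with a reliable backend, client code calling any hosted inference API typically adds its own safety net. The sketch below shows a generic retry-with-backoff pattern; the flaky endpoint is a stub invented for the example, not Gradient’s API.

```python
import time

# Generic client-side pattern around a hosted inference call: retry
# transient failures with exponential backoff. flaky_inference is a
# stub that fails twice, then succeeds.

class TransientError(Exception):
    pass

attempts = {"n": 0}

def flaky_inference(prompt: str) -> str:
    attempts["n"] += 1
    if attempts["n"] < 3:  # simulate two transient failures
        raise TransientError("temporary upstream error")
    return f"completion for: {prompt}"

def call_with_retries(fn, prompt, max_retries=5, base_delay=0.01):
    """Retry fn(prompt) on TransientError, backing off exponentially."""
    for attempt in range(max_retries):
        try:
            return fn(prompt)
        except TransientError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

print(call_with_retries(flaky_inference, "hello"))  # completion for: hello
```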
What Skills Do AI Engineers Need?
Lastly, I wanted to get Chang’s view on the emerging role of the “AI engineer” in IT departments. Does he think this is a trending role?
“So our perspective is that where the AI engineer comes from is a whole spectrum of technical individuals,” he said. “And so what we’ve interacted with are analysts turning into AI engineers, data engineers turning into AI engineers, product engineers turning into AI engineers, ML engineers (of course) turning into AI engineers.”
According to Chang, the role of an AI engineer doesn’t require a specialized body of knowledge.
“The problem spaces that you’re thinking about now are reasoning about the actual business use case, reasoning about your data sources, and finding ways to critically analyze what happens when you’re training the model, or prompting the model, or installing the AI into a service.”
Chang thinks those are all generalized skill sets, which explains why so many technical roles are transitioning into what’s being called AI engineering now.
“In our opinion, the world is going to move more towards this ‘anyone can be an AI engineer’ concept — and our product is designed to enable that.”