Lightning AI Teases App Development Platform — an ‘OS for AI’
Will Falcon has quite a resume. He’s the creator of PyTorch Lightning, a wrapper for PyTorch (the popular machine learning framework), and the CEO of a VC-funded “OS for AI” company called Lightning AI. And, as I learned today in an interview with Falcon, he’s an ex-Navy SEAL trainee who learned to code only ten years ago.
Although he’s currently best known for the open source PyTorch Lightning, which has nearly 25,000 stars on GitHub, his company Lightning AI has ambitions to become a leading AI development platform. Among its features is Lightning Apps, which is described as a “framework to build composable, reactive ML workflows” using Python. Customers can also create full AI applications. The platform is currently only available to enterprises, but will shortly be made available to all developers.
Where PyTorch Lightning Fits
Before we get to the company’s platform, I first wanted to clarify the relationship between PyTorch (an open source machine learning framework originally developed by Meta) and PyTorch Lightning.
“PyTorch is a framework in Python for building models, but they’ll only give you the blocks, right? It’s like a car [and] they just give you a bunch of pieces. Lightning […] pre-assembles the cars for you and then you can tweak the cars if you want, right.”
Since I’d mentioned ChatGPT at the beginning of our conversation, he added for good measure that OpenAI is “like a Ferrari.”
I’m not sure that analogy totally works either, but regardless, point taken: PyTorch is an ML framework that gives you the pieces to build an ML app, PyTorch Lightning does some of the “pre-assembly” but you still have to build the app, and ChatGPT is a high-end ML app.
Lightning AI’s Platform
Lightning AI is an enterprise platform for building ML models, explained Falcon. Currently, this AI platform is not open to independent developers, but he said it will become generally available “in a few months.”
He went on to describe what their customers typically do on the Lightning AI platform.
“You would do your R&D, you would develop your models, you would try ideas out, you would train the models, you would deploy the models. So it’s like the operating system of how you do that.”
So the core of the platform is model development, or as Falcon put it, the “operations around model development.”
I asked about the use cases for its current customers. “We have social media companies that are using us to train image models,” he replied. “So they do news feed recommendations […] they’re like at the scale of an LLM, but it’s for images.”
He also mentioned companies using its platform to train video or multimodal models. Still other customers are using it to train LLMs — for example, pharmaceutical companies training LLMs for drug discovery.
While model development and training is the primary functionality of Lightning AI, Falcon told me that its users can also create applications on its platform by selecting third-party AI tools (I was given some names of common AI development tools, but after the interview, I was asked not to reveal them, since the platform isn’t yet public).
He showed me some demo examples but noted that most apps currently built in Lightning AI are internal apps for companies. And so it was time for another analogy.
“So you can think about a Lightning App as like a recipe,” he said, with the ingredients being third-party AI tools. “I can install the ingredients as I need them […] we’re more like an operating system.”
Open Source AI
Falcon recently posted on X/Twitter, “As a community, we must continue to advocate for AI to remain open source.” That was in response to a tweet by Yann LeCun, Chief AI Scientist at Meta, who wrote that “AI systems are fast becoming a basic infrastructure” and that “historically, basic infrastructure always ends up being open source.”
Meta, of course, has been leading the way with open sourcing large language models (LLMs) — most recently with the open source release of Llama 2. I asked Falcon whether he thinks other leading companies in the LLM space, such as OpenAI and Google, will also open source their models in the future.
“Eventually things do open up,” he replied. “There’s already a lot of precedent for this. […] So IBM had the mainframe, right? And it was something that only they could do. Imagine if […] personal computing hadn’t come and you always had to go to IBM for a computer. Like, that’s crazy, right? So, no fundamental technology will ever just be owned by a single company. That doesn’t happen. So whether they want to or not, it will be open source — it will be available to more people, right? Just because it’s just how it is. So, I think that that’s probably the closest analogy that I can think of.”
He concedes that he doesn’t know if OpenAI and Google will open up, but he said they will probably have “versions [of LLMs] that are private.” This time, he used the analogy of Windows (proprietary) versus Linux (open source).
The Windows and Linux analogy makes more sense to me than the IBM mainframe and PC one. Over time, open source LLMs will very likely become more powerful and more plentiful, and so they will eventually rival proprietary LLMs like OpenAI’s latest GPT model. Llama 2 is arguably already very close to the quality of GPT-4 (Anyscale claimed in August that Llama 2 “is about as factually accurate as GPT-4 for summaries”).
Speaking of opening up… Since Lightning AI hasn’t yet moved its developer platform into general availability, it’s hard to tell how good it will be for application development — especially compared to proven developer platforms that offer AI functionality, like Vercel’s. Will Falcon talks a big game with his OS analogy, but very few software products ever become as fundamental as an operating system. So let’s reserve judgment until Lightning AI’s platform is made available to all.
For now, there’s plenty to ponder in terms of what developers might do with open source LLMs — especially as more developer platforms for AI apps become available.