How Ray, a Distributed AI Framework, Helps Power ChatGPT

According to Ion Stoica, co-founder of Databricks and Anyscale, and a professor of computer science at UC Berkeley, 2023 will be the year of “distributed AI frameworks.” Needless to say, he has already had a hand in creating such a tool, in the form of Anyscale’s open source Ray platform. Among other uses, Ray helps power OpenAI’s groundbreaking ChatGPT.
I interviewed Stoica to find out what Ray does exactly and, more generally, what is needed to scale AI software in this new era of generative AI. We also discussed the latest in “sky computing,” a term Stoica and his Berkeley team introduced in 2021, in a paper that proposed a new form of cloud computing built around interoperability and distributed computing.
What Is Ray?
According to Stoica, Ray is a “distributed computing ecosystem as a service,” which for the past couple of years has been focused on “supporting machine learning workloads.” He says it began development in 2016 as a class project at Berkeley, with the goal of achieving “distributed training” (training machine learning models across multiple machines). Berkeley was also where Apache Spark, a data processing engine, was built. But Stoica said they quickly learned that Spark wasn’t the best fit for deep learning workloads.
“Spark was very good for data processing and for classical machine learning,” he explained. “But at that time [2016] […] deep learning was surging and deep learning needed GPUs. Without going into much detail of Spark, since it is Java-based it doesn’t support GPUs very well.”
As development continued, the Berkeley team, which included Stoica’s Anyscale co-founders and then-grad students Robert Nishihara and Philipp Moritz, got more ambitious. After distributed training, they added support for reinforcement learning.
“Reinforcement learning is a pretty complex beast, so to speak,” said Stoica, “because it requires you to do many things — it requires you to train an agent, to interact with the simulator or the environment, to get the state of the environment, and then to make decisions based on that. […] And then many of these reinforcement learning applications also use simulators, like games or manufacturing simulators. And so you have to run the simulation, and all of this at scale.”
One of the first use cases for Ray was helping Team New Zealand retain the America’s Cup, the most prestigious yachting prize in the world. Similar to Formula One with cars, champions in the America’s Cup rely on state-of-the-art technology to prevail. To win in 2021, Team NZ used Anyscale’s RLlib, a reinforcement learning Python framework built on Ray, to run sailing simulations “around the clock.”
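To give a flavor of what that looks like in code, here is a minimal, hedged sketch of an RLlib training loop using Ray 2.x’s config-builder API. Team NZ’s sailing simulator is proprietary, so the stock CartPole environment stands in for it, and the worker count is an arbitrary example.

```python
# Illustrative RLlib sketch (Ray 2.x API): train a PPO agent against a
# simulator environment. CartPole-v1 is a stand-in for Team NZ's
# proprietary sailing simulator.
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")       # swap in a custom simulator env
    .rollouts(num_rollout_workers=4)  # parallel simulation workers
)
algo = config.build()

for _ in range(10):
    result = algo.train()  # one iteration: collect rollouts, update the policy
    print(result["episode_reward_mean"])
```

The point of the framework is the `num_rollout_workers` knob: running simulations “around the clock” becomes a matter of asking Ray for more workers.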

Graphic via Anyscale
On its website, Anyscale positions Ray as the “simplest path to scaling Python.” According to Stoica, Ray is “like an extension of Python” and, like Python, it comes with a set of libraries targeted at different use cases. The awkwardly named RLlib is for reinforcement learning, but there are similar libraries for training, serving, data pre-processing, and more.
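A minimal sketch makes the “extension of Python” claim concrete (the function here is a toy of my choosing, not from the interview): one decorator turns an ordinary function into a distributed task.

```python
# Ray core in miniature: an ordinary Python function becomes a
# distributed task with one decorator.
import ray

ray.init()  # connect to (or start) a Ray cluster

@ray.remote
def square(x):
    return x * x

# Each .remote() call is scheduled somewhere on the cluster and runs
# in parallel; ray.get() blocks until the results arrive.
futures = [square.remote(i) for i in range(100)]
print(sum(ray.get(futures)))
```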
Why OpenAI Is Using Ray
On its case studies page, Anyscale lists companies like Uber, Shopify and Instacart as users of Ray. But of course, the most interesting use case currently is how OpenAI is using it for ChatGPT. I asked Stoica for more details about that.
“I wish I knew a lot more… OpenAI is very secretive,” he chuckled. However, he did put into context the reason why OpenAI relies on Ray’s distributed scaling technology. If you plot compute demands for training state-of-the-art machine learning models, he said, that graph is “growing at least 10 times every 18 months.” This growth rate has been happening since 2010, he added.
If that formula sounds familiar, you’ll remember that Moore’s Law states that the number of transistors in a dense integrated circuit (IC) doubles about every two years. Stoica is saying that ML training requirements are increasing tenfold every 18 months, which implies that individual machines aren’t powerful enough to meet the demand for training ML models.
“Moore’s law is slowing down,” said Stoica, “and so you’ll see this growing gap between the demand of these machine learning workloads and the capabilities of a single node or single processor. And it’s obvious that the only way you can support these workloads, eventually, is to distribute these workloads.”
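The arithmetic behind that gap is worth spelling out. Compounding the two growth rates over, say, six years:

```python
# Compare the two growth curves Stoica cites, compounded over six years:
# ML compute demand growing ~10x per 18 months vs. Moore's Law doubling
# transistor counts roughly every 24 months.
months = 72
demand = 10 ** (months / 18)    # 10^4 = 10,000x more compute demanded
hardware = 2 ** (months / 24)   # 2^3  = 8x more transistors per chip
print(f"demand: {demand:,.0f}x, single-chip capability: {hardware:.0f}x")
```

Four orders of magnitude on the demand side against a single-digit gain on the hardware side: hence distribution.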
He thinks that accelerators, like GPUs, will help close the gap a little bit, “but they are not going to solve the problem.” Likewise, he says the issue isn’t just compute power, but also the ability of memory to keep up with ML workloads.
Managing Data and Using Ray with Kubernetes
To help meet these ML demands, Ray orchestrates the process of ingesting and processing data.
“It’s a very generic, easy-to-use, Python-native, distributed compute platform,” said Stoica, adding that it serves as a “substrate to do training, data ingestion, reprocessing — all of these things.”

Image via Anyscale, via Google.
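As a rough illustration of that substrate idea, Ray’s data library expresses ingestion and preprocessing as a parallel pipeline. A hedged sketch (the bucket path is a hypothetical placeholder and the transform is an identity stub):

```python
# Hedged sketch of ingestion/preprocessing with the Ray Data library.
# The S3 path is a hypothetical placeholder.
import ray

ds = ray.data.read_parquet("s3://example-bucket/training-data/")

# map_batches runs the transform in parallel across the cluster.
ds = ds.map_batches(lambda batch: batch)  # replace identity with real preprocessing

# Stream the processed batches into a training loop.
for batch in ds.iter_batches(batch_size=1024):
    pass  # feed `batch` to the trainer here
```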
This ability to manage a complex compute process sounds a little bit like what Kubernetes does for cloud computing infrastructure (i.e. orchestrate it at scale in order to deploy applications).
“Ray, it’s one level above,” Stoica said in response, referring to the computing stack. “Because Ray is for the programmer. […] it’s also doing some management of the resources and so forth, but it’s on top of Kubernetes.”
He noted that Google recently built a machine learning platform on GCP, using a combination of Ray, Kubernetes and Kubeflow.
Sky Computing Update
Finally, I asked Stoica for an update on our August 2021 conversation about “sky computing,” a term he and his Berkeley colleagues coined to refer to a proposed new era of interoperable cloud computing. In November, his Berkeley lab announced SkyPilot, an open source “intercloud broker for Sky Computing,” as a first step towards this vision.
“Given a job and its resource requirements (CPU/GPU/TPU),” Berkeley’s Zongheng Yang explained, “SkyPilot automatically figures out which locations (zone/region/cloud) have the compute to run the job, then sends it to the cheapest one to execute.”

Graphic via Berkeley
SkyPilot operates just above the cloud computing layer, so (as with Kubernetes) there is no direct relation to what Ray is doing. However, it’s interesting that the early use cases for SkyPilot are to run ML training on the cloud. So it seems that SkyPilot is being positioned as a complementary piece to Ray in a modern ML-reliant tech stack.
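To picture Yang’s description in code: SkyPilot ships a Python API alongside its CLI. A hedged sketch, assuming `sky.Task`, `sky.Resources` and `sky.launch`; the training script and the GPU requirement are hypothetical examples.

```python
# Hedged SkyPilot sketch: declare a job and its resource requirements,
# and let the intercloud broker pick where it runs. train.py and the
# V100 requirement are hypothetical.
import sky

task = sky.Task(run="python train.py")
task.set_resources(sky.Resources(accelerators="V100:4"))

# SkyPilot searches zones/regions/clouds for available capacity and
# launches the job on the cheapest feasible one.
sky.launch(task, cluster_name="train-cluster")
```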
Solving Increasing Compute Demands
If there is a through line to Ion Stoica’s work — from the massive data companies he helps build, to the sky computing work he does with his students — it’s that he wants to find solutions for what he terms “the compute demands of this world.” With ML becoming increasingly important to enterprises and to society in general, compute will need to be distributed. Ray is the platform for that, says Stoica (Anyscale also runs a managed service version of Ray). As for his sky computing concept, that’s about distributing the load — and cost — of cloud computing, a layer below Ray.
Finally, Stoica expects to see a lot more open source ML models come out, since many businesses will be uncomfortable relying on one company for ML — such as OpenAI, especially now that Microsoft is about to own 49% of it. More ML models will, of course, raise the demand for distributed compute solutions. But don’t worry, Ion Stoica has you covered.