
Dealing with Distributed Data When Training AI Models

Distributed data can be a challenge to training artificial intelligence models. Federated learning can change the paradigm.
Oct 18th, 2022 8:21am
Feature image via Pexels

Federated learning is a tool that lets your phone autocorrect your spelling without uploading the entirety of your text. It can also be applied to training artificial intelligence at the data’s source, without pooling it into one location.

The approach could bring breakthroughs in regulated industries such as health care and finance (where legal issues can mean data can’t be moved across jurisdictions), as well as to the Internet of Things, where data is dispersed by nature.

“Some of the most meaningful use cases that you and I would hope could come into the world, and developers want to bring into the world, are blocked because the data can’t move,” said Steve Irvine, founder and CEO of integrate.ai, an AI company. “Instead of data having to come to a central location to train the machine learning model, versions of the model gets sent out to the location where the data resides.”

The New Stack spoke with Irvine to learn how integrate.ai deploys federated learning to train AI models without pooling data.

How It Works

Integrate.ai offers a software development kit (SDK) that creates a preset Docker container with the libraries and other tools needed to train the model locally. The developer or data custodian determines what data to include in the container. Integrate.ai then takes the model and federates it, letting it train on the selected data nodes.

“If you were to set up a brand new network, it’s as easy as a couple of lines of code,” Irvine said. “It creates a container, basically, in your environment, wherever it’s at and then you decide what data gets put in there and you decide what tasks you’re comfortable with that data participating in.”
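The kind of setup Irvine describes might look roughly like the sketch below: a few lines that declare a local training environment, the data it exposes, and the tasks that data may participate in. The `FederatedClient` class and its methods are illustrative stand-ins, not the actual integrate.ai SDK API.

```python
# Hypothetical sketch of an SDK-driven setup; all names are
# stand-ins, NOT the real integrate.ai API.

class FederatedClient:
    """Stand-in for an SDK client that provisions a local training container."""

    def __init__(self, environment):
        self.environment = environment
        self.datasets = {}
        self.allowed_tasks = set()

    def register_dataset(self, name, columns):
        # The data custodian decides what data goes into the container...
        self.datasets[name] = columns

    def allow_task(self, task):
        # ...and which training tasks that data may participate in.
        self.allowed_tasks.add(task)

# "A couple of lines of code" to stand up a new node:
client = FederatedClient(environment="on-prem-eu")
client.register_dataset("claims_2022", columns=["age", "region", "amount"])
client.allow_task("risk-model-training")
```

The key design point is that inclusion is opt-in per dataset and per task: nothing leaves the environment, and the custodian's declarations define the boundary of what the federated model can ever touch.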

The data itself doesn’t move. The model runs in the local environment, then communicates and coordinates across all the nodes to optimize the model as if it had trained on all the data in a single local repository. Each node then sends its updated parameters back to the global model, he explained.
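The standard technique behind this pattern is federated averaging (FedAvg): each node trains on its own data, and only the resulting parameters, never the raw data, are combined into the global model, weighted by how much data each node holds. A minimal toy version, fitting y = 3x across two data silos by gradient descent:

```python
# Minimal federated averaging (FedAvg) sketch: each silo trains locally,
# then only parameters (never raw data) are averaged into a global model.

def local_train(w, data, lr=0.01, epochs=50):
    """Gradient descent on one node's private data; returns the new weight."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(weights, sizes):
    """Combine local weights, weighted by each node's dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights, sizes)) / total

# Two silos whose data never leaves its node (true relation: y = 3x).
silo_a = [(1.0, 3.0), (2.0, 6.0)]
silo_b = [(3.0, 9.0), (4.0, 12.0), (5.0, 15.0)]

global_w = 0.0
for _ in range(10):  # each round: send the model out, average the updates
    local_ws = [local_train(global_w, silo) for silo in (silo_a, silo_b)]
    global_w = federated_average(local_ws, [len(silo_a), len(silo_b)])

print(round(global_w, 2))  # → 3.0, as if trained on the pooled data
```

Production systems layer secure aggregation and other privacy protections on top, so the coordinator sees only combined updates, but the round structure is the same.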

“It creates a seamless experience on the back end — the researcher would never know the difference, other than they’re not getting on a plane and waiting a year to run one model when they could get it done in a couple of minutes,” Irvine said.

All of those actions behind the scenes — setting up the network, training the model, averaging the model, and ensuring that it’s all done in a privacy-safe way — are controllable through APIs, which are installed and used through the SDK.

After the training is done, the Docker container is torn down. The tool is largely Python-based, but it supports what Irvine called “custom models.”

“Most data scientists would use something like a Jupyter notebook, which is where they are actually building the models. We would just show up as commands within the Jupyter Notebooks,” he said. “So you can just federate your models in the exact same spot that you’re building them; you can train them in that same environment.”

This makes for a seamless experience for developers and data scientists, he claimed.

The Open Source Options

There are open source tools for federated learning. Nvidia offers NVIDIA FLARE, an open source federated learning framework, and Google’s TensorFlow Federated allows the building of federated models, he said. Alibaba also launched a federated learning framework this year.

“There are a lot of big open source ways to do experimentation, but they generally focus on allowing data scientists to be able to experiment and simulate,” Irvine said. “If you’re a developer who’s responsible for now having this in your product, that’s the gap that currently exists in the marketplace, because those open source systems require you to be able to basically own it and manage it.”

Irvine contended this is where integrate.ai differentiates itself: playing a Stripe-like role for the industry by simplifying the integration work.

Use Cases for Federated Learning

Federated learning opens up a lot of opportunities for regulated industries, such as health care and financial services, he said. One use case is a multinational organization that, for regulatory reasons, cannot move data to a single location. Others involve collaboration between organizations where moving the data isn’t feasible or efficient — for example, scanning breast cancer X-rays from hospitals in different countries, Irvine said.

“Right now, you can’t centralize that data; there’s a lot of regulations that don’t allow you to move any of that data across jurisdictions,” he said. “Your application’s just not going to work as well as it could, even though all of those customers would be glad to collaborate because they just want the detection to be better, they want it to be more accurate.”

Other potential use cases include training models on data that resides at the edge or on Internet of Things devices, Irvine said. The approach also reduces costs, as well as privacy and security concerns, he added.

“When everything in the world is a computer, and a data-capture and a decision-making device, we’re just going to have to have different network infrastructure behind that to support making a network.”

TNS owner Insight Partners is an investor in: Docker.