
Paperspace Co-Founders Discuss TPUs and Cloud Deep Learning

An interview with Paperspace CEO Dillon Erb and Chief Technology Officer Tom Sanfilippo about using Google's TPUs for machine learning work.
Jul 10th, 2018 1:43pm



It’s a crowded market if you’re a machine learning company. Every vendor under the sun has integrated some new-fangled AI-driven service, making the real ROI tough to spot in the jungle of buzzwords and feature creep. Paperspace is hoping to make that journey a little easier for businesses by offering Gradient, an easily manageable infrastructure platform for deep learning.

Under the hood, Paperspace is not just another AI startup: it's offering developers access to Google's Tensor Processing Units (TPUs). Paperspace currently offers both TPU and GPU access in its deep learning platform.

That alone was worth sitting down with CEO Dillon Erb and Chief Technology Officer Tom Sanfilippo for a chat. These co-founders took time to discuss just what it’s like to build deep learning applications with TPUs.

“People are talking about running 400 or 500 TPUs at the same time on a particular workload. To feed all of those TPUs at the same time you need an architecture for running a farm of virtual machines to feed them. You need to position all the data so the TPU can poll the data on demand so it never has an empty queue of work to do. And to set all that up is pretty complicated,” said Sanfilippo.
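To make that concrete, the usual way to keep an accelerator's work queue from running empty is an asynchronous input pipeline that reads, decodes and prefetches the next batches while the current one is being consumed. Below is a minimal sketch of such a pipeline using TensorFlow's tf.data API; it assumes TensorFlow 2.x, JPEG images stored in TFRecord files, and a hypothetical file pattern, and it is an illustration of the general idea rather than Paperspace's or Google's actual setup.

```python
import tensorflow as tf

# Hypothetical TFRecord schema: a JPEG-encoded image and an integer label.
FEATURES = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(record):
    # Decode one serialized example into a (image, label) pair.
    example = tf.io.parse_single_example(record, FEATURES)
    image = tf.io.decode_jpeg(example["image"], channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    return image, example["label"]

def make_dataset(file_pattern, batch_size):
    # Read many shards in parallel so input I/O is not the bottleneck.
    files = tf.data.Dataset.list_files(file_pattern)
    ds = files.interleave(
        tf.data.TFRecordDataset,
        num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.shuffle(10_000).batch(batch_size, drop_remainder=True)
    # Prefetch stages upcoming batches in the background, so the
    # accelerator always has data waiting instead of an empty queue.
    return ds.prefetch(tf.data.AUTOTUNE)

# Example usage with a placeholder GCS path:
# train_ds = make_dataset("gs://my-bucket/train-*.tfrecord", batch_size=1024)
```

At the scale Sanfilippo describes, the same idea is simply multiplied out: a fleet of host machines each runs a pipeline like this to keep its share of the TPUs saturated, which is the operational complexity Gradient aims to hide.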

Erb said that deep learning has become an enticing offering for businesses, but pointed out that actually implementing a machine learning pipeline and producing a return on investment can be difficult for enterprises.

“The reality is making that operational and turning the newer developments coming out of the research and academic world into something that actually yields an ROI for a business is still very difficult. That difficulty exists on a number of dimensions. The one that we spend the most of our time on is around infrastructure management and automation,” said Erb.

The pair are hoping their deep learning platform can make it easier for high-performance computing (HPC) developers to deal with their daily work, as well. Erb and Sanfilippo pointed out that the more academic world of HPC can be a bit slower moving than developers would like, and that their Jupyter Notebook-enabled management and automation service can speed up the adoption of GPU and TPU compute for the traditional research community.

In this Edition:

0:30: Are businesses seeing ROI from machine learning yet?
3:35: Why are machine learning and AI better enabled today than they were 10 years ago? Is it the hardware, the cloud, the data?
6:18: What’s it like to build applications with Tensor Processing Units?
8:34: Hardware architectures and their approaches to data
11:06: What’s in Gradient?
14:51: What do you think of the state of academic HPC?

Google is a sponsor of The New Stack.

Feature image via Pixabay.
