
Google I/O Showcases the Machine Learning Strengths of TensorFlow

May 19th, 2017 4:00am

With Google betting heavily that its machine learning and analytics capabilities will be the big differentiator for its Google Cloud Platform, it's no surprise that the company spent a lot of time at this year's Google I/O event demonstrating its various machine learning and artificial intelligence tools, including an updated TensorFlow.

Google offers BigQuery and other big data analysis tools, but it is the TensorFlow machine learning modeling library that remains the company's unique value proposition in the space. While many other AI and machine learning efforts focus on the server, TensorFlow targets not only the server side but also edge devices.

Earlier this week, Google launched Google Cloud IoT Core, a suite of tools and analytics for dealing with large numbers of IoT devices and the data they generate. TensorFlow is also a part of this platform strategy, with Google's IoT plans focusing heavily on the ability to push TensorFlow models out to edge devices. This enables individual devices to perform tasks such as face recognition, speech recognition, and image recognition without having to connect to the core application.
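The appeal of pushing a model to the edge is that inference happens locally, with no round trip to the cloud. A real deployment would ship a serialized TensorFlow graph to the device; this minimal pure-Python sketch (hypothetical weights, not Google's actual IoT Core API) stands in for that idea:

```python
import math

# Hypothetical weights "pushed" to the device after training in the cloud.
# A real deployment would ship a serialized TensorFlow model instead.
WEIGHTS = [0.8, -0.4, 0.3]
BIAS = 0.1

def on_device_predict(features):
    """Run inference locally, with no round trip to the core application."""
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes to a probability

# The device can act on the score (e.g. "face detected") immediately,
# even when the network is slow or unavailable.
score = on_device_predict([1.0, 0.5, 2.0])
print(round(score, 3))
```

Because only the trained weights travel to the device, the heavy lifting of training can stay in the data center while the edge device handles recognition on its own.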

Internally, Google has brought TensorFlow to bear on more than 1,000 projects, from Gmail to Google Photos. Outside of Google, the TensorFlow repository on GitHub is the most popular open source machine learning project on that service.

To continue the project's forward momentum, Google I/O played host to a number of TensorFlow demonstrations, including the second version of its Tensor Processing Units (TPUs), application-specific integrated circuits customized for vector calculations. Andy Swing, senior staff hardware engineer at Google, said that the new version can handle both training and execution of a TensorFlow model.

Unlike the first TPU, said Swing, TPU 2 was designed from the ground up specifically to speed up the training of TensorFlow models, not just their execution.

“The original version 1’s are still used a ton, but they were made very specifically for one task: inference. This version is targeted at training. It scales to 64 units together, which would only be good for training because there are no models that big today,” said Swing.
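Scaling training across dozens of units typically means splitting each batch of training data among them. As a rough illustration of why 64 units help with training specifically (a toy synchronous data-parallel sketch, not Google's actual pod software), each unit computes a gradient on its own shard and the results are averaged:

```python
NUM_UNITS = 64

def shard(batch, num_units):
    """Split one global batch into equal per-unit shards (data parallelism)."""
    per_unit = len(batch) // num_units
    return [batch[i * per_unit:(i + 1) * per_unit] for i in range(num_units)]

def local_gradient(shard_data, w):
    # Toy squared-loss "gradient" for fitting w to the mean of the shard.
    return sum(2 * (w - x) for x in shard_data) / len(shard_data)

def synchronous_step(batch, w, lr=0.1):
    """Each unit computes a gradient on its shard; gradients are averaged."""
    grads = [local_gradient(s, w) for s in shard(batch, NUM_UNITS)]
    return w - lr * sum(grads) / len(grads)

batch = [float(i) for i in range(256)]  # 4 examples per unit
w = 0.0
for _ in range(50):
    w = synchronous_step(batch, w)
print(round(w, 2))  # converges toward the batch mean
```

The per-step work shrinks as units are added, which matters for training (many passes over huge datasets) far more than for one-shot inference.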

The first TPU was developed after a Google engineer realized that if everyone with an Android phone used speech recognition for just three minutes a day, Google would have to double its number of data centers.

The TPU 2 delivers 180 TFLOPS of floating-point performance. It has 64GB of high-bandwidth memory, and custom networking capabilities to bring multiple TPUs to bear on a single job. The Google team is currently soliciting developers for machine learning projects to run on its cluster of 1,000 devices.
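Back-of-the-envelope arithmetic shows what those numbers buy. Assuming a hypothetical model requiring 10^18 floating-point operations of training work (an illustrative figure, not a benchmark), peak throughput translates to wall-clock time like this:

```python
TPU2_TFLOPS = 180  # peak floating-point throughput cited for TPU 2

# Hypothetical training workload: 10^18 total FLOPs (illustrative only).
training_flops = 1e18

seconds_at_peak = training_flops / (TPU2_TFLOPS * 1e12)
print(f"{seconds_at_peak:.0f} s at peak on one TPU 2")

# Spread across Google's 1,000-device research cluster:
cluster = 1000
print(f"{seconds_at_peak / cluster:.2f} s at peak across {cluster} devices")
```

Real workloads never hit peak throughput, but the arithmetic conveys why purpose-built hardware plus a large cluster changes what training runs are practical.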

Elsewhere at the show, Google's Magenta team was demonstrating its work with TensorFlow. Magenta is a sub-team inside Google Brain tasked with combining machine learning and art to produce interesting experiences. At Google I/O, this took the form of AI-Duets.

Kory Matheson, one of the Magenta team developers on AI-Duets, said that the project was an attempt to create interesting interactions with an AI on a musical keyboard. As you play the piano, AI-Duets responds with its own notes, extemporaneously performing a duet of sorts.

To train the application, Matheson said, the TensorFlow project analyzed a massive archive of MIDI files and fake book cheat sheets for popular songs. The model is fairly deep and large, said Matheson, and is hosted online with a Web-based interface.
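Magenta's actual model is a large, deep neural network, but the basic idea of learning note-to-note patterns from a MIDI corpus and replying to a player can be sketched with a simple first-order transition table (a toy stand-in, not Magenta's architecture; the corpus below is invented):

```python
from collections import Counter, defaultdict

def train_transitions(melodies):
    """Count which note tends to follow which in the training corpus."""
    table = defaultdict(Counter)
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            table[prev][nxt] += 1
    return table

def respond(table, note):
    """Reply with the most common follower of the player's note."""
    followers = table.get(note)
    return followers.most_common(1)[0][0] if followers else note

# Toy "corpus" of MIDI pitch sequences (60 = middle C).
corpus = [[60, 62, 64, 62, 60],
          [60, 62, 64, 65, 64]]
table = train_transitions(corpus)
print(respond(table, 62))  # the model's duet reply to the player's D
```

A neural model generalizes far beyond such lookup tables, but the training loop is conceptually the same: digest a large archive of music, then predict plausible continuations of what the human just played.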

One topic that came up repeatedly around TensorFlow at Google I/O was the higher level neural network API known as Keras. Google developers mentioned this deep learning library again and again, and in fact it is the second most popular open source machine learning project on GitHub. Although Keras comes from outside of Google, the team has embraced and supported the project.

For Google I/O, a release candidate of TensorFlow version 1.2 was released. Changes in this version include many bug fixes, the availability of the C library on Windows, and a new call for running similar steps over and over with less overhead.

Feature image: Google CEO Sundar Pichai extolling the virtues of AI at Google I/O 2017 (Alex Handy).
