
An Introduction to Google Vertex AI AutoML: Training and Inference

21 Jun 2021 1:30pm

This post is the second in a two-part series exploring Google’s newly launched Vertex AI, a unified machine learning and deep learning platform. This post delves into the training and inference process. Read the previous installment, on data preparation, here.

Google’s Vertex AI is a unified machine learning and deep learning platform that supports AutoML models and custom models. In this tutorial, we will train an image classification model to detect face masks with Vertex AI AutoML.

To complete this tutorial, you need an active Google Cloud subscription and Google Cloud SDK installed on your workstation.

There are three steps involved in training this model — dataset creation, training, and inference.

Refer to the previous part of the tutorial to complete the dataset creation step. This tutorial will focus on the training and inference of the model.

With the dataset in place, let’s start by clicking on the train new model button:

In the next step, choose AutoML and click on continue:

Give the model a name and leave the defaults for the data split:

Let’s provide eight hours of node budget for the training. Make sure to enable early stopping, which ends training once the model stops improving so that you don’t spend the full node-hour budget unnecessarily:

Finally, start the training and wait for an email sent after the completion of the job:

The model is now trained and available in the Vertex AI dashboard:

Feel free to explore the attributes such as precision, recall, and confusion matrix:

Let’s go ahead and deploy the model to test the accuracy. Under the deploy and test section, click on deploy to endpoint:

Configure the endpoint by giving it a name, routing 100% of the traffic to it, and allocating one compute node.

Once the endpoint becomes ready, test the model by uploading an image:

Play with the model by uploading images of faces with and without masks:

Let’s now use cURL to invoke the model endpoint from the command line.

Create a JSON object to hold the image data, which must be a base64-encoded string.

Put the base64-encoded string in a file called input.json, populating the content element with it.
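The two steps above can be sketched as follows. The file name face-with-mask.jpg is a placeholder for whatever test image you use; the `-w 0` flag is specific to GNU base64 (on macOS, plain `base64` already produces unwrapped output):

```shell
# Encode the test image as a single-line base64 string.
# (face-with-mask.jpg is a placeholder; substitute your own image.)
base64 -w 0 face-with-mask.jpg > encoded.txt

# Build input.json: an "instances" array whose item carries the
# base64 payload in its "content" element.
cat > input.json <<EOF
{
  "instances": [{
    "content": "$(cat encoded.txt)"
  }]
}
EOF
```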

Set environment variables for endpoint id, project id, region, and the input JSON file.
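For example, with placeholder values (substitute your own endpoint ID, project ID, and region, which you can copy from the Vertex AI console):

```shell
# Placeholder values -- replace with the details of your deployment.
export ENDPOINT_ID="1234567890123456789"   # from the Vertex AI endpoints page
export PROJECT_ID="my-gcp-project"
export REGION="us-central1"
export INPUT_DATA_FILE="input.json"
```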

Generate the cURL request and evaluate the response.
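A sketch of the request, following the Vertex AI REST API’s `:predict` URL pattern; it assumes the environment variables above are set and that the Google Cloud SDK is authenticated so `gcloud auth print-access-token` can mint a bearer token:

```shell
# POST the JSON payload to the endpoint's predict method.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${REGION}/endpoints/${ENDPOINT_ID}:predict" \
  -d "@${INPUT_DATA_FILE}"
```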

I got the below response from a sample image of U.S. President Joe Biden wearing a mask:
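The response from an AutoML image classification endpoint has roughly the following shape. The confidence value matches the prediction described here; the label name and IDs are illustrative assumptions, since the actual labels come from your dataset:

```json
{
  "predictions": [
    {
      "ids": ["1234567890"],
      "displayNames": ["face_with_mask"],
      "confidences": [0.82]
    }
  ],
  "deployedModelId": "9876543210"
}
```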

As you can see, the model correctly classified the image as a face with a mask, with 82% confidence:

This model can now be used with any application that can invoke a REST endpoint.

In one of the upcoming tutorials, we will create a Vertex AI custom model to train a convolutional neural network to detect face masks. Stay tuned.

Feature image by Julius Silver from Pixabay.
