Cloud Services / Machine Learning / Serverless

Tutorial: Host a Serverless ML Inference API with AWS Lambda and Amazon EFS

5 Nov 2020 12:42pm, by

In this tutorial, I will walk you through hosting a PyTorch model on AWS Lambda backed by an Amazon EFS file system, with the function exposed through Amazon API Gateway. Assuming you followed the steps in the previous tutorial, you should have the EFS file system ready with the PyTorch modules and the ResNet model. We will attach that file system to a Lambda function to host the inference API.

This article is a part of the series on making AWS Lambda functions stateful with Amazon EFS (Part 1, Part 2, Part 3).

Prerequisite: IAM Role for AWS Lambda Function

Before we go ahead with the Lambda function, we need an IAM role in place. This role grants the Lambda function the permissions it needs to access the EFS file system and to create Elastic Network Interfaces within the VPC.

Choose the Lambda use case to create a role.

Add AWSLambdaVPCAccessExecutionRole and AmazonElasticFileSystemClientFullAccess policies to the role and save it.

Since the Lambda function is running in the context of a VPC, you need to configure a NAT Gateway to provide outbound connectivity. Refer to the documentation for the details.

Create the Lambda Function

Create a Lambda function with Python 3.8 runtime and the execution role set to the IAM role created in the previous step.

From the VPC section of the Lambda function, select the same VPC that was used during the creation of the EFS file system. Add the default security group which allows inbound and outbound traffic within the VPC.

Under the file system section, choose the same file system that was used in the previous part of the tutorial. Enter /mnt/ml as the local mount path.

Edit the basic settings section to increase the memory to 1024 MB and the timeout to 10 minutes.

Add the PYTHONPATH and MODEL_DIR environment variables to point the function to the EFS location. This ensures that the Lambda function can access the PyTorch libraries, the trained model, and the labels file. Don’t miss the trailing slash, as the code relies on it to access the directories.
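As a concrete illustration, the snippet below uses hypothetical values derived from the /mnt/ml mount path; your directory names depend on how you laid out the file system in the previous part:

```python
# Hypothetical values based on the /mnt/ml mount path configured above;
# in practice these are set as environment variables on the function.
PYTHONPATH = "/mnt/ml/lib/"
MODEL_DIR = "/mnt/ml/model/"

# The trailing slash matters because the function code concatenates
# file names directly onto the value:
model_path = MODEL_DIR + "model.pt"
labels_path = MODEL_DIR + "labels.txt"
```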

Paste the code snippet into the function code section and hit the deploy button. The same code is also available on GitHub.

Create a test event with the below configuration and trigger the function.
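Assuming the handler reads the image URL from the API Gateway querystring, a minimal test event can look like this (the event shape is an assumption; match it to what your function code expects):

```python
# Hypothetical test event mimicking an API Gateway proxy request;
# the "url" parameter carries the image to classify.
test_event = {
    "queryStringParameters": {
        "url": "https://i.postimg.cc/v8pmjrwf/dog.jpg"
    }
}
```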

Testing the function should result in the below output.

As you can see, the model is able to correctly classify the image of the dog.

Attach an API Gateway

It’s time to expose the function through an API Gateway. Add a trigger to the function with the following settings:

Send a cURL request to the inference API, passing the URL of the image as a querystring parameter.
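As an alternative to cURL, the same request can be made from Python. The endpoint below is a placeholder; substitute the invoke URL that API Gateway shows for your function's trigger:

```python
import urllib.parse
import urllib.request


def build_request_url(api_url, image_url):
    """Append the image URL as a querystring parameter."""
    return api_url + "?" + urllib.parse.urlencode({"url": image_url})


def classify(api_url, image_url):
    """Invoke the inference API and return the raw response body."""
    with urllib.request.urlopen(build_request_url(api_url, image_url)) as resp:
        return resp.read().decode()


# Placeholder endpoint; replace with your API Gateway invoke URL.
# print(classify("https://<api-id>.execute-api.<region>.amazonaws.com/default/inference",
#                "https://i.postimg.cc/v8pmjrwf/dog.jpg"))
```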

The first invocation will take longer due to the cold start, during which the function loads the PyTorch modules and the model from EFS. Subsequent calls will be faster.

Try the service with the dog and flower images hosted at the below URLs:

https://i.postimg.cc/v8pmjrwf/dog.jpg

https://i.postimg.cc/1RN54Y1n/flower.jpg

Congratulations! You have successfully hosted a PyTorch model on AWS Lambda to deliver a serverless machine learning API.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.

Amazon Web Services is a sponsor of The New Stack.

Feature image by Roman Kraft on Unsplash.
