TNS
VOXPOP
Will JavaScript type annotations kill TypeScript?
The creators of Svelte and Turbo 8 both recently dropped TS, saying that "it's not worth it".
Yes: If JavaScript gets type annotations then there's no reason for TypeScript to exist.
0%
No: TypeScript remains the best language for structuring large enterprise applications.
0%
TBD: The existing user base and its corporate open source owner mean that TypeScript isn’t likely to reach EOL without putting up a fight.
0%
I hope they both die. I mean, if you really need strong types in the browser then you could leverage WASM and use a real programming language.
0%
I don’t know and I don’t care.
0%
Cloud Services / Serverless / Storage

Turn AWS Lambda Functions Stateful with Amazon Elastic File System

This tutorial series covers all the aspects of using Amazon EFS with AWS Lambda to host the serverless machine learning API. Part 1: Get to know EFS.
Nov 2nd, 2020 10:55am by
Feature image by moren hsu on Unsplash.

Amazon Web Services’ Lambda is one of the first serverless platforms in the industry. Since its launch in 2014, AWS has added multiple features to make it one of the most mature Functions as a Service (FaaS) platforms. The platform supports various language runtimes, including Node.js, Python, Java, Ruby, C#, Go, and PowerShell. There is tight integration with mainstream AWS managed services that act as event sources to trigger Lambda functions.

Traditionally, serverless compute platforms and FaaS offerings such as AWS Lambda are associated with stateless functions. Since the functions are invoked and terminated based on events, there is no intrinsic persistence layer available. The state is always externalized by moving it to object storage, NoSQL database, in-memory database, or a relational database instance. It’s common to maintain state in Lambda functions by writing it to an object in an S3 bucket or a DynamoDB or RDS table.
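To illustrate externalizing state, a function might serialize a small piece of state to JSON and write it to an S3 object. The sketch below injects the `put_object` callable so it stays self-contained; in practice this would be a boto3 S3 client's `put_object` method, and the bucket and key names are placeholders:

```python
import json

def save_state(put_object, bucket, key, state):
    """Serialize `state` to JSON and hand it to `put_object`.

    In a real Lambda function, `put_object` would be a boto3 S3
    client's put_object method; it is injected here so the sketch
    has no cloud dependency.
    """
    body = json.dumps(state)
    put_object(Bucket=bucket, Key=key, Body=body)
    return body
```

The same pattern applies to DynamoDB or RDS: the handler stays stateless, and each invocation reads or writes its state through the external store.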

But certain use cases such as machine learning inference demand a new approach. Downloading a large model from an Amazon S3 bucket increases the startup time, which results in latency. Some functions require external libraries that may be too large. Though the concept of AWS Lambda Layers addresses this problem, there is a 50MB limit (zipped, for direct upload), which defeats the purpose. Layers are static once they are deployed, which means the contents can be changed only by deploying a new layer.

In June 2020, AWS added support for Amazon Elastic File System (EFS) for Lambda, enabling many exciting use cases.

This tutorial series covers all the aspects of using Amazon EFS with AWS Lambda to host the serverless machine learning API.

What Is Amazon EFS?

Amazon Elastic File System (EFS) provides a managed elastic NFS file system for AWS services and on-premises resources. It can scale to petabytes without disrupting applications, growing and shrinking automatically as files are added and removed, eliminating the need to provision and manage capacity to accommodate growth.

Since EFS uses NFSv4, an industry-standard shared file system protocol, the file system can be easily attached to EC2 instances running Linux.

Amazon EFS exposes well-known access points that can be configured per application. EFS access points represent a flexible way to manage application access in NFS environments with increased scalability, security, and ease of use. One EFS file system can have multiple access points. Each access point can be configured with permissions associated with a POSIX-compliant user ID and group ID. Combined with IAM, EFS file systems can have fine-grained security and access control.

It’s important to understand that Amazon EFS is available only within a VPC. Only those consumers within the same VPC can access the EFS file system. On-premises servers can mount EFS shares only after establishing connectivity through AWS Direct Connect or AWS VPN.

Accessing Amazon EFS Filesystem from AWS Lambda

When an EFS file system is attached to an AWS Lambda function, it can access existing data and store data in it. This approach makes it possible to populate the filesystem with the dependencies and additional files that become available to all the Lambda instances.

The prerequisite for an AWS Lambda function to access an EFS file system is that the function must be in the same VPC as the file system. It should also have explicit permission to access the file system and create Elastic Network Interfaces (ENI) for the subnets of the VPC. Once these conditions are met, a Lambda function can read and write to the EFS file system.
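In infrastructure-as-code terms, these requirements map to two properties on the function resource: `VpcConfig` and `FileSystemConfigs`. A hypothetical AWS SAM fragment might look like the following (the resource name, security group, subnet, and access point ARN are all placeholders):

```yaml
InferenceFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.8
    VpcConfig:
      SecurityGroupIds: [sg-0123456789abcdef0]    # placeholder
      SubnetIds: [subnet-0123456789abcdef0]       # placeholder
    FileSystemConfigs:
      - Arn: arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0
        LocalMountPath: /mnt/ml                   # must begin with /mnt/
```

Note that the function mounts the file system through an EFS access point, not the file system ARN directly, and the local mount path must begin with `/mnt/`.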

Populating Content in EFS through an Amazon EC2 Instance

The easiest way to populate the EFS file system accessed by a Lambda function is by mounting it to an EC2 instance. Following standard NFS conventions, an EFS file system is typically mounted under the /mnt directory.

When launching an EC2 instance from the AWS Console, there is an option to mount an existing EFS file system. The wizard automatically adds the appropriate user data script to permanently mount the file system by adding an entry in /etc/fstab.
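The resulting /etc/fstab entry typically looks like the following (the file system ID and region are placeholders; the mount options follow the standard EFS recommendations for NFSv4.1):

```
fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0
```

The `_netdev` option tells the OS to wait for networking before mounting, which matters because EFS is reached over the VPC network rather than a local block device.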

Use Case: Hosting Serverless ML Inference API on AWS Lambda

The powerful combination of EFS and Lambda functions can be used to host a deep learning inference API in serverless mode. Since the size of a TensorFlow or PyTorch model may exceed the size limits of Lambda layers and the /tmp directory, EFS comes in handy for storing the models.

The EFS storage backend for Lambda can also hold all the dependencies such as OpenCV or PIL, which are not only large but also take time to install. A Lambda Python function can be pointed to an existing directory through the PYTHONPATH environment variable. The same file system will also have the fully trained model stored in a separate directory, which is loaded by the function.
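To illustrate, setting `PYTHONPATH=/mnt/ml/lib` on the function has the same effect as prepending that directory to `sys.path` at startup, which is all the Python runtime needs to import modules installed on the EFS mount. A minimal sketch (the mount path is a placeholder):

```python
import sys

def configure_paths(lib_path):
    """Make modules installed on the EFS mount importable.

    Equivalent to setting PYTHONPATH=<lib_path> on the Lambda
    function; `lib_path` would be something like "/mnt/ml/lib".
    """
    if lib_path not in sys.path:
        sys.path.insert(0, lib_path)
    return sys.path[0]
```

After this runs, `import cv2` or `import PIL` resolves against the packages pip-installed onto the file system rather than the function's deployment package.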

To populate the EFS file system with the Python modules and pre-trained model, we can use an Amazon EC2 instance or even a SageMaker Notebook. Both options give us the ability to mount the file system and add the dependencies either through a Python virtual environment or through the pip installer.

The below workflow highlights the steps involved in this approach:

  1. Create an EFS file system in an existing VPC
  2. Launch an EC2 instance in the same VPC and mount the EFS
  3. Populate the EFS file system with Python modules and a PyTorch model
  4. Create an AWS Lambda Python function in the same VPC
  5. Add IAM roles for accessing EFS and creating network interfaces in VPC
  6. Attach the same EFS file system used in the EC2 instance
  7. Add environment variables to point the Python runtime to existing modules in EFS
  8. Attach an API Gateway to expose the function as an HTTP API
  9. Configure VPC with a NAT Gateway to allow outbound traffic to the internet (Optional)
  10. Invoke the serverless API
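A handler backing the final step would typically load the model from EFS lazily and cache it for the lifetime of the warm container, so only the first invocation pays the load cost. A sketch under stated assumptions: `loader` stands in for whatever deserializes the model (e.g. `torch.load` for PyTorch), and the model path is hypothetical.

```python
MODEL_PATH = "/mnt/ml/models/model.pt"  # hypothetical path on the EFS mount

_model = None  # cached across invocations within a warm Lambda container


def get_model(loader, path=MODEL_PATH):
    """Load the model from EFS once per container.

    `loader` is whatever deserializes the model, e.g. torch.load
    for a PyTorch model stored on the file system; it is passed in
    here so the sketch has no framework dependency.
    """
    global _model
    if _model is None:
        _model = loader(path)
    return _model
```

Because the module-level cache survives between invocations of a warm container, repeated API calls reuse the loaded model instead of re-reading it from the file system.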

This week The New Stack will feature a series of upcoming tutorials on this subject, where I will walk you through all the steps involved in hosting serverless machine learning inference API in AWS Lambda. Check in tomorrow for the next installment!

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.

TNS owner Insight Partners is an investor in: The New Stack.