
Tutorial: Configure NVIDIA Jetson Nano as an AI Testbed

In this tutorial, we will walk you through the steps involved in configuring Jetson Nano as an artificial intelligence testbed for inference. You will learn how to install, configure, and use TensorFlow, OpenCV, and TensorRT at the edge.
Jul 12th, 2019 11:14am by Janakiram MSV
Feature image by RitaE from Pixabay.

In the previous installment of this tutorial series on the NVIDIA Jetson Nano developer kit, I provided an overview of this powerful edge computing device. In this installment, I will walk through the steps involved in configuring Jetson Nano as an artificial intelligence testbed for inference. You will learn how to install, configure, and use TensorFlow, OpenCV, and TensorRT at the edge.

Recommended Accessories for Jetson Nano

To get the best out of the device, you need an external 5V 4A power supply connected to the barrel jack. The default micro-USB input is simply not enough to drive the GPU and attached peripherals such as a USB camera.

To force the board to draw power from the external adapter, you need to place a jumper on J48, which is located next to the camera interface on the board.

It is highly recommended that you use a 32GB microSD card with Jetson Nano. This provides enough space for the swap file, the required software, and the downloaded models.

Finally, use a compatible USB webcam for optimal performance. I use the Logitech C270 webcam, but other models with higher resolution may also work with the Nano.

Prepare the SD Card

Download the latest JetPack SDK image for the Nano from NVIDIA and flash it to the microSD card. The image contains the OS and essential runtime components such as the GPU drivers, CUDA Toolkit, cuDNN library, TensorRT libraries, and other dependencies.

You may want to use BalenaEtcher to flash the image to the SD card.

First Boot and Configuration

After you boot the device from the SD card and complete the initial Ubuntu 18.04 setup, you need to do two things: add swap memory and maximize the clock speed of the processor.

Run the script below to add a 2GB swap file.
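A minimal sketch of one way to do this with standard Linux tools follows; the /swapfile location and the persistence entry in /etc/fstab are illustrative choices.

# Create a 2GB swap file, lock down its permissions, and enable it
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Optionally make the swap file persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab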


Next, we will lock Jetson Nano at its maximum frequency and power mode by running the following commands:
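On JetPack releases for the Nano, the commands below are the usual way to do this; mode 0 corresponds to the 10W MAXN profile, but confirm the mode numbers against the documentation for your JetPack version.

# Select the maximum (10W) power mode
sudo nvpmodel -m 0

# Lock the CPU and GPU clocks at their highest frequencies
sudo jetson_clocks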

Install Deep Learning Frameworks and Libraries

Now is the time to install TensorFlow, Keras, NumPy, Jupyter, Matplotlib, and Pillow. Let’s start with the dependencies.
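The package list below is a representative set of build dependencies for the NVIDIA TensorFlow wheel and Pillow on Ubuntu 18.04; treat it as a starting point rather than an exhaustive list.

sudo apt-get update
sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev python3-dev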


We will now point the default Python executable to Python 3. Since we are going to install most of the binaries within the home directory (~/.local/bin), we will add that location to the PATH variable.
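One way to do both, sketched below, uses update-alternatives for the interpreter and a PATH entry in ~/.bashrc; a shell alias works just as well for the first step.

# Make the python command resolve to python3
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1

# Add ~/.local/bin (where pip --user places executables) to PATH
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc
source ~/.bashrc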


Let’s install pip to manage Python modules.
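On Ubuntu 18.04 the simplest route is the distribution package, optionally followed by a per-user upgrade.

sudo apt-get install -y python3-pip
pip3 install --user --upgrade pip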


It’s time to go ahead and install the modules. Note that we are using an optimized build of TensorFlow officially available from NVIDIA. Other modules such as Keras and Matplotlib are the standard builds from the community.

Since we are using the --user switch with pip, all the Python modules are installed locally within the home directory of the user. This keeps the configuration clean and simple.
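A sketch of the installation is shown below. The extra index URL assumes JetPack 4.2; check NVIDIA's TensorFlow-for-Jetson instructions for the index URL and version pin that match your JetPack release.

# NVIDIA's GPU-accelerated TensorFlow build for Jetson (index URL assumes JetPack 4.2)
pip3 install --user --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu

# Standard community builds of the remaining modules
pip3 install --user keras numpy jupyter matplotlib pillow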


Verify that the modules are installed successfully by importing them in Python.
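A quick check from the shell looks like this; each command should print a version number without raising an ImportError.

python3 -c "import tensorflow as tf; print(tf.__version__)"
python3 -c "import keras; print(keras.__version__)"
python3 -c "import numpy, matplotlib, PIL; print(numpy.__version__)"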


Install the JetCam Python Module

JetCam is an official open source library from NVIDIA that provides an easy-to-use Python camera interface for Jetson. It works with a variety of USB and CSI cameras through Jetson’s Accelerated GStreamer plugins. What I like about JetCam is its simple API that integrates with Jupyter Notebook for visualizing camera feeds.

This module will come in handy for future walkthroughs in the Jetson Nano series.
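A sketch of the usual installation from source; the repository lives under the NVIDIA-AI-IOT organization on GitHub.

git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
sudo python3 setup.py install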

Build and Link OpenCV4

OpenCV acts as an imaging runtime for capturing, processing, and manipulating images and videos. Though JetPack ships with OpenCV, the bundled build is not optimized for the GPU and doesn’t exploit the device’s acceleration capabilities.

We will build OpenCV from source so that it is optimized for the Jetson Nano.

Let’s use a handy Bash script from NVIDIA to build and link OpenCV4.
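The repository and script name below are assumptions for illustration; check NVIDIA's Jetson community resources for the current version of the script. The build takes a few hours and benefits from the swap file added earlier.

# Repository and script name are illustrative
git clone https://github.com/AastaNV/JEP
cd JEP/script
mkdir ~/opencv
./install_opencv4.0.0_Nano.sh ~/opencv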


Point the PYTHONPATH variable to the OpenCV installation directory.
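The exact path depends on where the build installed the Python bindings; /usr/local/lib/python3.6/site-packages below is an assumption, so adjust it to match your build output.

echo 'export PYTHONPATH=/usr/local/lib/python3.6/site-packages:$PYTHONPATH' >> ~/.bashrc
source ~/.bashrc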


Verify the installation of OpenCV by loading the module.
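A one-liner from the shell is enough; it should print the 4.x version string of the freshly built library.

python3 -c "import cv2; print(cv2.__version__)"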


Install the Inferencing Engine on Jetson Nano

Finally, we will clone the official inference engine repo and build the samples on the device. These samples are useful for learning TensorRT, NVIDIA’s inference runtime for C++ and Python.
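The repo in question is NVIDIA's jetson-inference project; a sketch of the clone-and-build flow is shown below. The build step downloads pretrained models, so expect it to take a while.

sudo apt-get install -y git cmake
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build
cd build
cmake ../
make -j$(nproc)
sudo make install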



Feel free to explore the samples. To run the classification demo, navigate to the ~/jetson-inference/build/aarch64/bin folder and run the commands below.
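As an example, the imagenet-console sample classifies a single image; the binary and sample image names below match the jetson-inference examples from that era and may differ in newer releases.

cd ~/jetson-inference/build/aarch64/bin
# Classify a bundled sample image and write an annotated copy
./imagenet-console orange_0.jpg output_orange.jpg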


 

In the upcoming tutorials in this series, I plan to cover the topics of converting TensorFlow and PyTorch models to TensorRT, native inferencing with TensorRT, on-device transfer learning at the edge and more. Stay tuned.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.
