Tutorial: Configure NVIDIA Jetson Nano as an AI Testbed
In the last part of this tutorial series on the NVIDIA Jetson Nano development kit, I provided an overview of this powerful edge computing device. In the current installment, I will walk through the steps involved in configuring Jetson Nano as an artificial intelligence testbed for inference. You will learn how to install, configure, and use TensorFlow, OpenCV, and TensorRT at the edge.
Recommended Accessories for Jetson Nano
To get the best out of the device, you need an external power supply rated at 5V/4A, connected to the power barrel jack. The default micro-USB input is simply not enough to drive the GPU and attached peripherals such as a USB camera.
To force the board to draw power from the external adapter, you need to place a jumper on J48, which is located next to the camera interface on the board.
It is highly recommended that you use a 32GB micro SD card with the Jetson Nano. This provides enough room for the swap file, the required software, and the downloaded models.
Finally, use a compatible USB webcam for optimal performance. I use the Logitech C270 webcam, but other models with higher resolution may also work with the Nano.
Prepare the SD Card
Download the latest JetPack image for the Nano from NVIDIA and flash it to the micro SD card. JetPack contains the OS along with essential runtime components such as the GPU drivers, CUDA Toolkit, cuDNN library, TensorRT libraries, and other dependencies.
You may want to use BalenaEtcher to flash the image to the SD card.
First Boot and Configuration
After you boot the device from the SD card and complete the initial Ubuntu 18.04 setup, there are two things to do: add swap memory and maximize the clock speed of the processor.
Run the script below to add a 2GB swap file.
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo swapon --show
sudo cp /etc/fstab /etc/fstab.bak
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Next, we will lock Jetson Nano at its maximum frequency and power mode by running the following commands:
sudo jetson_clocks
sudo nvpmodel -m 0
Install Deep Learning Frameworks and Libraries
Now is the time to install TensorFlow, Keras, NumPy, Jupyter, Matplotlib, and Pillow. Let’s start with the dependencies first.
sudo apt install -y git \
    cmake \
    libatlas-base-dev \
    gfortran \
    python3-dev \
    python3-pip \
    libhdf5-serial-dev \
    hdf5-tools
We will now point the default Python executable to Python 3. Since we are going to install most of the binaries within the home directory (~/.local/bin), we will also add that location to the PATH variable.
echo 'export PATH=$PATH:$HOME/.local/bin' >> ~/.bashrc
echo 'alias python=python3' >> ~/.bashrc
echo 'alias pip=pip3' >> ~/.bashrc
source ~/.bashrc
Let’s install pip to manage Python modules.
cd ~
wget https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py
rm get-pip.py
It’s time to go ahead and install the modules. Note that we are using an optimized build of TensorFlow officially available from NVIDIA. Other modules such as Keras and Matplotlib are the standard builds from the community.
Since we are using the --user switch with pip, all the Python modules are installed locally within the home directory of the user. This keeps the configuration clean and simple.
pip install -U pip setuptools --user
pip install --user numpy
pip install --user --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3
pip install --user keras
pip install --user jupyter
pip install --user pillow
pip install --user matplotlib
Verify that the modules are installed successfully by importing them in Python.
python -c 'import numpy; print(numpy.__version__)'
python -c 'import tensorflow; print(tensorflow.__version__)'
python -c 'import keras; print(keras.__version__)'
python -c 'import jupyter; print(jupyter.__version__)'
python -c 'import PIL; print(PIL.__version__)'
python -c 'import matplotlib; print(matplotlib.__version__)'
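If you also want to confirm that this is the GPU-accelerated build of TensorFlow rather than a CPU-only one, the optional one-liner below should print True on the Nano (TensorFlow will log some device initialization messages along the way):
python -c 'import tensorflow as tf; print(tf.test.is_gpu_available())'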
Install the JetCam Python Module
JetCam is an official open source library from NVIDIA that provides an easy-to-use Python camera interface for Jetson. It works with a variety of USB and CSI cameras through Jetson's accelerated GStreamer plugins. What I like about JetCam is its simple API, which integrates with Jupyter Notebook for visualizing camera feeds.
This module will come in handy in future walkthroughs in the Jetson Nano series.
git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
pip install ./ --user
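As a quick sanity check before moving on, the minimal Python sketch below grabs a single frame from an attached USB webcam through JetCam. It assumes the webcam shows up as /dev/video0; adjust capture_device if yours enumerates differently.
# A minimal sketch: read one frame from the USB webcam via JetCam
# Assumes the camera is /dev/video0 (capture_device=0)
from jetcam.usb_camera import USBCamera

camera = USBCamera(capture_device=0)
image = camera.read()      # returns a NumPy array in BGR format
print(image.shape)         # e.g. (224, 224, 3) at JetCam's default resolution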
Build and Link OpenCV4
OpenCV acts as an imaging runtime for capturing, processing, and manipulating images and videos. Though JetPack comes with OpenCV, it is not optimized for the GPU and doesn’t exploit the acceleration capabilities.
We will instead build OpenCV from source, which will be highly optimized for the Jetson Nano.
Let's use a handy Bash script from NVIDIA to build and link OpenCV 4.
wget https://raw.githubusercontent.com/AastaNV/JEP/master/script/install_opencv4.0.0_Nano.sh
bash install_opencv4.0.0_Nano.sh $HOME/.local
Point the PYTHONPATH variable to the OpenCV installation directory.
export PYTHONPATH="$PYTHONPATH:/usr/local/python/cv2/python-3.6/"
echo 'export PYTHONPATH=$PYTHONPATH:/usr/local/python/cv2/python-3.6/' >> ~/.bashrc
source ~/.bashrc
Verify the installation of OpenCV by loading the module.
python -c 'import cv2; print(cv2.__version__)'
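To make sure Python is picking up the freshly built OpenCV rather than the stock copy bundled with JetPack, you can also inspect the build information; if the custom build went through, CUDA should show up as enabled. A small sketch:
# A small sketch: check whether this OpenCV build was compiled with CUDA support
import cv2

print(cv2.__version__)
for line in cv2.getBuildInformation().splitlines():
    if "CUDA" in line:
        print(line.strip())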
Install the Inferencing Engine on Jetson Nano
Finally, we will clone the official jetson-inference repo and build the samples on the device. These samples are useful for learning TensorRT, an inference runtime with C++ and Python APIs.
sudo apt-get install git cmake
cd ~
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference
git submodule update --init
cd ~/jetson-inference
mkdir build
cd build
cmake ../
make
sudo make install
Feel free to explore the samples. To run the classification demo, navigate to the ~/jetson-inference/build/aarch64/bin folder and run the commands below.
cd ~/jetson-inference/build/aarch64/bin
./imagenet-console --network=googlenet orange_0.jpg output_0.jpg
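The build also installs Python bindings (jetson.inference and jetson.utils) when they are enabled during the cmake step, so you can run the same GoogLeNet classification from a few lines of Python. The sketch below follows the project's legacy Python API from this JetPack 4.2 era; exact function signatures can vary between jetson-inference releases.
# A sketch of image classification with the jetson-inference Python bindings
# Assumes the bindings were built and orange_0.jpg is in the current directory
import jetson.inference
import jetson.utils

net = jetson.inference.imageNet("googlenet")              # TensorRT-optimized GoogLeNet
img, width, height = jetson.utils.loadImageRGBA("orange_0.jpg")
class_idx, confidence = net.Classify(img, width, height)
print("class: {}, confidence: {:.2f}".format(net.GetClassDesc(class_idx), confidence))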
In the upcoming tutorials in this series, I plan to cover the topics of converting TensorFlow and PyTorch models to TensorRT, native inferencing with TensorRT, on-device transfer learning at the edge and more. Stay tuned.