Tutorial: Real-Time Object Detection with DeepStream on Nvidia Jetson AGX Orin

Last month at GTC, Nvidia unveiled its next-generation edge computing device, the Jetson AGX Orin. Courtesy of Nvidia, I was fortunate enough to get a Jetson AGX Orin Developer Kit to evaluate and experiment with.
The Jetson AGX Orin Developer Kit has everything you need to run AI inference at the edge with ultra-low latency and high throughput. As the successor to the Jetson AGX Xavier, previously the most powerful module in the Jetson lineup, the AGX Orin packs a punch.
Below are the specifications of the Jetson AGX Orin compute module:
The developer kit comes with a carrier board that makes it easy to connect various peripherals.
The Jetson AGX Orin Developer Kit comes with a preview of JetPack SDK 5.0, which is based on the Ubuntu 20.04 root filesystem and Linux Kernel 5.10. It comes preloaded with CUDA 11.4, cuDNN 8.3.2, TensorRT 8.4.0, and DeepStream 6.0.
This tutorial will walk you through the steps involved in performing real-time object detection with DeepStream SDK running on Jetson AGX Orin.
Step 1 – Install TensorFlow on JetPack 5.0
Since we use a pre-trained TensorFlow model, let’s get the runtime installed.
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v50 'tensorflow<2'
This installs TensorFlow 1.15, which works best for this tutorial. The --extra-index-url flag points pip to a wheel optimized for JetPack 5.0.
Check if the GPU is accessible from TensorFlow.
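A quick way to do this from the command line (a minimal check, assuming the TensorFlow 1.x API) is:
python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"
If the GPU is accessible, the last line should print True.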
Step 2 – Download Pre-trained TensorFlow Inception Model
We will use the TensorFlow SSD Inception V2 model trained on the COCO dataset.
wget -qO- http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz | tar xvz -C /tmp
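The archive extracts into /tmp. To confirm that the frozen graph needed in the next step is in place (a quick sanity check, assuming the extracted directory name matches the tarball):
ls -la /tmp/ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb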
Step 3 – Convert the TensorFlow Model to TensorRT
This step converts the TensorFlow frozen graph into a serialized UFF file, which TensorRT can parse to build an optimized inference engine.
python3 /usr/lib/python3.8/dist-packages/uff/bin/convert_to_uff.py \
  /tmp/ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb -O NMS \
  -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
  -o /tmp/sample_ssd_relu6.uff
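The objectDetector_SSD sample expects the UFF file alongside its configuration files, so copy it over (assuming the sample's inference config references sample_ssd_relu6.uff by a relative path, as it does in the stock DeepStream sample):
sudo cp /tmp/sample_ssd_relu6.uff /opt/nvidia/deepstream/deepstream/sources/objectDetector_SSD/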
Step 4 – Compile the Custom Object Detector Sample Application
The DeepStream SDK comes with a sample object detection application that can be integrated with various models. We will now compile its custom inference library so the application can use our SSD Inception V2 model.
cd /opt/nvidia/deepstream/deepstream/sources/objectDetector_SSD
export CUDA_VER=11.4
export LD_LIBRARY_PATH=/usr/local/cuda
sudo -E make -C nvdsinfer_custom_impl_ssd
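If the build succeeds, a custom bounding-box parser library should appear in the nvdsinfer_custom_impl_ssd directory (the exact filename may vary by DeepStream version):
ls nvdsinfer_custom_impl_ssd/*.so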
Step 5 – Edit DeepStream Application Configuration File
Open /opt/nvidia/deepstream/deepstream/sources/objectDetector_SSD/deepstream_app_config_ssd.txt, save a copy as deepstream_app_config_ssd_USB.txt in the same directory, and replace the [source0] section with the below contents:
[source0]
enable=1
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
This configures the USB webcam as the input source (type=1 is a V4L2 camera source, and camera-v4l2-dev-node=0 maps to /dev/video0). Once you connect the camera, make sure it is visible to the OS.
ls -la /dev/video*
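If the v4l-utils package is installed (sudo apt install v4l-utils), you can also confirm that the camera supports the resolution and frame rate configured above, 640x480 at 30 fps in this example:
v4l2-ctl -d /dev/video0 --list-formats-ext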
Step 6 – Run the DeepStream Inference Pipeline
Finally, run the inference to perform object detection.
deepstream-app -c deepstream_app_config_ssd_USB.txt
We can increase the inference performance and the number of frames per second by switching from the USB webcam to a camera connected to the built-in Camera Serial Interface (CSI) of the Jetson AGX Orin Developer Kit.
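For reference, switching to a CSI camera mostly comes down to changing the [source0] section again. Below is a minimal sketch, assuming a CSI sensor at index 0; the width, height, and frame rate are placeholders you should match to your sensor's supported modes:
[source0]
enable=1
type=5
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-csi-sensor-id=0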