
Tutorial: Perform Object Detection at the Edge with AWS IoT Greengrass

May 24th, 2019 3:00am

AWS IoT Greengrass is Amazon Web Services’ edge computing platform. For an overview of this emerging technology, refer to my previous article.

In this second part of the series on AWS IoT Greengrass, I will walk you through the scenario where we identify the vehicle type through machine learning inferencing and change the color of a smart bulb. These devices are managed and controlled by AWS IoT Greengrass.

This demo scenario is based on the below workflow:

  • A smart camera (Horned Sungem) that detects objects including vehicle types
  • A smart bulb (Philips Hue) that changes the color based on the vehicle type
  • Greengrass Core running on a Raspberry Pi 3 Model B+
  • The smart camera and bulb are connected to separate Raspberry Pi Zero W devices
  • A Lambda function to control the color of the bulb
  • AWS IoT Greengrass Subscriptions to enable the flow of messages

Here is the list of components used for this project:

  • 1 X Raspberry Pi 3 Model B+
  • 2 X Raspberry Pi Zero W
  • 1 X Horned Sungem Camera
  • 1 X Philips Hue Bridge
  • 1 X Philips Hue Go Bulb

The source code for this tutorial is available on GitHub.

Configuring the Greengrass Group

First, we create a Greengrass Group that has one core and two devices. A Raspberry Pi 3 Model B+ is configured as the core that runs the Greengrass runtime. This device is responsible for providing the messaging infrastructure and local compute capabilities to the leaf devices: the bulb and the camera.

After registering the core device, we update the config.json file located at /greengrass/config and restart the Raspberry Pi, then verify that the Greengrass daemon is running.

Within the Greengrass group, we register two devices: the bulb and the camera.

The device identities will be used to configure the Raspberry Pi Zero W devices responsible for controlling the smart bulb and smart camera.

One of the Raspberry Pi Zero W devices is configured to talk to the Philips Hue Go bulb via the bridge.

The other Raspberry Pi Zero W is configured to talk to the Horned Sungem camera that performs object detection.

All the devices are connected to the local WiFi router.

Connecting and Configuring the Horned Sungem Smart Camera

The Horned Sungem camera comes with an embedded Intel Movidius Vision Processing Unit (VPU) that accelerates the inference. The SDK provides easy access to a variety of pre-trained models for object detection, image classification, and face detection.

We connect the camera to a Raspberry Pi Zero W device that runs the Horned Sungem SDK. When an object is detected by the camera, it simply publishes a message to the MQTT topic called camera/infer.

The below code snippet shows how we treat the camera like a typical sensor that publishes telemetry to an AWS IoT MQTT topic.
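A rough sketch of that publisher loop is shown below, using the AWSIoTPythonSDK MQTT client that leaf devices typically use to talk to a Greengrass core. The endpoint, certificate paths, and the detect() call standing in for the Horned Sungem SDK inference API are all placeholders; refer to the repository for the actual code.

```python
import json
import time

try:
    # Standard AWS IoT device SDK; available on the Pi Zero W, not required
    # for testing the payload logic off-device.
    from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient
except ImportError:
    AWSIoTMQTTClient = None

TOPIC = "camera/infer"


def build_payload(vehicle_type, confidence):
    """Shape a detection result as the JSON message published to camera/infer."""
    return json.dumps({"class": vehicle_type, "confidence": confidence})


def main():
    # Placeholder endpoint and credentials registered for the camera device.
    client = AWSIoTMQTTClient("camera")
    client.configureEndpoint("<greengrass-core-endpoint>", 8883)
    client.configureCredentials("root-ca.pem", "camera.private.key", "camera.cert.pem")
    client.connect()
    while True:
        # detect() stands in for the Horned Sungem SDK inference call,
        # returning a vehicle type and a confidence score.
        vehicle_type, confidence = detect()
        if vehicle_type is not None:
            client.publish(TOPIC, build_payload(vehicle_type, confidence), 0)
        time.sleep(1)
```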

The complete source code for the camera inference is available on GitHub.

Connecting and Controlling Philips Hue Smart Bulb

Philips Hue uses a bridge, based on the Zigbee protocol, for controlling the bulbs. Since the bridge exposes a REST API, we can control it from any device that can make an HTTP call.

One of the Raspberry Pi devices runs the code to change the color of the bulb based on the message sent to a topic called bulb/color.

This device doesn’t know the actual publisher sending the message. Its job is to subscribe to the topic and change the color based on the published message.

The below code snippet shows the callback method that is invoked each time a message is published to the topic.
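A minimal sketch of such a callback follows, assuming the AWSIoTPythonSDK callback signature and the community phue library for talking to the bridge. The bridge IP, light ID, and hue values are illustrative; see the repository for the actual code.

```python
import json

try:
    from phue import Bridge  # community Python client for the Hue bridge REST API
except ImportError:
    Bridge = None  # allows testing the message parsing off-device

# Approximate Hue "hue" values for the colors used in this demo
HUE_VALUES = {"blue": 46920, "green": 25500}


def parse_color(payload):
    """Extract the requested color from a bulb/color message."""
    return json.loads(payload).get("color")


def on_message(client, userdata, message):
    # Callback registered with the MQTT subscription on bulb/color.
    color = parse_color(message.payload)
    if color in HUE_VALUES:
        bridge = Bridge("<bridge-ip>")  # placeholder bridge address
        bridge.set_light(1, {"on": True, "hue": HUE_VALUES[color], "sat": 254})
```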

AWS Lambda: Connecting the Camera with the Bulb

In the current setup, the camera is publishing to the topic camera/infer and the bulb is waiting for messages on bulb/color. These two are independent topics that need to be connected through business logic. In our case, we want to turn the bulb blue when the detected vehicle type is a bus and green when the vehicle is a car.

This logic is written as a Lambda function and deployed to AWS IoT Greengrass.
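The handler can be sketched as below. The "class" field name in the incoming payload and the vehicle-to-color mapping are assumptions for illustration; the greengrasssdk client is only available inside the Greengrass runtime.

```python
import json

try:
    import greengrasssdk  # present inside the Greengrass Core runtime
    client = greengrasssdk.client("iot-data")
except ImportError:
    client = None  # allows the mapping logic to be tested off-device

# Mapping described in the article: bus -> blue, car -> green
COLOR_FOR_VEHICLE = {"bus": "blue", "car": "green"}


def pick_color(vehicle_type):
    """Return the bulb color for a detected vehicle type, or None."""
    return COLOR_FOR_VEHICLE.get(vehicle_type)


def function_handler(event, context):
    # Greengrass invokes this for each message arriving on camera/infer.
    color = pick_color(event.get("class"))
    if color and client is not None:
        client.publish(topic="bulb/color", payload=json.dumps({"color": color}))
```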

The function is published as a normal Lambda function but gets pushed to the edge through the AWS Console.

Finally, we need to connect the dots between the devices and the Lambda function. This is done through the Greengrass Subscription.

The camera publishes to the AWS IoT topic camera/infer, which is received by the Lambda function. The function then publishes a message back to the AWS IoT topic bulb/color.

This is defined as a set of subscriptions in AWS IoT Greengrass.
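The two routes can be illustrated with the shape used by the Greengrass subscription definition API; the ARNs below are placeholders for the identities created earlier.

```json
{
  "Subscriptions": [
    {
      "Id": "camera-to-function",
      "Source": "<camera device ARN>",
      "Subject": "camera/infer",
      "Target": "<Lambda function ARN>"
    },
    {
      "Id": "function-to-bulb",
      "Source": "<Lambda function ARN>",
      "Subject": "bulb/color",
      "Target": "<bulb device ARN>"
    }
  ]
}
```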

Once the entire configuration is done, we need to deploy it to the AWS IoT Greengrass Core.

AWS IoT Greengrass is a simple yet powerful platform to deploy applications at the edge. This scenario highlights how to perform object detection based on a smart camera managed by AWS IoT Greengrass.
