Cloud Services / Edge / IoT / Machine Learning

How AWS Panorama Accelerates Computer Vision at the Edge

11 Dec 2020 9:34am

At AWS re:Invent 2017, Amazon launched AWS DeepLens, a smart camera powered by an Intel Atom processor with integrated Intel graphics and AWS services such as Greengrass, SageMaker, IoT Core, and Lambda. As an AI enthusiast, I was fascinated by the design and architecture of DeepLens. As soon as I got back home from Vegas, I dissected the architecture and published an in-depth analysis at The New Stack.

Three years later, at re:Invent 2020, Amazon announced AWS Panorama, an industrial-grade, enterprise-ready version of AWS DeepLens built to help customers accelerate the deployment of computer vision applications. AWS Panorama can turn any existing Open Network Video Interface Forum (ONVIF)-compatible IP camera into a smart camera.

AWS Panorama has its roots in DeepLens, which was designed as a prototype to demonstrate how AI models trained in the cloud can be deployed at the edge. Amazon has worked closely with customers, original device manufacturers (ODM), silicon manufacturers designing AI accelerators, and systems integrators to deliver an end-to-end solution.

AWS Panorama has two components: a plug-and-play appliance and an SDK for building and deploying applications. Let’s take a closer look at the platform.

AWS Panorama Appliance

The AWS Panorama Appliance is a ruggedized hardware device that acts as a hub for multiple IP cameras, turning any camera that supports the ONVIF standard into a smart camera. Since the appliance is IP62-rated, meaning dustproof and water-resistant, it can be deployed in harsh environments.

The AWS Panorama Appliance comes with an HDMI port for connecting a monitor to view inference output, along with two Gigabit Ethernet ports. At roughly half a rack unit wide, two appliances can sit side by side in a single rack slot.

Though the appliance is slated for release next year, the AWS Panorama Appliance Developer Kit lets customers get started with the platform today. The developer kit’s hardware specifications closely resemble those of the final appliance.

Since computer vision at the edge needs hardware acceleration, the AWS Panorama Appliance is built on the NVIDIA Jetson Xavier system-on-module (SOM). With a 512-core Volta GPU with Tensor Cores, an 8-core ARM CPU, and 32GB of RAM, Jetson Xavier is among the most powerful edge hardware platforms on the market. Models trained in TensorFlow, PyTorch, and MXNet are converted into NVIDIA’s TensorRT format, which is optimized for inference. Building the appliance on Jetson Xavier was a smart choice: it delivers the power and performance needed to process multiple video streams concurrently while keeping inference fast.

The final version of the appliance may also support the Ambarella CV2x product line for AI acceleration. These SOMs are designed for accurate 3D environmental modeling and real-time neural network performance. The CV2x family of chips is used in advanced robotics and industrial applications, including autonomous robots. CVFlow is the software layer that optimizes neural networks for inference on CV2x chips.

To get started, you connect the AWS Panorama Appliance to the Internet and configure it through the AWS Console. A wizard walks you through the steps and finally generates a tarball containing certificates and registration metadata. As soon as a USB drive populated with the tarball is plugged in, the appliance automatically finishes the registration process. Behind the scenes, a local Greengrass instance talks to IoT Core to register the appliance as an IoT device, and an IAM role assigned to the device grants it permission to access Lambda functions and other AWS services such as CloudWatch and CloudTrail.
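The provisioning flow above can be sketched from the command line. Everything here is illustrative: the AWS Console generates the actual archive for you, and the file names below are hypothetical placeholders, not the real names Amazon uses.

```shell
# Hypothetical layout of the provisioning archive the AWS Console generates.
# File names are placeholders for illustration only.
mkdir -p usb-staging/certificates
echo "placeholder-device-certificate" > usb-staging/certificates/device.pem.crt
echo "placeholder-private-key"        > usb-staging/certificates/private.pem.key
echo '{"deviceName": "my-panorama-appliance"}' > usb-staging/registration.json

# Bundle the staging directory into the tarball copied to the USB drive.
tar -czf panorama-provisioning.tar.gz -C usb-staging .

# Inspect the archive before copying it to the drive.
tar -tzf panorama-provisioning.tar.gz
```

Once the drive is plugged into the appliance, the device reads the archive and completes registration on its own; no further interaction is needed at the edge.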

Amazon is including multiple pretrained ML models for personal protective equipment (PPE) detection, retail queue-length estimation, and crowd counting. Additional models trained in or registered with SageMaker can easily be pushed to the appliance.

To build applications and integrate them with ML models, developers need the AWS Panorama SDK.

AWS Panorama SDK

The AWS Panorama SDK is the platform’s software component for developing, testing, and deploying applications on the appliance. It supports both the NVIDIA Jetson Xavier and Ambarella CV2x SOMs.

Amazon is making the SDK available only to a closed community of device manufacturers and hardware partners; it is not open to the broader AWS developer community.

Through the SDK, developers can push applications to a fleet of Panorama Appliances deployed at the edge. A trained model stored in an S3 bucket is paired with a Lambda function; through the AWS Console, developers package the two together and deploy them to run at the edge.
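The model-plus-Lambda pattern described above can be sketched in plain Python. This is a mock, not the actual Panorama SDK API: the handler name, the frame format, the stand-in model, and the confidence threshold are all hypothetical, chosen only to show how a per-frame handler pairs inference output with application logic.

```python
# Illustrative mock of the model + Lambda pattern. None of these names
# come from the AWS Panorama SDK; they only sketch the structure of
# pairing a trained model with per-frame application code.

def mock_model(frame):
    """Stand-in for a TensorRT-optimized model: returns (label, score) pairs."""
    # A real model would run inference on the frame's pixels; here we
    # pretend every frame contains one confident person detection.
    return [("person", 0.91), ("background", 0.40)]

def lambda_handler(frame, threshold=0.5):
    """Application logic: keep only detections above the confidence threshold."""
    detections = mock_model(frame)
    return [label for label, score in detections if score >= threshold]

if __name__ == "__main__":
    # Each "frame" would come from an ONVIF camera stream on the appliance;
    # None is a placeholder since the mock model ignores pixel data.
    print(lambda_handler(frame=None))  # prints ['person']
```

In a real deployment, the model would live in S3 and the handler in Lambda, with the Panorama service packaging and shipping both to the appliance.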

Amazon has published sample applications and machine learning models for AWS Panorama on GitHub, complete with Jupyter Notebooks and Lambda functions.

Summary and Key Takeaways

AWS ML, IoT, and edge platforms have matured since the announcement of DeepLens in 2017. The AWS Panorama platform builds on the tight integration of these services and targets emerging computer vision use cases such as social distancing and mandatory mask compliance.

AWS Panorama is also one of the first industrial-grade computer vision platforms to come from a public cloud provider. It is an excellent choice for customers from the manufacturing, retail, healthcare, and logistics verticals looking for a plug-and-play computer vision platform.

Personally, I am a bit disappointed that AWS is not opening up the SDK to developers. There is no compatibility between DeepLens and AWS Panorama; AWS should have shipped a new SDK that upgrades DeepLens to bring it on par with the Panorama Appliance, even if it were restricted to a single USB camera with no ONVIF support.

Amazon Web Services is a sponsor of The New Stack.
