
A Closer Look at Microsoft Vision AI Kit


Last week, Microsoft and Qualcomm jointly released the Vision AI Dev Kit, a smart camera powered by Qualcomm’s AI accelerator chip and Microsoft’s Azure IoT Edge platform.

The Vision AI Dev Kit is a great device for learning how convolutional neural networks work. It is also a good starting point for exploring the concepts of Azure IoT Edge. This developer kit is very similar to AWS DeepLens: both devices are configured as edge devices, with the camera handling the image processing, and both run AI models to perform inference at the edge and send the output back to the cloud.

For the hardware specification and software configuration details, refer to my Forbes article. I also covered the architecture of Azure IoT Edge in one of my previous articles at The New Stack.

Compared to AWS DeepLens, the Microsoft Vision AI Dev Kit comes across as a more sophisticated and polished device, and the developer experience is much better.

In this tutorial, I will walk you through the steps involved in configuring and connecting the Microsoft Vision AI Dev Kit.

First Run: Connecting the Dev Kit to the Cloud

As soon as the device is unboxed and switched on, it broadcasts a Wi-Fi SSID that typically starts with MSIoT_XXXX. Connect your development machine to this SSID. Don’t be tempted to use your mobile phone to configure it; you need a full-blown browser to finish the setup.
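If your development machine runs Linux with NetworkManager, you can join the camera’s access point from the terminal; on macOS or Windows, just pick the SSID from the system Wi-Fi menu. This is only a sketch, and the SSID (and any Wi-Fi password printed on your unit) will vary:

# Rescan for nearby networks and join the camera's access point (replace MSIoT_XXXX with the SSID your unit broadcasts)
nmcli device wifi rescan
nmcli device wifi connect "MSIoT_XXXX"
# If your unit ships with a printed Wi-Fi password, append: password <printed-password>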

After connecting to the device’s Wi-Fi, visit http://setupaicamera.ms on your machine (note: the link works only after connecting to the camera’s Wi-Fi).

You can set a password for the Wi-Fi and also configure SSH access by adding a username and password.

In the next step, you connect the device to the internet via your existing Wi-Fi network.

In the next step, you will be shown a token used to authenticate with Azure. Copy the token and click next.

Complete the authentication process with Azure.

In the next two steps, we configure Azure resources associated with the camera.

Provide a name that’s used to register the camera with Azure IoT Hub and click next.
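Behind the scenes, this step creates a device identity in your IoT Hub. If you want to double-check from your development machine, a minimal sketch follows; it assumes the Azure CLI with the azure-iot extension is installed, and <your-iot-hub-name> is a placeholder for the hub you chose during setup:

# List the device identities registered in your IoT Hub; the camera's name should appear
az iot hub device-identity list --hub-name <your-iot-hub-name> --output table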

The device will start downloading the modules required by the Azure IoT Edge platform.

After a few minutes, we get a link to the camera through which we can access the web stream. This indicates that the setup was successful.

Visiting the above URL shows the camera feed in the browser window. Try placing objects in front of the camera to see object detection in action. The screenshot below shows the camera detecting a car.

Congratulations! You are all set to deploy custom AI models to the camera.

I SSHed into the camera to explore the platform further. The device runs a custom flavor of the Yocto Linux distribution.
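If you want to follow along, here is a minimal sketch of that session. The username and password are the SSH credentials created during setup, and <camera-ip-address> is a placeholder for the address your Wi-Fi router assigned to the device:

# Log in with the SSH credentials created during the setup wizard
ssh <username>@<camera-ip-address>
# Confirm the OS flavor (assumes the image exposes the standard os-release file)
cat /etc/os-release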

To run Azure IoT Edge modules, Microsoft and Qualcomm ported Docker Engine to Yocto. The container runtime is compatible with Docker API 1.40.
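Assuming the docker CLI is exposed on the device, you can check the API version the ported engine reports from the SSH session (elevated privileges may be required):

# Print the API version reported by the container runtime
docker version --format '{{.Server.APIVersion}}'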

The device runs four containers to deliver the out-of-the-box experience. The visionsamplemodule:1.1.3-arm32v7 container runs MobileNet SSD V1, while visionsamplemodule:webstream_0.0.13-arm32v7 runs a Node.js-based WebSocket application to stream the camera feed.

The remaining two containers are part of the core Azure IoT Edge runtime.
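A plain container listing over the SSH session should show all four: the two vision sample modules plus the IoT Edge runtime containers. Again, this sketch assumes the docker CLI is available and may need root privileges:

# List the running containers with their image tags and status
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'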

We can also list the modules by running the iotedge list command.
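The command is run on the device itself:

# Show the IoT Edge modules deployed to the device and their status
# (may require running as root, depending on how the image is configured)
iotedge list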

If you exec into the AIVisionDevKitGetStartedModule container, you can see the optimized MobileNet SSD model file in the models directory. This file is responsible for performing inference on the inbound camera feed. The Azure Portal also shows the same set of modules running on the device.
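A minimal way to poke at that module, assuming the container name matches the module name shown by iotedge list and that a shell is available inside the image:

# Open a shell inside the sample module's container
docker exec -it AIVisionDevKitGetStartedModule sh
# From inside the container, look for the MobileNet SSD model files
find / -iname '*mobilenet*' 2>/dev/null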

In an upcoming article, I will demonstrate how to deploy models trained using the CustomVision.AI AutoML service. Stay tuned!

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.
