
Intel OpenVINO Brings AI Inferencing to the Desktop and Edge

A look at Intel's OpenVINO toolkit for AI inferencing on the desktop and at the edge.
Oct 4th, 2019 10:18am

The cloud may be the best environment for training deep learning models, but inferencing typically happens on servers, desktops, mobile devices, and edge devices. Artificial intelligence (AI)-infused applications depend on a combination of hardware and software to accelerate the inferencing of deep learning models trained in the cloud.

Intel wants developers to rely on a standard, unified platform for AI inferencing irrespective of the environment. The OpenVINO Toolkit is a platform designed to accelerate AI inferencing on PCs, Macs, servers, and embedded devices. It also supports a variety of hardware accelerators that come in the form of high-end CPUs, integrated GPUs, Vision Processing Units (VPUs), and Field Programmable Gate Arrays (FPGAs). Developers benefit from the abstraction provided by the OpenVINO Toolkit's pluggable architecture.

OpenVINO stands for Open Visual Inference and Neural Network Optimization. Initially launched in 2018, the toolkit has become popular among developers and enterprises building next-generation computer vision-based applications.

OpenVINO Toolkit supports the following hardware:

  • Intel CPU
  • Intel Integrated Graphics
  • Intel FPGA
  • Intel Movidius Neural Compute Stick and Neural Compute Stick 2
  • Intel Vision Accelerator Design based on Myriad VPU

The software platform includes essential components that accelerate inferencing.

Deep Learning Model Optimizer

Model Optimizer is a cross-platform command-line tool that enables the transition of deep learning models from training to the deployment environment. It performs static model analysis to tune deep learning models for optimal execution on target devices.

Developers can bring a fully-trained deep learning model and optimize it for inferencing through the Model Optimizer.

 

The Model Optimizer supports a variety of mainstream deep learning frameworks, including TensorFlow, Caffe, Apache MXNet, Kaldi, and ONNX. Independent of the framework used to train the model, the optimizer always produces an Intermediate Representation (IR) made up of two files: an XML file that describes the network topology, and a BIN file that holds the weights and biases as binary data.
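As a rough sketch, the conversion step can be scripted. The example below assumes the default Linux install path of a 2019-era toolkit and a hypothetical TensorFlow frozen graph; exact paths and flags vary by release and by source framework.

```python
import subprocess
import sys

# Assumed install path for a 2019-era Linux setup; adjust to your environment.
MO_SCRIPT = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"

# Hypothetical input: a TensorFlow frozen graph. Other models may need extra
# flags (for example --input_shape or framework-specific options).
subprocess.run(
    [
        sys.executable, MO_SCRIPT,
        "--input_model", "frozen_inference_graph.pb",
        "--output_dir", "./ir",      # the .xml/.bin IR pair lands here
        "--data_type", "FP16",       # FP16 is typical when targeting VPUs
    ],
    check=True,
)
```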

Inference Engine

Developers using the OpenVINO Toolkit load the IR files generated by the Model Optimizer into the Inference Engine (IE) plugin specific to the target hardware. The IE plugin is responsible for offloading execution of the model described by the IR files onto the available accelerator. For example, by changing a single parameter of the IE plugin, developers can switch acceleration from an x86 CPU to a Myriad X VPU.

The Inference Engine is a C++ library with a set of C++ classes for running inference on input data (such as images) and retrieving the results. The library provides an API to read the Intermediate Representation, set the input and output formats, and execute the model on target devices.

The Inference Engine library is available as a binary on Linux (libinference_engine.so) and Windows (inference_engine.dll). The Python bindings for this library are available in preview.
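A minimal sketch of that flow using the preview Python bindings is shown below; the IR file names are placeholders, class and attribute names have shifted slightly between releases, and the MYRIAD device assumes a Myriad-based accelerator is attached.

```python
import cv2  # used only to read and resize the input image
from openvino.inference_engine import IECore, IENetwork

ie = IECore()

# Load the IR pair produced by the Model Optimizer (placeholder file names).
net = IENetwork(model="model.xml", weights="model.bin")

input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape

# Changing device_name (for example "CPU" -> "MYRIAD") is the one-parameter
# switch that moves acceleration from the x86 CPU to a Myriad X VPU.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Prepare one image in NCHW layout and run inference.
image = cv2.imread("input.jpg")
image = cv2.resize(image, (w, h)).transpose((2, 0, 1)).reshape(n, c, h, w)
result = exec_net.infer(inputs={input_blob: image})
print(result[output_blob].shape)
```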

Integrated OpenCV and OpenVX Environments

OpenCV is one of the most popular libraries for building computer vision-based applications. Starting with OpenCV 4.0, the library is fully integrated with the Intel OpenVINO Toolkit. The DNN module of OpenCV can delegate inferencing to one of the available accelerators. For example, by changing a couple of lines of code in a standard OpenCV-based inferencing application, developers can load IR files and run them on the Intel Myriad X VPU. Behind the scenes, OpenCV delegates execution to the Inference Engine, which performs the acceleration.
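Those couple of lines look roughly like the sketch below, which assumes placeholder IR file names and an attached Myriad device; the backend and target constants are part of OpenCV 4.x builds that include Inference Engine support.

```python
import cv2

# Load an IR pair produced by the Model Optimizer (placeholder file names).
net = cv2.dnn.readNet("model.xml", "model.bin")

# The two lines that redirect execution: route the DNN module through the
# Inference Engine backend and target the Myriad X VPU.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

frame = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(frame, size=(300, 300))  # input size depends on the model
net.setInput(blob)
out = net.forward()
print(out.shape)
```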

The tight integration between OpenCV and OpenVINO opens the door for traditional computer vision applications to take advantage of the latest AI accelerators.

OpenVX is a software development package for building and optimizing computer vision and image processing pipelines on Intel System-on-Chips (SoCs). It offers a set of optimized primitives for low-level image processing and common computer vision operations.

Tools, Samples, and Model Zoo

The OpenVINO Toolkit comes with multiple tools and samples to help developers learn the workflow.

Though most of the samples are written in C++, many of them can be easily ported to Python. Image classification, object detection, and neural style transfer are among the samples included in the toolkit.

The best thing Intel has done for developers is the Model Zoo, a collection of pre-trained models optimized for the OpenVINO Toolkit. Developers can download the XML and BIN file pairs and use them directly in their code.

The Model Zoo has over a dozen fully trained and optimized models covering face detection, emotion detection, head pose estimation, text detection, text recognition, and more.
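As one hedged example, the face-detection-adas-0001 model from the zoo can be dropped into the OpenCV path shown earlier; the input size and the SSD-style output layout below follow the model's documentation, so check the zoo entry before relying on them.

```python
import cv2

# IR pair downloaded from the Model Zoo (file locations are placeholders).
net = cv2.dnn.readNet("face-detection-adas-0001.xml", "face-detection-adas-0001.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)

frame = cv2.imread("people.jpg")
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, size=(672, 384))  # model's expected input size
net.setInput(blob)
detections = net.forward()  # SSD-style output, shape [1, 1, N, 7]

# Each row: [image_id, label, confidence, x_min, y_min, x_max, y_max]
for detection in detections.reshape(-1, 7):
    confidence = float(detection[2])
    if confidence < 0.5:
        continue
    x_min = int(detection[3] * w)
    y_min = int(detection[4] * h)
    x_max = int(detection[5] * w)
    y_max = int(detection[6] * h)
    cv2.rectangle(frame, (x_min, y_min), (x_max, y_max), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", frame)
```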

Summary

The Intel OpenVINO Toolkit helps the company gain market share and mindshare in AI inferencing. Tight integration with OpenCV, OpenVX, and ONNX brings the platform closer to deep learning developers. Independent software vendors and device makers are going to embrace the OpenVINO Toolkit as a standard runtime for AI inferencing.

Next week, I am going to walk you through the steps of installing and configuring the OpenVINO Toolkit on Ubuntu. We will also explore using OpenCV with the OpenVINO Toolkit and the Intel Myriad X VPU to accelerate object detection at the edge. Stay tuned.

Feature image via Pixabay.
