
Kubernetes for Edge Computing: The Microsoft Azure Approach

May 14th, 2018 8:57am

Edge devices are designed to live at the edge of the network, often on a slow or occasional connection, and increasingly they’re used for much more than monitoring. Microsoft has created a managed service, called Azure IoT Edge, to put analytics and intelligence on those devices using containers. Think of it as a way to move cloud workloads like analytics, machine learning, SQL Server databases and even serverless functions onto devices that are centrally deployed and managed, even if they’re only occasionally connected.

As announced at the recent Build conference, those workloads now include Microsoft’s Cognitive Services; the first one available is Custom Vision, which uses the same ResNet-50 deep learning network that Microsoft Bing uses for image recognition, but with the final layer replaced by one trained against your specific images to do image classification and object recognition (finding and labeling the location of known objects in the image with a bounding box).

For object recognition, you can use the Custom Vision portal to find all the objects that have been recognized and tag them specifically (including adjusting the bounding boxes to delineate objects more precisely to help with the training). Upload as few as 50 labeled images per class as the training set and in less than a minute you have a trained model that you can deploy as a module to an Azure IoT Edge device.

Keep feeding the results of the image recognition back in (or add more training images) and you get a better model that you can update devices with.

What you get in the export is a ‘compact’ version of the model that can run on more constrained mobile devices; you need to select a compact domain to train the model in before exporting (currently the options are General, Landmarks and Retail, a smaller set of domains than you get if you call the model as a cloud API). The tradeoff is that the compact model may not be as accurate as the same model running in the cloud, where there are more resources, but it can run right where you need the recognition done, even if there’s no network connection.

The IoT Edge export from the Custom Vision service is a Dockerfile, a TensorFlow model and service code that can run in a Windows or Linux container (on x64 or ARM). The Azure IoT Edge runtime is open source.
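If you want to sanity-check an exported model before wiring it into an IoT Edge deployment, you can build and run the container locally. The sketch below assumes the layout of a typical export; the image name, port mapping and prediction endpoint path are illustrative rather than guaranteed, so check the README included with your export.

# Build the container image from the Dockerfile included in the Custom Vision export
docker build -t custom-vision-module:0.1 .

# Run it locally, exposing the prediction service on port 8080
docker run -d -p 8080:80 --name vision-test custom-vision-module:0.1

# Send a test image to the (assumed) scoring endpoint and inspect the JSON predictions
curl -X POST http://localhost:8080/image \
  -H "Content-Type: application/octet-stream" \
  --data-binary @test-image.jpg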


If you want to create your own data models instead of using Cognitive Services, you can package those as Docker containers to deploy to IoT Edge in the same way. To simplify packaging up models built with Azure Machine Learning, use the AI Toolkit for Azure IoT Edge.
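As a rough illustration of that workflow (the registry name, image name and model are placeholders, not part of the AI Toolkit itself), packaging a custom model comes down to building a container image around it and pushing that image to a registry your edge devices can pull from:

# Build a module image that wraps your trained model plus a small scoring server
docker build -t myregistry.azurecr.io/anomaly-model:1.0 .

# Log in to the Azure Container Registry the IoT Edge devices will pull from
az acr login --name myregistry

# Push the image so it can be referenced in an IoT Edge deployment
docker push myregistry.azurecr.io/anomaly-model:1.0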

And now there are more ways to deploy those containers onto edge devices managed by IoT Edge than just through the Azure portal.

Microsoft has just published an open source IoT Edge Virtual Kubelet provider on GitHub, which you can install in your Kubernetes cluster using Helm. It makes it easier to manage IoT Edge deployments from inside a Kubernetes environment like Azure Kubernetes Service (the new, less confusing name for the fully managed Kubernetes container orchestration service on Azure), using familiar Kubernetes concepts like manifests.
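A minimal sketch of that setup might look like the following; the repository path, chart location, release name and secret layout are assumptions based on the project’s announcement, so treat the exact values as illustrative rather than authoritative.

# Clone the provider repository (assumed location on GitHub)
git clone https://github.com/Azure/iot-edge-virtual-kubelet-provider.git
cd iot-edge-virtual-kubelet-provider

# Store the IoT Hub connection string in a Kubernetes secret for the provider to read
kubectl create secret generic my-secrets \
  --from-literal=hub0-cs='<IoT Hub connection string>'

# Install the provider chart; it registers a virtual node backed by your IoT Hub
helm install --name iot-edge-connector ./src/charts/iot-edge-connector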

“This gives you a consistent interface where you can push an entire Kubernetes workload across cloud and edge,” — Sam George.

As Sam George, director of Azure IoT engineering, explained to The New Stack at Build, “In a Kubernetes cluster, Kubernetes nodes are typically bound to a VM; if I push containers to the cluster, they get placed inside a VM. A Virtual Kubelet works like a proxy. It can interpret the Kubernetes commands and do something else with [the container] beside placing it in a VM.”

So far, Virtual Kubelets have been about bursting out from a Kubernetes cluster to a cloud container service like Azure Container Instances or Hyper to get an advantage like scale, speed or per-second billing without having to worry about managing agent nodes. The ACI Connector is a Virtual Kubelet. Here, though, the IoT Edge Virtual Kubelet provider is placing the containers onto edge devices that then look as if they’re part of the cluster.

Host the provider in a Kubernetes cluster and it shows up as just another node, George explained. “When you push containers to the IoT Edge provider we interpret that push as an IoT Edge deployment; we go through our Azure IoT Hub service and push that container out to an edge device. The Kubernetes master thinks it’s just got a big Kubernetes cluster running in the cloud but in fact, some of the containers are now running on edge devices.”
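In practice, that means a standard Deployment manifest can be pointed at the virtual node. The node selector label, toleration key and image below are assumptions for the sake of illustration; the provider’s documentation defines the actual labels and taints it registers.

# Apply a Deployment that is pinned to the IoT Edge virtual node
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vision-module
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vision-module
  template:
    metadata:
      labels:
        app: vision-module
    spec:
      containers:
      - name: vision-module
        image: myregistry.azurecr.io/custom-vision-module:0.1
      # Schedule onto the virtual node registered by the IoT Edge provider
      nodeSelector:
        type: virtual-kubelet
      tolerations:
      - key: azure.com/iotedge
        effect: NoSchedule
EOF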

The provider is still in development and not all of the integrations are finished, but it promises a lot of flexibility. The big advantage of IoT Edge is bringing cloud services consistently to edge devices, and using Kubernetes extends that beyond the Azure services that are already integrated with IoT Edge. You can use it to bring data models to edge devices, but if there’s a new version of, say, the compression algorithm you use, you can deploy that to cloud and edge systems in the same way.

The IoT Edge provider can push to multiple edge devices, using tags and device selector queries to target the deployment to the correct edge devices. IoT devices with the same software configuration will often be connected to IoT hubs in different regions for performance or resiliency; creating a virtual Kubernetes node for each of those IoT hubs means the same kubectl command can apply deployment manifests to devices with the same configuration even though they’re attached to different hubs.
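On the IoT Hub side, that targeting is expressed with device twin tags and a target condition. The hub name, device ID and tag values below are placeholders, and the Azure CLI IoT commands are shown as one illustrative way to set this up, not as part of the Kubernetes provider itself:

# Tag an edge device so deployments can select it
az iot hub device-twin update --hub-name my-edge-hub --device-id camera-001 \
  --set tags='{"environment": "factory-a"}'

# Create an IoT Edge deployment that only targets devices matching the query
az iot edge deployment create --hub-name my-edge-hub \
  --deployment-id vision-module-v1 \
  --content ./deployment.json \
  --target-condition "tags.environment='factory-a'" \
  --priority 10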

It doesn’t matter if those IoT devices are on slow connections, as long as they’re connected, George said. “The part that doesn’t work yet is when there are times when the edge device is disconnected. The Virtual Kubelet works great where there is connectivity to the edge at all times, because it’s reporting back two-way communication with the health of each container.”
