Cloud native computing and edge computing represent two distinct but complementary aspects of modern infrastructure. Cloud native computing is the second wave of cloud computing, promising a better return on cloud investments. Edge computing acts as a conduit between the cloud and Internet of Things (IoT) devices, providing autonomous and intelligent computing to millions of connected devices and applications.
The rise of AI makes edge computing even more important: complex models trained in the cloud are deployed at the edge for inferencing.
Kubernetes has become the gold standard for orchestrating containerized workloads running in the data center and public cloud. In a short span of time, the cloud native ecosystem added multiple capabilities that made Kubernetes a robust and reliable platform to run both web-scale applications and enterprise line-of-business applications.
Public cloud vendors with investments in IoT platforms are extending their offerings to the edge. Device registry, communication, deployment, and management of IoT applications primarily run in the cloud with extended support for the edge. These vendors are now connecting the dots across the IoT, ML, and AI platforms to seamlessly push ML models from the cloud to the edge. Azure IoT Edge, AWS Greengrass, and Google Cloud IoT Edge are examples of edge platforms that extend the public cloud. Startups such as FogHorn, Swim.ai, and Rigado are building multicloud, multiaccess edge computing platforms.
Kubernetes is fast becoming the universal scheduler for managing resources that go beyond containers. The control plane of Kubernetes is designed to handle tens of thousands of containers running across hundreds of nodes. This architecture is well-suited to managing scalable, distributed edge deployments. Each edge computing device can be treated as a node, while one or more connected devices can be mapped to pods. Developers and operators can use the familiar kubectl tool or Helm charts to push containerized IoT applications that run on one or more edge devices. This approach makes Kubernetes the control plane not just for containers but also for millions of devices managed through an autonomous edge computing layer.
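To make the node-to-device mapping concrete, here is a minimal sketch of what pushing a containerized IoT application to edge nodes with kubectl could look like. The manifest below is illustrative, not taken from KubeEdge's documentation: the `node-role.kubernetes.io/edge` label and the container image are assumptions.

```yaml
# Hypothetical Deployment that targets edge nodes, assuming those nodes
# carry an illustrative "node-role.kubernetes.io/edge" label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-reader
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sensor-reader
  template:
    metadata:
      labels:
        app: sensor-reader
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # assumed edge-node label
      containers:
      - name: sensor-reader
        image: example.com/sensor-reader:1.0   # hypothetical image
```

Applied with a standard `kubectl apply -f sensor-reader.yaml`, the scheduler places the pod only on nodes carrying the edge label, which is what lets the same tooling drive both data center and edge workloads.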
The cloud native community has been exploring the use of Kubernetes for IoT and edge computing. Microsoft attempted this through the Virtual Kubelet approach. Huawei has built an Intelligent Edge Fabric based on Kubernetes. In June 2018, Google, Huawei, Red Hat, and VMware started the IoT Edge Working Group to formalize the efforts. At KubeCon+CloudNativeCon 2018 in Seattle, Huawei presented KubeEdge, the official project to bring the power of Kubernetes to the edge.
KubeEdge is based on Huawei’s Intelligent Edge Fabric (IEF) — a commercial IoT edge platform built on Huawei IoT PaaS. A large part of IEF has been modified and open sourced for KubeEdge. With the v0.2 release, KubeEdge is stable and complete enough to address the key use cases related to IoT and the edge. It can be installed on a supported Linux distribution and on an ARM device like a Raspberry Pi.
As a fan of Kubernetes and an IoT enthusiast, I am fascinated by KubeEdge's design and architecture. Unlike the nodes of a typical Kubernetes cluster, edge nodes have to work in a completely disconnected mode. A large ship might run multiple edge computing nodes that don’t talk to the control plane until they regain connectivity. This pattern is very different from the original design of Kubernetes master and worker nodes.
KubeEdge elegantly tackles this problem by combining a message bus with a local datastore, which makes edge nodes autonomous and independent. The desired configuration stored in the control plane is synchronized to the local datastore of the edge device, where it is cached until the next handshake. The same mechanism applies to the current state of devices, which is persisted in the edge device's datastore.
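The cache-and-sync idea can be sketched in a few lines of Python. This is a simplified illustration of the pattern, not KubeEdge's actual schema or code; the table layout and function names are assumptions.

```python
import json
import sqlite3

# Sketch of edge autonomy: desired configuration received from the cloud
# is cached in a local SQLite datastore, so the edge node can keep serving
# it even when the control plane is unreachable.

def open_store(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS desired (name TEXT PRIMARY KEY, spec TEXT)")
    return db

def sync_from_cloud(db, manifests):
    # Runs during a handshake with the control plane: overwrite the cached
    # desired state with whatever the cloud currently wants.
    for name, spec in manifests.items():
        db.execute("INSERT OR REPLACE INTO desired VALUES (?, ?)",
                   (name, json.dumps(spec)))
    db.commit()

def desired_state(db, name):
    # Always answered from the local cache, connected or not.
    row = db.execute("SELECT spec FROM desired WHERE name = ?", (name,)).fetchone()
    return json.loads(row[0]) if row else None

db = open_store()
sync_from_cloud(db, {"sensor-app": {"image": "sensor:1.2", "replicas": 1}})
# Connectivity can drop here; the cached spec still answers queries.
print(desired_state(db, "sensor-app"))  # {'image': 'sensor:1.2', 'replicas': 1}
```

The key property is that reads never depend on the network: the edge keeps reconciling against the last synchronized desired state until the next handshake replaces it.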
KubeEdge takes advantage of Kubernetes primitives such as controllers and custom resource definitions. Like the ReplicationController and StatefulSet controllers, there is an Edge Controller within the control plane that talks to the Edged runtime deployed on the device. This design makes it possible to use kubectl to manage edge deployments.
For machine-to-machine communication and duplex communication between the edge and the control plane, KubeEdge relies on Mosquitto, a popular open source MQTT broker from the Eclipse Foundation. The platform also supports a device twin to maintain the state of IoT devices. SQLite is used as the datastore to persist the device twin state and the messages flowing between the edge and the control plane. WebSockets enable the communication between the edge and the master nodes.
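To illustrate what a device twin backed by SQLite might look like, here is a minimal sketch. The schema, device name, and properties below are hypothetical, not KubeEdge's real data model; it only shows the general pattern of persisting desired versus reported state and deriving the delta between them.

```python
import json
import sqlite3

# A device twin keeps two documents per device: the desired state set from
# the cloud and the reported state observed on the physical device. The
# delta between them tells the edge what still needs to change.

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE twin (device TEXT, kind TEXT, doc TEXT, "
    "PRIMARY KEY (device, kind))"
)

def set_twin(device, kind, doc):
    db.execute("INSERT OR REPLACE INTO twin VALUES (?, ?, ?)",
               (device, kind, json.dumps(doc)))
    db.commit()

def get_twin(device, kind):
    row = db.execute("SELECT doc FROM twin WHERE device = ? AND kind = ?",
                     (device, kind)).fetchone()
    return json.loads(row[0]) if row else {}

def delta(device):
    # Properties whose desired value is not yet reported by the device.
    desired = get_twin(device, "desired")
    reported = get_twin(device, "reported")
    return {k: v for k, v in desired.items() if reported.get(k) != v}

set_twin("valve-7", "desired", {"position": "open", "rate": 5})
set_twin("valve-7", "reported", {"position": "closed", "rate": 5})
print(delta("valve-7"))  # {'position': 'open'}
```

Because the twin lives in the edge device's local SQLite file, the last known reported state survives restarts and disconnections, and can be reconciled with the cloud when connectivity returns.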
KubeEdge is the first step towards making Kubernetes the unified control plane for edge computing. Its success largely depends on adoption by mainstream cloud providers, including Amazon, Google, and Microsoft.
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar to learn how to use Azure IoT Edge.
The Cloud Native Computing Foundation, which manages the Kubernetes project, and Kubecon+CloudNativeCon are sponsors of The New Stack.