How K3s, Portworx, and Calico Can Serve as a Foundation of Cloud Native Edge Infrastructure

Kubernetes is finding its way from the cloud to the edge via the data center. In the early days, Kubernetes was associated with hyperscale workloads running in the public cloud. Within a few years, enterprises started to adopt Kubernetes in the data center, and it eventually became the consistent, unified infrastructure layer for running workloads in hybrid cloud and multicloud environments.
The rise of the Internet of Things and AI prompted the industry to move compute capabilities closer to the data, giving rise to the edge computing layer.
Edge computing is an intermediary between the devices and the cloud or data center. It applies business logic to the data ingested by devices while providing real-time analytics. Acting as a conduit between the origin of the data and the cloud, it dramatically reduces the latency of the round trip to the cloud. Since the edge can process and filter the data before sending it to the cloud, it also reduces bandwidth costs. Finally, edge computing helps organizations meet data locality and sovereignty requirements through local processing and storage.
Edge computing exposes essential cloud platform services such as data ingestion, data processing, stream analytics, storage, device management, and machine learning inference.
Kubernetes is fast becoming the preferred infrastructure for edge computing. Its promise of agility, scale, and security now extends to the edge. Modern software delivery mechanisms based on CI/CD and GitOps make it easy to manage the applications running at the edge, and tens of thousands of Kubernetes clusters deployed at edge locations are managed by meta control planes such as Anthos, Arc, Tanzu, and Rancher.
The Building Blocks of Edge
Customers planning to run Kubernetes at the edge don’t have many choices. They have to assemble the stack from best-of-breed open source and commercial software from the cloud native ecosystem.
Most commercial Kubernetes distributions are not optimized to run in resource-constrained environments. A Kubernetes distribution deployed at the edge should have a smaller footprint without compromising on standard API conformance and compatibility.
Storage at the edge is one of the key building blocks of the infrastructure. It has to support the diverse needs of stateful workloads dealing with unstructured datasets, NoSQL databases, and shared file systems. It should be able to take periodic snapshots of data and store them in the cloud. Advanced capabilities such as migration and disaster recovery make the edge computing layer resilient.
The network layer should provide security and isolation for workloads running at the edge. In the majority of scenarios, the edge infrastructure is shared by multiple groups. For instance, in a smart building use case, the same edge cluster may run workloads for each floor of the building. Cluster administrators should be able to apply network policies that prevent applications running in one namespace from accessing application data in another namespace, as the sketch below illustrates. The network layer should also provide security through intrusion detection and declarative policies.
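As a minimal sketch, a default deny-all ingress policy expressed through the standard Kubernetes NetworkPolicy API blocks all incoming traffic to a namespace unless another policy explicitly allows it. The namespace name here is illustrative:

```yaml
# Deny all ingress traffic to every pod in the floor-3 namespace
# unless another policy explicitly allows it (namespace name is
# illustrative).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: floor-3
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
```

Note that a cluster will accept such a policy even without a policy-aware network plugin, but it is only enforced by a CNI such as Calico, discussed below.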
K3s – The Kubernetes Distribution for the Edge
K3s from Rancher Labs is a flavor of Kubernetes that is highly optimized for the edge. Though K3s is a simplified, miniature version of Kubernetes, it doesn’t compromise API conformance or functionality.
From kubectl to Helm to Kustomize, almost all the tools of the cloud native ecosystem work seamlessly with K3s. In fact, K3s is a CNCF-certified, conformant Kubernetes distribution ready to be deployed in production environments, and it recently joined the CNCF as a Sandbox project. Almost all the workloads that run on a cluster based on an upstream Kubernetes distribution are guaranteed to work on a K3s cluster.
K3s effectively tackles the compute layer by orchestrating the infrastructure and workloads running at the edge.
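As an illustration, a single-node K3s cluster can be stood up with the official install script. The commands below follow the K3s quick start and assume a Linux host with root access:

```sh
# Install K3s (server and agent on a single node) via the official script.
curl -sfL https://get.k3s.io | sh -

# K3s bundles kubectl; verify the node and the system pods.
sudo k3s kubectl get nodes
sudo k3s kubectl get pods --all-namespaces
```

The generated kubeconfig at /etc/rancher/k3s/k3s.yaml can then be used with standard kubectl, Helm, or Kustomize from a workstation.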
Portworx – The Container-Native Storage Layer
Portworx is a software-defined storage platform built for containers and microservices. It abstracts multiple storage devices to expose a unified, overlay storage layer to cloud native applications.
One of the key differentiating factors of Portworx is container-granular storage volumes that can be adapted to different use cases. For example, storage administrators can define a storage class meant for running a NoSQL database in a highly available mode while creating another storage class for shared volumes. Both scenarios are served by the same storage backend, without the need to manage two separate storage layers.
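A hedged sketch of what those two classes might look like, using the Portworx in-tree provisioner; the class names are illustrative, and the parameter values should be checked against the documented Portworx StorageClass options:

```yaml
# StorageClass for a NoSQL database needing high availability:
# Portworx keeps three synchronous replicas of each volume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-db-ha
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
---
# StorageClass for shared volumes mounted by multiple pods,
# backed by the same Portworx storage pool.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-shared
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  sharedv4: "true"
```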
An edge computing layer deals with a variety of workloads, including streaming, data storage, analytics, complex event processing, and AI inference. Some of these workloads demand dedicated storage volumes while others need shared volumes. For example, multiple pods serving AI inference can share the same storage volume populated with the ML model, while a message broker demands a dedicated volume to persist messages.
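For instance, the inference pods could mount a ReadWriteMany claim against the shared class sketched above; the claim name and size are illustrative:

```yaml
# A shared volume holding the ML model, mounted by multiple
# inference pods at once via the ReadWriteMany access mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: inference-models
spec:
  storageClassName: px-shared
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```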
Portworx removes the pain of managing multiple storage layers through a unified approach. Capabilities such as snapshots, scheduled backups, migration, integrated RBAC, and predictive capacity planning make Portworx an ideal choice for the edge.
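As one hedged example, a point-in-time snapshot of the message broker’s volume can be requested through the generic Kubernetes CSI snapshot API; the snapshot class and claim names below are assumptions for illustration:

```yaml
# Request a point-in-time snapshot of the broker's volume using
# the standard CSI snapshot API; class and PVC names are assumed.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: broker-data-snap
spec:
  volumeSnapshotClassName: px-csi-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: broker-data    # assumed PVC name
```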
Portworx 2.6, the most recent version, officially supports K3s clusters.
Project Calico – Secure Network for the Edge
Project Calico brings fine-grained network policies to Kubernetes. While Kubernetes has extensive support for Role-Based Access Control (RBAC), the default networking stack in the upstream Kubernetes distribution doesn’t enforce fine-grained network policies. Project Calico provides that fine-grained control by allowing or denying traffic to and from Kubernetes workloads.
It’s common practice for DevOps teams to logically group applications into Kubernetes namespaces. In an edge computing scenario, a K3s cluster may run multiple workloads separated by namespaces. Project Calico enables strong isolation between namespaces through declarative policies, ensuring that the data streamed by sensors is ingested and processed only by authorized applications.
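Building on the default-deny sketch shown earlier, a policy like the following admits traffic to the stream processors only from pods labeled as ingestion workloads; the labels, namespace, and port are all illustrative:

```yaml
# Allow only the labeled ingestion workloads to reach the stream
# processors in this namespace; everything else remains blocked
# by the default-deny policy shown earlier.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingestion-to-processor
  namespace: streaming
spec:
  podSelector:
    matchLabels:
      role: processor
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: ingestion
      ports:
        - protocol: TCP
          port: 9092          # illustrative broker port
```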
The commercial offering, Calico Enterprise, adds intrusion detection that can identify suspicious activity, along with multicluster management and multicloud federation that make it easy to manage distributed edge infrastructure from a single pane of glass.
With minor changes to the installation process, Calico can be easily integrated with K3s.
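A rough outline of that integration, based on the documented K3s server flags: disable the bundled Flannel CNI and the built-in network policy controller so that Calico can take over both roles, then apply the Calico manifest (the manifest URL varies by Calico release, so it is referenced generically here):

```sh
# Start K3s without its default Flannel CNI and built-in network
# policy enforcement so that Calico handles both.
curl -sfL https://get.k3s.io | sh -s - \
  --flannel-backend=none \
  --disable-network-policy \
  --cluster-cidr=192.168.0.0/16

# Apply the Calico manifest for your Calico release (download
# calico.yaml from the Project Calico documentation first).
kubectl apply -f calico.yaml
```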
In the next tutorial, I will walk you through the steps involved in configuring an edge cluster based on K3s, Portworx, and Calico. In the subsequent parts, we will also explore an AIoT deployment that takes advantage of the stack. Stay tuned!
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.