
Tutorial: Configure Cloud Native Edge Infrastructure with K3s, Calico, Portworx


In the previous part of this series, I introduced the core building blocks of the cloud native edge computing stack: K3s, Project Calico, and Portworx.

This tutorial will walk you through the steps involved in installing and configuring this software on an edge cluster, a set of Intel NUC mini PCs running Ubuntu 18.04. This infrastructure can be used for running reliable, scalable, and secure AI and IoT workloads at the edge.

Customizing K3s Installation for Calico

By default, K3s runs with flannel as the Container Network Interface (CNI) plugin, using VXLAN as the default backend. We will replace flannel with Calico, which is also a CNI-compliant plugin.

To integrate the Calico networking stack with K3s, we need to customize the installation so that K3s comes up without flannel and leaves the CNI configuration to Calico.

Note that you need at least three server nodes in the K3s cluster at the edge for high availability.

On the first node, designated as a server, run the following commands.
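A minimal sketch of that step, assuming the standard get.k3s.io installer script: flannel and the built-in network policy controller are disabled so that Calico can take over pod networking, and --cluster-init (embedded etcd) is assumed for the multiserver setup.

# First server: no flannel, no built-in network policy, pod CIDR matching the Calico configuration
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --cluster-init --flannel-backend=none --disable-network-policy --cluster-cidr=172.16.2.0/24" sh -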




If 172.16.2.0/24 is already in use within your network, select a different pod network CIDR and replace 172.16.2.0/24 in the above command accordingly.

On the remaining server nodes, run the following commands. Note that we added the --server switch to the installer, pointing it at the IP address of the first node.
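Again, a sketch under the same assumptions; the first server's IP address and the join token (generated on the first server at /var/lib/rancher/k3s/server/node-token) are placeholders.

# Additional servers join the first one through the --server switch
curl -sfL https://get.k3s.io | K3S_TOKEN=<token-from-first-server> INSTALL_K3S_EXEC="server --server https://<first-server-ip>:6443 --flannel-backend=none --disable-network-policy --cluster-cidr=172.16.2.0/24" sh -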




To configure worker nodes or agents, run the following commands:
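The sketch below makes the same assumptions; setting K3S_URL makes the installer run K3s in agent mode, and the server IP and token are placeholders.

# Agents only need the server URL and the join token
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token-from-first-server> sh -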




Replace the placeholder in K3S_URL with the IP address of the K3s server.

At the end of this step, you should have a cluster with four nodes.

Since the network is not configured yet, none of these nodes are ready. As soon as we apply Calico specs to the cluster, the nodes will become ready.
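You can confirm this from any of the server nodes with the embedded kubectl, for example:

# kubectl ships with K3s as a subcommand
sudo k3s kubectl get nodes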

Before proceeding to the next step, copy /etc/rancher/k3s/k3s.yaml from one of the server nodes to your local workstation and point the KUBECONFIG environment variable at it. Don't forget to update the server URL in the YAML file, which points to 127.0.0.1 by default. This provides remote access to the K3s cluster through the kubectl CLI.
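A sketch of that step, assuming scp is used and the file is saved to a hypothetical ~/.kube/k3s-edge.yaml; the addresses are placeholders, and since k3s.yaml is readable only by root on the node, you may need to copy it with elevated privileges.

# Copy the kubeconfig from a server node to the workstation
scp ubuntu@<server-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s-edge.yaml

# The file points at 127.0.0.1 by default; switch it to the server's reachable address
sed -i 's/127.0.0.1/<server-ip>/' ~/.kube/k3s-edge.yaml

export KUBECONFIG=~/.kube/k3s-edge.yaml
kubectl get nodes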

Installing Calico on the Multinode K3s Cluster

We will start by downloading the Calico manifests and modifying them.
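At the time of writing, the operator-based install published its manifests at docs.projectcalico.org; newer Calico releases may host them elsewhere.

# Tigera operator plus the custom resources that describe the Calico installation
wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
wget https://docs.projectcalico.org/manifests/custom-resources.yaml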



Open the custom-resources.yaml file and change the CIDR to the pod network range used during the K3s installation (172.16.2.0/24).
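After the edit, the Installation resource should look roughly like this, with every field other than cidr keeping the defaults shipped with the Calico release you downloaded:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 172.16.2.0/24
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()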

Apply both manifests to configure the Calico network for the K3s cluster.
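Using the files downloaded earlier:

kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml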



In a few minutes, the cluster becomes ready.

Finally, modify the cni-config ConfigMap in the calico-system namespace to enable IP forwarding.
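One way to do that is to edit the ConfigMap in place:

kubectl edit cm cni-config -n calico-system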


Change the value shown below to enable IP forwarding.
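Inside the CNI configuration JSON held by the ConfigMap, the container_settings block should end up looking like this:

"container_settings": {
    "allow_ip_forwarding": true
}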


Verify that Calico is up and running.
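The Tigera operator creates all of the Calico Pods in the calico-system namespace, so listing them is a quick check:

kubectl get pods -n calico-system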


Installing Portworx on K3s

Portworx 2.6 and above support the K3s distribution. The installation process on K3s is no different from that on other flavors of Kubernetes. Follow the steps covered in the tutorial on installing Portworx on a bare-metal cluster.

If you don’t have an etcd cluster handy, you can choose the built-in KVDB in the PX-Central installation wizard.

I chose the NVMe disk attached to each host as the storage device. Modify this based on your storage configuration.

One of the important prerequisites for K3s is support for the Container Storage Interface (CSI). Make sure you select the Enable CSI option in the last step.

Copy the specification and apply it to your cluster.
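Whether the wizard hands you a URL or a downloaded YAML file, it is applied like any other manifest; the file name below is a placeholder.

kubectl apply -f <px-spec-from-px-central.yaml>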

In a few minutes, the Portworx storage cluster on K3s will be up and running.
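You can watch the rollout with kubectl; this assumes the generated spec installs Portworx into the kube-system namespace with the name=portworx label, which was the default at the time.

kubectl get pods -n kube-system -l name=portworx -o wide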


The CSI driver is attached as a sidecar to each of the Pods in the DaemonSet, which is why we see two containers in each Pod.

SSH into one of the nodes and check the Portworx cluster status.
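The pxctl utility is installed on every node under /opt/pwx/bin:

sudo /opt/pwx/bin/pxctl status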


We now have a fully configured edge infrastructure based on K3s, Calico, and Portworx. In the next part of this series, we will deploy an AIoT workload running at the edge.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.

Portworx is a sponsor of The New Stack.

Feature Image by Uwe Baumann from Pixabay.
