Tutorial: Configure Cloud Native Edge Infrastructure with K3s, Calico, Portworx

In the previous part of this series, I introduced the core building blocks of a cloud native edge computing stack: K3s, Project Calico, and Portworx.
This tutorial will walk you through the steps involved in installing and configuring this software on an edge cluster, a set of Intel NUC mini PCs running Ubuntu 18.04. This infrastructure can be used for running reliable, scalable, and secure AI and IoT workloads at the edge.
Customizing the K3s Installation for Calico
By default, K3s runs with flannel as the Container Network Interface (CNI) plugin, using VXLAN as the default backend. We will replace it with Calico.
To integrate the Calico networking stack with K3s, we need to customize the installation to disable flannel and allow a custom CNI plugin.
Note that you need at least three server nodes in the K3s cluster at the edge for high availability.
On the first node, designated as the server, run the commands below.
export K3S_TOKEN="secret_edgecluster_token"
export INSTALL_K3S_EXEC="--flannel-backend=none --disable=traefik --cluster-cidr=172.16.2.0/24 --cluster-init"
curl -sfL https://get.k3s.io | sh -
If 172.16.2.0/24 is already in use within your network, you must select a different pod network CIDR by replacing 172.16.2.0/24 in the above command.
On the remaining server nodes, run the following commands. Note that we added the --server switch to the installer, pointing it to the IP address of the first node.
export K3S_TOKEN="secret_edgecluster_token"
export INSTALL_K3S_EXEC="--flannel-backend=none --disable=traefik --cluster-cidr=172.16.2.0/24 --server https://10.0.0.60:6443"
curl -sfL https://get.k3s.io | sh -
To configure worker nodes or agents, run the following commands:
export K3S_URL=https://10.0.0.60:6443
export K3S_TOKEN="secret_edgecluster_token"
curl -sfL https://get.k3s.io | sh -
Replace the value of K3S_URL with the IP address of your K3s server.
At the end of this step, you should have a cluster with four nodes.
Since the network is not configured yet, none of these nodes are ready. As soon as we apply Calico specs to the cluster, the nodes will become ready.
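You can verify the node status from one of the server nodes with the command below (the node names in your cluster will differ):

sudo k3s kubectl get nodes

Each node will show a NotReady status until Calico is installed.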
Before proceeding to the next step, copy /etc/rancher/k3s/k3s.yaml from one of the server nodes to your local workstation, and point the KUBECONFIG environment variable to the copied file. Don't forget to update the master URL in the YAML file, which points to 127.0.0.1 by default. This provides remote access to the K3s cluster through the kubectl CLI.
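As a minimal sketch of this step, assuming the first server's IP is 10.0.0.60 and a login user of ubuntu (both placeholders for your environment), the commands on a Linux workstation look like this:

# Copy the kubeconfig from a server node (the file is owned by root on the server)
scp ubuntu@10.0.0.60:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s-edge.yaml

# Replace the default loopback master URL with the server's address
sed -i 's/127.0.0.1/10.0.0.60/' ~/.kube/k3s-edge.yaml

# Point kubectl at the edge cluster
export KUBECONFIG=~/.kube/k3s-edge.yaml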
Installing Calico on the Multinode K3s Cluster
We will start by downloading the Calico manifests and modifying them.
wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
wget https://docs.projectcalico.org/manifests/custom-resources.yaml
Open the custom-resources.yaml file and change the CIDR to the pod network range specified during the K3s installation (172.16.2.0/24 in our example).
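The CIDR is defined in the Installation resource within the manifest. After the edit, the relevant section should look similar to the snippet below (surrounding fields may vary slightly across Calico versions):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 172.16.2.0/24   # must match the --cluster-cidr passed to K3s
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()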
Apply both manifests to configure the Calico network for the K3s cluster.
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
In a few minutes, the cluster becomes ready.
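You can confirm the state change from your workstation using the kubeconfig set up earlier:

kubectl get nodes

All four nodes should now report a Ready status.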
Finally, modify the cni-config configmap in the calico-system namespace to enable IP forwarding.
kubectl edit cm cni-config -n calico-system
Change the value shown below to enable IP forwarding.
"container_settings": {
    "allow_ip_forwarding": true
}
Verify that Calico is up and running with the following command:
kubectl get pods -n calico-system
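With the operator-based installation, you should see the calico-node DaemonSet pods along with calico-typha and calico-kube-controllers in the Running state. The pod names and ages below are illustrative:

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7688765588-9rqg8   1/1     Running   0          3m
calico-node-4fl5r                          1/1     Running   0          3m
calico-node-b9c6m                          1/1     Running   0          3m
calico-node-hwtmf                          1/1     Running   0          3m
calico-node-xp7nt                          1/1     Running   0          3m
calico-typha-5f9cb6cf9f-bnpzt              1/1     Running   0          3m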
Installing Portworx on K3s
Portworx 2.6 and above support the K3s distribution. The installation process on K3s is no different from that on other flavors of Kubernetes. Follow the steps mentioned in the tutorial on installing Portworx on a bare-metal cluster.
If you don’t have an etcd cluster handy, you can choose the built-in KVDB in the PX-Central installation wizard.
I chose the NVMe disk attached to each host as the storage option. Modify this based on your storage configuration.
One of the important prerequisites for K3s is support for the Container Storage Interface (CSI). Make sure you select the Enable CSI option in the last step.
Copy the specification and apply it to your cluster.
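The wizard generates the specification as YAML (or a kubectl command with a URL). Assuming you saved it locally as px-spec.yaml, a hypothetical file name, applying it is a single step:

kubectl apply -f px-spec.yaml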
In a few minutes, the Portworx storage cluster on K3s will be up and running.
kubectl get pods -l name=portworx -n kube-system
The CSI driver is attached as a sidecar to each of the pods in the DaemonSet, which is why we see two containers per pod.
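The output should resemble the listing below, with two containers reported ready in each pod; the pod name suffixes and ages are illustrative:

NAME             READY   STATUS    RESTARTS   AGE
portworx-4mdqv   2/2     Running   0          6m
portworx-9tsvn   2/2     Running   0          6m
portworx-kfg2x   2/2     Running   0          6m
portworx-zq8dc   2/2     Running   0          6m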
SSH into one of the nodes and check the Portworx cluster status with the following command:
sudo /opt/pwx/bin/pxctl status
We now have a fully configured edge infrastructure based on K3s, Calico, and Portworx. In the next part of this series, we will deploy an AIoT workload that runs at the edge.