
Tutorial: Install a Highly Available K3s Cluster at the Edge

Aug 21st, 2020 11:30am
Feature image via Pixabay.

This is the final part of the K3s tutorial series. In the previous tutorial, we saw how to set up a multinode etcd cluster. We will leverage the same infrastructure to set up and configure a highly available Kubernetes cluster based on K3s.

Kubernetes Clusters in High Availability Mode

The control plane of the Kubernetes cluster is mostly stateless. The only stateful component of the control plane is the etcd database, which acts as the single source of truth for the entire cluster. The API server acts as the gateway to the etcd database through which both internal and external consumers access and manipulate the state.

It is important that the etcd database is configured in HA mode to ensure that there is no single point of failure. There are two options for the topology of a highly available (HA) Kubernetes cluster, depending on how etcd is set up.

The first topology is based on the stacked cluster design where each node runs an etcd instance along with the control plane. Each control plane node runs an instance of the kube-apiserver, kube-scheduler, and kube-controller-manager. The kube-apiserver is exposed to worker nodes using a load balancer.

Each control plane node creates a local etcd member and this etcd member communicates only with the kube-apiserver of this node. The same applies to the local kube-controller-manager and kube-scheduler instances.

This topology demands a minimum of three stacked control plane nodes for an HA Kubernetes cluster. Kubeadm, the popular cluster installation tool, uses this topology to configure a Kubernetes cluster.

The second topology uses an external etcd cluster installed and managed on a completely different set of hosts.

In this topology, each control plane node runs an instance of the kube-apiserver, kube-scheduler, and kube-controller-manager where each etcd host communicates with the kube-apiserver of each control plane node.

This topology requires twice the number of hosts as the stacked HA topology. A minimum of three hosts for control plane nodes and three hosts for etcd nodes are required for an HA cluster with this topology.

For more information on bootstrapping a cluster, refer to the official Kubernetes documentation.

K3s in a Highly Available Mode

Since K3s is mostly deployed at the edge with limited hardware resources, it may not be possible to run the etcd database on dedicated hosts. The deployment architecture closely mimics the stacked topology, except that the etcd database is configured beforehand.

For this walkthrough, I am using bare-metal infrastructure running on Intel NUC hardware with the below mapping:

Refer to the previous part of this tutorial series to install and configure etcd on the first three nodes with IP addresses 10.0.0.60, 10.0.0.61, and 10.0.0.62.

Installing K3s Servers

Let’s start by installing the server on all the nodes where etcd is installed. SSH into the first node and set the environment variables below. This assumes that you followed the steps explained in the previous tutorial to configure the etcd cluster.
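A sketch of those variables, assuming the etcd client certificates generated in the previous tutorial live under /etc/etcd/pki (adjust the paths to match your setup):

```shell
# Comma-separated endpoints of the three etcd members
export K3S_DATASTORE_ENDPOINT="https://10.0.0.60:2379,https://10.0.0.61:2379,https://10.0.0.62:2379"

# TLS material for etcd client authentication -- these paths are
# assumptions; point them at the CA, certificate, and key generated
# while setting up the etcd cluster
export K3S_DATASTORE_CAFILE="/etc/etcd/pki/ca.pem"
export K3S_DATASTORE_CERTFILE="/etc/etcd/pki/etcd.pem"
export K3S_DATASTORE_KEYFILE="/etc/etcd/pki/etcd-key.pem"
```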


These environment variables instruct the K3s installer to utilize the existing etcd database for state management.

Next, we will populate the K3S_TOKEN environment variable with a token that’s used by the agents to join the cluster.
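One way to create the token is to generate a random string; any value works as long as every server and agent uses the same one:

```shell
# Generate a 32-character random token and note it down --
# the agent nodes need it to join the cluster
export K3S_TOKEN=$(openssl rand -hex 16)
echo "$K3S_TOKEN"
```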


We are ready to install the server on the first node. Run the command below to start the process.
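With the variables above exported in the same shell, the standard K3s install script does the rest (a sketch):

```shell
# Fetch and run the K3s installer; because K3S_DATASTORE_ENDPOINT is set,
# the node starts as a server backed by the external etcd cluster
curl -sfL https://get.k3s.io | sh -
```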


Repeat these steps in node-2 and node-3 to launch additional servers.

At this point, you have a three-node K3s cluster that runs the control plane and etcd components in a highly available mode.
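To confirm, list the nodes with the bundled kubectl (node names will reflect your hostnames):

```shell
# All three server nodes should eventually report Ready
sudo kubectl get nodes
```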


You can check the status of the service with the below command:
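For example, via systemd:

```shell
# The server runs as the k3s systemd service on each node
sudo systemctl status k3s
```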


Installing K3s Agents

With the control plane up and running, we can easily add worker nodes or agents to the cluster. We just need to make sure that we use the same token that was associated with the servers.

SSH into one of the worker nodes and run the commands.
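A sketch of those commands, assuming the agent joins through the first server (10.0.0.60) using the token generated during the server installation:

```shell
# URL of any one of the K3s servers
export K3S_URL="https://10.0.0.60:6443"

# Must match the token used when installing the servers
export K3S_TOKEN="<token from the server installation>"
```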


The K3S_URL environment variable is a hint to the installer to configure the node as an agent connected to an existing server.

Finally, run the same script as we did in the previous step.
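Because K3S_URL is set, the installer configures this node as an agent (the k3s-agent service) rather than a server:

```shell
curl -sfL https://get.k3s.io | sh -
```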


Check if the new node is added to the cluster.
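From one of the server nodes:

```shell
# The worker should appear alongside the three servers
sudo kubectl get nodes
```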

Congratulations! You have successfully installed a highly available K3s cluster backed by an external etcd database.

Verifying the etcd Database

Let’s make sure that the K3s cluster is indeed using the etcd database for state management.

We will launch a simple Nginx pod in the K3s cluster.
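A minimal sketch using kubectl run (the pod name nginx is arbitrary):

```shell
# Create a single Nginx pod in the default namespace
sudo kubectl run nginx --image=nginx --restart=Never

# Verify it reaches the Running state
sudo kubectl get pods
```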



The pod spec and the status should be stored in the etcd database. Let’s retrieve them through the etcdctl CLI. Install the jq utility to parse the JSON output.

Since the output is encoded in base64, we will decode it via the base64 tool.
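Putting it together, a sketch of the query, assuming the same etcd endpoints and certificate paths used earlier; Kubernetes stores pod objects under the /registry prefix:

```shell
# Read the key for the nginx pod as JSON, extract the base64-encoded
# value with jq, and decode it with the base64 tool
ETCDCTL_API=3 etcdctl \
  --endpoints https://10.0.0.60:2379 \
  --cacert /etc/etcd/pki/ca.pem \
  --cert /etc/etcd/pki/etcd.pem \
  --key /etc/etcd/pki/etcd-key.pem \
  get /registry/pods/default/nginx -w json | jq -r '.kvs[0].value' | base64 -d
```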


The output shows that the pod has an associated key and value in the etcd database. Some special characters are not rendered correctly, but the output reveals enough data about the pod.

This tutorial series demonstrates how to set up and configure Rancher Labs’ K3s at the edge in a highly available mode.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.
