Tutorial: Install and Configure Portworx on a Bare-Metal Kubernetes Cluster

We looked at the architecture of Portworx in the last part of this series. In this installment, I will walk you through the steps involved in installing and configuring a Portworx storage cluster on a three-node Kubernetes cluster running on bare metal (i.e. not a managed Kubernetes service).
Exploring the Environment
I recently set up a lab with two bare-metal Kubernetes clusters running on Intel NUC machines. Each cluster runs one master and three worker nodes, and the machine configuration is identical across the nodes and clusters. Each Intel NUC is powered by an eighth-generation Core i7 CPU, 32GB of RAM, and 256GB of NVMe storage. I have also added 64GB of external storage through the Thunderbolt/USB-C port.
We will install Portworx in one of the two clusters.
Let’s take a look at the storage configuration. The device /dev/sda is the external storage, while /dev/nvme0n1 is the internal NVMe storage. Every node has the same partitioning scheme and storage configuration.
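If you want to confirm the layout yourself, listing the block devices on each node is enough; a minimal check, assuming lsblk is available on the hosts:

# run on each node; /dev/sda (external) and /dev/nvme0n1 (internal NVMe) should show up
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT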
Our goal is to install Portworx to create two different storage pools for each of the storage types – external and internal.
Installing an etcd Cluster
Portworx relies on an etcd database to maintain the state of the storage cluster. The etcd cluster has to exist before Portworx is installed. We will deploy a three-node etcd cluster through the Bitnami Helm Chart.
Since we don’t have any overlay storage configured on the cluster, we will use Local Persistent Volumes to create a PV pointing to the /data/etcd directory on each node. Create this directory on each Worker Node.
sudo mkdir -p /data/etcd
sudo chmod 771 /data/etcd
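If you prefer to do this from your workstation, a small loop over the nodes works too; a sketch that assumes SSH access to the j1-node-X hostnames used below and passwordless sudo:

# create the etcd data directory on every worker node over SSH
for node in j1-node-1 j1-node-2 j1-node-3; do
  ssh "$node" 'sudo mkdir -p /data/etcd && sudo chmod 771 /data/etcd'
done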
The below YAML spec (pv-etcd.yaml) defines the Local PV for each node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-vol-0
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /data/etcd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - j1-node-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-vol-1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /data/etcd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - j1-node-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-vol-2
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /data/etcd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - j1-node-3
Apply the YAML spec to create three Local PVs exclusively associated with each Worker Node of the cluster.
kubectl apply -f pv-etcd.yaml
We will also create a PVC for each of these PVs ahead of time. It’s important to follow the naming convention the etcd StatefulSet expects for its volume claims (data-px-etcd-X in our case). This ensures that the Pods from the StatefulSet use the existing PVCs that are already bound to the PVs.
Let’s create three PVCs bound to these PVs (pvc-etcd.yaml).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-px-etcd-0
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: "etcd-vol-0"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-px-etcd-1
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: "etcd-vol-1"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-px-etcd-2
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: "etcd-vol-2"
kubectl apply -f pvc-etcd.yaml -n kube-system
Make sure that the PVs are created and PVCs from the kube-system Namespace are bound to them.
kubectl get pv
kubectl get pvc -n kube-system
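To double-check the bindings at a glance, you can print each PVC next to the PV it is bound to; an optional verification using plain kubectl:

# every PVC should report Bound and point at its matching etcd-vol-X volume
kubectl get pvc -n kube-system \
  -o custom-columns='NAME:.metadata.name,STATUS:.status.phase,VOLUME:.spec.volumeName'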
With the PVCs in place, we are ready to create the etcd cluster. We will use the Bitnami etcd Chart with Helm 3 for this step.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install px-etcd bitnami/etcd \
  --set statefulset.replicaCount=3 \
  --set auth.rbac.enabled=false \
  --namespace=kube-system
Note that the Helm release name (px-etcd) matches part of the PVC names (data-px-etcd-X). This is important to make sure that the StatefulSet created by the Chart uses the existing PVCs.
The StatefulSet runs three replicas, which ensures that the etcd cluster is highly available.
Verify that the etcd cluster is up and running.
kubectl get pods -l app.kubernetes.io/name=etcd -n=kube-system
The etcd Pods and related objects are deployed in the kube-system Namespace, which is also used by the Portworx deployment.
kubectl get svc -l app.kubernetes.io/name=etcd -n=kube-system
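Optionally, you can verify member health from inside one of the etcd Pods; a quick sketch that assumes the Bitnami image’s bundled etcdctl and the default Pod name px-etcd-0:

# check the health of all etcd members (auth is disabled in this deployment)
kubectl exec -it px-etcd-0 -n kube-system -- etcdctl endpoint health --cluster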
The next step is to install the Portworx storage cluster.
Installing Portworx Storage Cluster
Sign up at Portworx hub to access the Portworx installation wizard. Once logged in, create a new spec to launch the wizard.
The first step is to provide the version of Kubernetes and the details of the etcd cluster. Copy the ClusterIP of the etcd Service in the kube-system Namespace and paste it into the wizard’s etcd textbox. Don’t forget to append the client port of the Service (2379).
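One way to grab that value in the host:port form the wizard expects; a convenience command that assumes the Service created above is named px-etcd and that its client port is named client, as in the Bitnami chart:

# prints something like 10.233.10.70:2379
kubectl get svc px-etcd -n kube-system \
  -o jsonpath='{.spec.clusterIP}:{.spec.ports[?(@.name=="client")].port}'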
In the next step, we will configure the storage environment. Select OnPrem and choose the manually specify disks option. Since our cluster is using /dev/sda and /dev/nvme0n1p1 devices, let’s input these values into the specification generator.
Leave the defaults in the network section and click next.
In the next step, choose None for the Kubernetes distribution and check the Enable CSI checkbox. We will use the CSI-enabled features of Portworx in an upcoming tutorial.
In the last step, give a name to the spec and click on the copy button.
Apply the specification generated by the wizard.
kubectl apply -f 'https://install.portworx.com/2.3?mc=false&kbver=1.17.1&k=etcd%3Ahttp%3A%2F%2F10.233.10.70%3A2379&s=%2Fdev%2Fsda%2C%2Fdev%2Fnvme0n1p1&c=px-cluster-0ffa68c6-34ae-4476-97d6-26888957b329&stork=true&csi=true&lh=true&st=k8s'
In a few minutes, the Pods from the Portworx DaemonSet should be up and running.
kubectl get pods -n kube-system -l name=portworx
The CSI driver is attached as a sidecar to each Pod in the DaemonSet, which is why we see two containers in each Pod.
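You can confirm this by listing the container names inside one of the Portworx Pods; a quick check that relies only on the label used above:

# print the container names of the first Portworx Pod (expect the Portworx and CSI containers)
kubectl get pods -n kube-system -l name=portworx \
  -o jsonpath='{.items[0].spec.containers[*].name}'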
SSH into one of the nodes and check the Portworx cluster status.
pxctl status
The Portworx storage cluster has two pools: one created from the /dev/sda disks and one from the /dev/nvme0n1p1 partitions.
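For a closer look at the pools, pxctl can show per-pool details; an optional step, run on the same node:

# list each storage pool with its drives, capacity, and IO priority
pxctl service pool show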
In the next part of this tutorial, I will demonstrate how to leverage these storage pools to create a shared volume and a high I/O volume to deploy a fault-tolerant CMS workload. Stay tuned!
Portworx is a sponsor of The New Stack.