
Tutorial: Install and Configure Portworx on a Bare-Metal Kubernetes Cluster

How to install and configure a Portworx storage cluster on a three-node Kubernetes cluster running on bare metal (i.e. not a managed Kubernetes service).
Apr 3rd, 2020 12:03pm

We looked at the architecture of Portworx in the last part of this series. In this installment, I will walk you through the steps involved in installing and configuring a Portworx storage cluster on a three-node Kubernetes cluster running on bare metal (i.e. not a managed Kubernetes service).

Exploring the Environment

I recently set up a lab with two bare-metal Kubernetes clusters running on Intel NUC machines. Each cluster runs one master and three Worker Nodes, and the machine configuration is identical across nodes and clusters. Each Intel NUC is powered by an eighth-generation Core i7 CPU, 32GB of RAM, and 256GB of NVMe storage. I have also added 64GB of external storage through the Thunderbolt/USB-C port.

We will install Portworx in one of the two clusters.

Let’s take a look at the storage configuration. The device /dev/sda is the external storage while the device /dev/nvme0n1 represents internal NVMe storage. Every node has the same partitioning scheme and storage configuration.
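A quick way to verify this layout is to run lsblk on each node (the device names below reflect my lab; yours may differ):

# List block devices to confirm the external (/dev/sda) and internal NVMe (/dev/nvme0n1) devices
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT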

Our goal is to install Portworx and create two storage pools, one for each storage type: external and internal.

Installing an etcd Cluster

Portworx relies on an etcd database to maintain the state of the storage cluster. The etcd cluster has to exist before Portworx is installed. We will install a three-node etcd cluster through the Bitnami Helm Chart.

Since we don’t have any overlay storage configured on the cluster, we will use Local Persistent Volumes to create a PV pointing to the /data/etcd directory on each node. Create this directory on each Worker Node.
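For example, on each Worker Node:

# Run on every Worker Node that will host an etcd Pod
sudo mkdir -p /data/etcd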


The YAML spec below (pv-etcd.yaml) defines the Local PV for each node.
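The original spec is not reproduced here, so the snippet below is a minimal sketch of one such Local PV. The PV name (px-etcd-vol-0), capacity, StorageClass name (local-storage), and node name (node-1) are illustrative assumptions; repeat the object for each Worker Node, adjusting the name and hostname.

# Sketch of a Local PV pinned to one Worker Node (names and sizes are illustrative)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: px-etcd-vol-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/etcd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1   # replace with the actual node name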


Apply the YAML spec to create three Local PVs, each exclusively associated with one Worker Node of the cluster.
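Assuming the three PV definitions are saved in pv-etcd.yaml:

kubectl apply -f pv-etcd.yaml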


We will also create a PVC for each of these PVs beforehand. It’s important to use a naming convention that matches the etcd StatefulSet, which ensures that the Pods from the StatefulSet use the existing PVCs that are already bound to the PVs.

Let’s create three PVCs bound to these PVs.
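Here is a sketch of one such PVC in the kube-system Namespace, following the data-px-etcd-<ordinal> naming convention explained below; repeat it for ordinals 0, 1, and 2. The StorageClass name and size mirror the assumptions made in the PV sketch.

# Sketch of a pre-created PVC that the etcd StatefulSet will adopt (data-px-etcd-0/1/2)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-px-etcd-0
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 5Gi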



Make sure that the PVs are created and PVCs from the kube-system Namespace are bound to them.
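For example:

# PVs are cluster-scoped; the PVCs live in the kube-system Namespace
kubectl get pv
kubectl get pvc -n kube-system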



With the PVCs in place, we are ready to create the etcd cluster. We will use the Bitnami etcd Chart with Helm 3 for this step.
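A plausible invocation with Helm 3 and the Bitnami repository is shown below; the value names vary between chart versions, so check the chart’s documentation before running it.

# Add the Bitnami repository and install a three-member etcd cluster into kube-system
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install px-etcd bitnami/etcd \
  --namespace kube-system \
  --set statefulset.replicaCount=3 \
  --set auth.rbac.enabled=false

The release name px-etcd is what makes the generated PVC names (data-px-etcd-0, -1, -2) line up with the claims created above.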



Note that the Helm release name (px-etcd) matches a part of the PVC names (data-px-etcd-X). This is important to make sure that the Chart uses the existing PVCs.

We are creating three Pods for the StatefulSet, which ensures that the etcd cluster is highly available.

Verify that the etcd cluster is up and running.
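One way to check is to list the StatefulSet and its Pods (the label selector assumes the standard Bitnami chart labels):

kubectl get statefulset px-etcd -n kube-system
kubectl get pods -n kube-system -l app.kubernetes.io/name=etcd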


The etcd Pods and related objects are deployed in the kube-system Namespace, which is also used by the Portworx deployment.


The next step is to install the Portworx storage cluster.

Installing Portworx Storage Cluster

Sign up at the Portworx hub to access the Portworx installation wizard. Once logged in, create a new spec to launch the wizard.

The first step is to provide the version of Kubernetes and the details of the etcd cluster. Copy the ClusterIP of etcd service available within the kube-system namespace and paste it in the wizard’s etcd textbox. Don’t forget to append the port of the service.
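The ClusterIP can be read from the Service created by the Helm release (the Service name assumes the px-etcd release; the etcd client port is typically 2379):

kubectl get svc px-etcd -n kube-system

The value entered in the wizard is then the ClusterIP followed by the port, for example <ClusterIP>:2379.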

In the next step, we will configure the storage environment. Select OnPrem and choose the “Manually specify disks” option. Since our cluster is using the /dev/sda and /dev/nvme0n1p1 devices, let’s input these values into the specification generator.

Leave the defaults in the network section and click next.

In the next step, choose None for the Kubernetes distribution and check the Enable CSI checkbox. We will use the CSI-enabled features of Portworx in an upcoming part of this tutorial.

In the last step, give a name to the spec and click on the copy button.

Apply the specification generated by the wizard.
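The wizard produces a kubectl apply command or a downloadable YAML file; applying a saved copy would look something like this (the filename is illustrative):

kubectl apply -f px-spec.yaml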


In a few minutes, the Pods from the Portworx DaemonSet should be up and running.
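You can watch the rollout with the label selector the generated spec applies to the DaemonSet Pods (name=portworx):

kubectl get pods -n kube-system -l name=portworx -o wide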


The CSI driver is attached as a sidecar to each of the Pods in the DaemonSet, which is why we see two containers in each Pod.

SSH into one of the nodes and check the Portworx cluster status.
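Portworx installs the pxctl CLI on every node, typically under /opt/pwx/bin:

# Run on any node that is part of the Portworx cluster
sudo /opt/pwx/bin/pxctl status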


The Portworx storage cluster has two pools created from the /dev/sda disks and /dev/nvme0n1p2 partitions.

In the next part of this tutorial, I will demonstrate how to leverage these storage pools to create a shared volume and a high I/O volume to deploy a fault-tolerant CMS workload. Stay tuned!

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live. And don’t forget to check out our first virtual pancake podcast, April 14, where Janakiram MSV will be a featured speaker.

Portworx is a sponsor of The New Stack.

Feature image by Frank Eiffert on Unsplash.
