Tutorial: Deploy PostgreSQL on Kubernetes Running the OpenEBS Storage Engine

In the last part of this series, I covered the steps to install OpenEBS on the Amazon Elastic Kubernetes Service (Amazon EKS). In this tutorial, we will deploy a highly available instance of PostgreSQL.
Create a Storage Class for PostgreSQL
We will first create a storage class based on the storage pool claim configured in the last tutorial.
Verify that the storage pool claim is available:

```
kubectl get spc
```
Save the following manifest as cstor-sc.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-sc
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
```
The ReplicaCount parameter ensures that the data is written to three replicas, one on each of three nodes, for redundancy.
Apply the manifest to create the storage class:

```
kubectl apply -f cstor-sc.yaml
```
Verify that the storage class has been created:

```
kubectl get sc
```
Deploying PostgreSQL through a Helm Chart
We will now deploy PostgreSQL backed by OpenEBS. By pointing persistence.storageClass to the cStor storage class created in the last step, the deployment will dynamically create a Persistent Volume (PV) and a Persistent Volume Claim (PVC).
First, refresh the Helm repositories:

```
helm repo update
```
```
helm install demo stable/postgresql \
  --set persistence.storageClass=cstor-sc
```
Let’s verify the Pod, PVC, and PV associated with the deployment.
```
kubectl get pods
```

```
kubectl get pvc
```

```
kubectl get pv
```
Creating Test Data
Access the PostgreSQL client to create a test database and a table, and to add a row.
First, let’s retrieve the password from the deployment.
```
export POSTGRES_PASSWORD=$(kubectl get secret --namespace default demo-postgresql \
  -o jsonpath="{.data.postgresql-password}" | base64 --decode)
```
Next, launch a disposable PostgreSQL client Pod and connect to the service:

```
kubectl run pgsql-postgresql-client --rm --tty -i --restart='Never' \
  --namespace default \
  --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 \
  --env="PGPASSWORD=$POSTGRES_PASSWORD" \
  --command -- psql --host demo-postgresql -U postgres -d postgres -p 5432
```
```sql
CREATE DATABASE inventory;
```

```sql
\c inventory
```
```sql
CREATE TABLE products (
    product_no integer,
    name text,
    price numeric
);
```
```sql
INSERT INTO products VALUES (1, 'Cheese', 9.99);
```
```sql
SELECT * FROM products;
```
Simulating a Node Failure
Let's find the node running the PostgreSQL database Pod and cordon it off, which prevents new Pods from being scheduled on it.
```
kubectl get pods -o wide
```
```
kubectl cordon ip-192-168-71-85.ap-south-1.compute.internal
```
The cordoned node now reports a SchedulingDisabled status:

```
kubectl get nodes
```
Finally, we will delete the Pod running on the cordoned node.
```
kubectl delete pod demo-postgresql-0
```
Verifying the Data in the new Pod
As soon as the Pod is deleted, the Kubernetes controller creates a replacement Pod and schedules it on a different node. It cannot land on the original node because scheduling was disabled when we cordoned it.
Even though the PVC has the ReadWriteOnce access mode, meaning it can be mounted for read-write by only a single node at a time, the new Pod can attach to the same PVC: the cStor storage pool abstracts the underlying EBS volumes into a single storage layer, with replicas of the data on multiple nodes. (You can inspect those replicas with `kubectl get cvr -n openebs`, assuming OpenEBS is installed in the openebs namespace.)
Now, let’s connect to the new Pod and check if the data is intact.
```
kubectl run pgsql-postgresql-client --rm --tty -i --restart='Never' \
  --namespace default \
  --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 \
  --env="PGPASSWORD=$POSTGRES_PASSWORD" \
  --command -- psql --host demo-postgresql -U postgres -d postgres -p 5432
```
```sql
\c inventory
SELECT * FROM products;
```
The data is intact even after deleting the Pod and rescheduling it on a different node, which confirms that OpenEBS replication is working properly. Before moving on, make the node schedulable again with `kubectl uncordon <node-name>`.
In the next part of this series, we will see how to take volume snapshots and backup the state of the workload. Stay tuned!
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.