Implement Postgres on Kubernetes with Ondat and SUSE Rancher

We previously explored the challenges of running a public cloud DBaaS and the benefits of running your own Database as a Service (DBaaS) on Kubernetes using SUSE Rancher and Ondat. Now, we're going to take the theory and make it practical.
This article will take you step by step through implementing a PostgreSQL DBaaS using the excellent operator from the folks at Percona. We'll be deploying this on a Rancher Kubernetes Engine (RKE) cluster on DigitalOcean, but the instructions should work on almost any Kubernetes distribution.
Prerequisites
This tutorial was written and tested on macOS and Linux workstations, so we will assume you are using one or the other. In addition, you'll need the following utilities installed locally through your preferred package manager: Terraform, kubectl, Helm and Git.
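If you need to install any of these, a quick sketch using Homebrew on macOS is shown below; on Linux, substitute your distribution's package manager (the Homebrew formula names are the only assumption here):

```bash
# Install the tutorial prerequisites with Homebrew (macOS); package names may differ on Linux.
brew install kubernetes-cli helm git
brew tap hashicorp/tap && brew install hashicorp/tap/terraform
```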
In this tutorial, we will be using DigitalOcean as our cloud provider to provision droplets, using Terraform, that will be used as master and worker nodes for our RKE workload cluster. If you don’t have a DigitalOcean account, you can sign up for one and start your free trial with $200 credit for 60 days.
Lastly, I would also recommend a beverage to go with this tutorial. I’ll be drinking tea, but feel free to substitute it with the drink of your choice.
Once you have met the tutorial prerequisites, are sitting comfortably, and have a nice drink, let’s begin!
Step 1: Provision a SUSE Rancher Server and RKE Cluster on DigitalOcean
Before we go any further, we need an RKE cluster. We could do this with lots of manual keyboard mashing, but that’s tedious; instead, we will use Terraform to automate this process. We’ve already created the Terraform code you’ll need for this tutorial, so let’s go ahead and clone it with the following commands:
```bash
# Clone the repository.
git clone git@github.com:ondat/demos.git

# Navigate into the DigitalOcean demo directory.
cd demos/rke-ondat/digitalocean/
```
- This directory has Terraform configuration files that will be used to provision the following elements:
- a single-node K3s cluster for the Rancher server,
- and a highly available RKE cluster with three master nodes and five worker nodes.
- There is also a Terraform variable file for you called `terraform.tfvars.example`. We want Terraform to use it when we are provisioning our resources, so go ahead and rename it with the following command:
```bash
# Rename "terraform.tfvars.example".
mv terraform.tfvars.example terraform.tfvars
```
- Next, insert your DigitalOcean personal access token and set a password for Rancher. Open the file with your preferred text editor, and find and set the following values:
```hcl
# DigitalOcean API token used to create infrastructure.
do_token = ""

# Admin password to use for Rancher server bootstrap.
rancher_server_admin_password = ""
```
- Once you've saved the `terraform.tfvars` file, initialize Terraform with the following command:
```bash
# Initialize the working directory containing the configuration files.
terraform init
```
- This command downloads the various modules and providers it needs to perform the automation. Next, we will validate our Terraform code and then create a plan. The `terraform plan` command allows us to preview the changes that Terraform will make to your platform when you apply it. Validate and run a plan against the Terraform code using the following commands:
```bash
# Validate the configuration files in the working directory.
terraform validate

# Create an execution plan.
terraform plan
```
- This will produce a list of all the resources that Terraform will create when you `apply` it, and it's important to check that you are happy with what it's about to do. Once you've reviewed the plan, use the following command to apply it:
```bash
# Execute the actions proposed in the plan created earlier.
terraform apply -auto-approve
```
- This might take some time, so this is an excellent point to top up your chosen beverage! Once provisioning is complete, you will see an output similar to the following:
```
Apply complete! Resources: 21 added, 0 changed, 0 destroyed.

Outputs:

rancher_server_node_ip = "xxx.xxx.xxx.xxx"
rancher_server_url = "https://rancher.xxx.xxx.xxx.xxx.sslip.io"
rke_master_node_1_ip = "xxx.xxx.xxx.xxx"
rke_master_node_2_ip = "xxx.xxx.xxx.xxx"
rke_master_node_3_ip = "xxx.xxx.xxx.xxx"
rke_worker_node_1_ip = "xxx.xxx.xxx.xxx"
rke_worker_node_2_ip = "xxx.xxx.xxx.xxx"
rke_worker_node_3_ip = "xxx.xxx.xxx.xxx"
rke_worker_node_4_ip = "xxx.xxx.xxx.xxx"
rke_worker_node_5_ip = "xxx.xxx.xxx.xxx"
```
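If you need any of these values again later, Terraform can re-print them at any time from the same working directory, for example:

```bash
# Re-print a single output value, such as the Rancher server URL.
terraform output rancher_server_url

# Or list all outputs again.
terraform output
```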
- At this point, you should have a fully working Rancher and RKE cluster. Go ahead and test them by first going to the Rancher UI. To access it, copy and paste the value from your `terraform apply` output named `rancher_server_url`. This should open the Rancher login page. To log in, use the username `admin` and the password you set for `rancher_server_admin_password` in the `terraform.tfvars` file earlier.
- Once you've finished admiring the Rancher UI, you can check that your RKE cluster is also working correctly using `kubectl`. First, however, you need your `kubeconfig` file to ensure that `kubectl` knows which cluster to interact with. You can put your `kubeconfig` file in the correct place with the following command:
```bash
# Copy the generated kubeconfig file.
cp kube_config_workload.yaml ~/.kube/config
```
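If you would rather not overwrite an existing `~/.kube/config`, an alternative (purely optional) is to point `kubectl` at the generated file via the `KUBECONFIG` environment variable:

```bash
# Alternative: use the generated kubeconfig without overwriting ~/.kube/config.
export KUBECONFIG="$(pwd)/kube_config_workload.yaml"
kubectl config current-context
```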
- Now, we can inspect the nodes and pods running in our RKE cluster using the following command:
```bash
# Inspect the nodes and pods.
kubectl get nodes
kubectl get pods --all-namespaces
```
- This should bring back a list of master and worker nodes and a few core system pods. You may find that not all your nodes are showing; it can take some time for the nodes to register with the Rancher server, so check back later if you are missing some.
- You can also check the status of the node registrations through the Rancher UI by reviewing the "Cluster Management" tab. Alternatively, periodically execute `kubectl get nodes` until all eight nodes are in a `Ready` state, or watch them register using the command below.
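If you prefer to wait in one go rather than re-running the command, the following optional watch will stream node status changes until you interrupt it:

```bash
# Watch node registrations until all eight nodes appear and report Ready (Ctrl+C to stop).
kubectl get nodes --watch
```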
Step 2: Deploy and Configure Ondat
To install Ondat onto your newly provisioned RKE cluster, log in or create an account on Ondat’s Portal, which will generate the correct installation commands and allow you to register your cluster to get your free Ondat Community Edition license.
- In the Ondat Portal UI Dashboard, select the "Install Ondat on your cluster" option. You will be redirected to the Cluster tab, where you can name your cluster in the Cluster Name text box and select the Rancher option as your Kubernetes distribution.
- You will notice that the portal displays prerequisites that need to be satisfied before moving on to the next page. Be sure to satisfy them, especially the installation of the Local Path Provisioner, which is used to provide storage for Ondat's dedicated `etcd` cluster inside your RKE cluster (it can be installed with the command shown below). When you have met the prerequisites for deploying Ondat, click the Next button to continue to the next step and get the installation commands.
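For reference, the Local Path Provisioner can be installed with a single manifest; the version below is the same one we remove during teardown at the end of this tutorial:

```bash
# Install the Local Path Provisioner used to back Ondat's dedicated etcd cluster.
kubectl apply --filename https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.21/deploy/local-path-storage.yaml
```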
- The next page will provide you with `helm` commands that add the Ondat Helm chart repository to your local index, update the local repository index and then install Ondat onto your RKE cluster. Go ahead and copy the commands, paste them into your terminal and press Enter to execute them.

- The installation will take a few minutes. When it has finished, you can use the following command to check that Ondat resources are running:
```bash
# Inspect Ondat resources that have been created.
kubectl get all --namespace=storageos
kubectl get storageclasses storageos
```
- This will return a list of all the components and resources that Ondat has installed into the cluster.
- Now that you have an Ondat cluster, we want to create an Ondat `StorageClass` and make it the default `StorageClass` for our cluster. We're going to use feature labels to enable replication, encryption at rest and Topology-Aware Placement (TAP):
- First, use node labels to create custom `regions`. Let's create them with the following command:
```bash
# Label the worker nodes to define custom regions for the TAP feature.
kubectl label node rke-ondat-demo-worker-node-1 custom-region=1
kubectl label node rke-ondat-demo-worker-node-2 custom-region=2
kubectl label node rke-ondat-demo-worker-node-3 custom-region=3
kubectl label node rke-ondat-demo-worker-node-4 custom-region=1
kubectl label node rke-ondat-demo-worker-node-5 custom-region=2

# Check that the worker nodes have been labeled successfully.
kubectl describe nodes | grep --context=4 "custom-region"
```
- Next, create a new Ondat `StorageClass`. Note the parameters section; this is where we're setting some crucial configurations, such as the number of replicas and our topology key:
```bash
# Create a customized Ondat StorageClass named "ondat-replication-encryption".
kubectl create --filename -<<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ondat-replication-encryption
provisioner: csi.storageos.com
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: ext4
  storageos.com/replicas: "2"
  storageos.com/encryption: "true"
  storageos.com/topology-aware: "true"
  storageos.com/topology-key: "custom-region"
  csi.storage.k8s.io/secret-name: storageos-api
  csi.storage.k8s.io/secret-namespace: storageos
EOF
```
- Finally, we're going to set our new Ondat `StorageClass` to be the default within our RKE cluster. This ensures that any Kubernetes workload that requests storage gets an Ondat volume by default:
```bash
# Unmark the "local-path" StorageClass from being the default StorageClass for the cluster first.
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# Mark the "ondat-replication-encryption" StorageClass as the new default StorageClass for the cluster.
kubectl patch storageclass ondat-replication-encryption -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Inspect the StorageClass and ensure it's now the default.
kubectl get storageclasses ondat-replication-encryption
```
Step 3: Deploy Percona’s Distribution for PostgreSQL Operator
- In Step 1 and Step 2 we created an RKE cluster, then installed and configured Ondat. Now, we’re going to create the service part of our DBaaS using the PostgreSQL Kubernetes operator developed and maintained by Percona. Use the following command to create a namespace and deploy the operator into it:
```bash
# Create a namespace for the operator.
kubectl create namespace pgo

# Deploy the PostgreSQL operator.
kubectl --namespace pgo apply --filename https://raw.githubusercontent.com/percona/percona-postgresql-operator/main/deploy/operator.yaml

# Inspect that the pod's status is in a "Running" state.
kubectl get pods --namespace=pgo
```
- Once the operator is running, you can create a new PostgreSQL database with some simple Kubernetes native YAML. You can find some excellent examples in the Percona Operator for PostgreSQL GitHub repository, and we’ve also created an example in the code that’s included with this tutorial. Our example will create a PostgreSQL cluster with PGBouncer and PGBadger.
- Run the following command to deploy the PostgreSQL cluster included with our code:
```bash
# Deploy the database cluster.
kubectl --namespace=pgo apply --filename=../../workloads/percona-postgresql/cr.yaml

# Inspect that the resources have been successfully created in the "pgo" namespace.
kubectl get pods --namespace=pgo
```
- You can now create PostgreSQL databases at will, or give developers the ability to include a PostgreSQL database with applications deployed into a Kubernetes cluster, using this operator.
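As an optional sanity check, you can confirm that the database volumes were provisioned from the default Ondat `StorageClass`:

```bash
# List the PersistentVolumeClaims created for the database cluster.
# The STORAGECLASS column should show "ondat-replication-encryption".
kubectl get pvc --namespace=pgo
```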
Step 4: Exploring Ondat’s Replication and Data Encryption Features
- We now have a DBaaS, but we need to ensure that best practices around reliability and security are in place. Ondat handles this for you, and in this section, we’re going to simulate some failures to see it in action, particularly the fast replication capability.
- To make interacting with the Ondat cluster easier, start by deploying the Ondat CLI into the RKE cluster. Use the following command to deploy it:
```bash
kubectl create --filename -<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: storageos-cli
    app.kubernetes.io/component: storageos-cli
    app.kubernetes.io/part-of: storageos
    kind: storageos
  name: storageos-cli
  namespace: storageos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: storageos-cli
  template:
    metadata:
      labels:
        app: storageos-cli
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - while true; do sleep 3600; done
        env:
        - name: STORAGEOS_USERNAME
          valueFrom:
            secretKeyRef:
              name: storageos-api
              key: username
              optional: false
        - name: STORAGEOS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: storageos-api
              key: password
              optional: false
        - name: STORAGEOS_ENDPOINTS
          value: storageos:5705
        image: storageos/cli:v2.9.0
        imagePullPolicy: Always
        name: cli
        ports:
        - containerPort: 5705
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 50m
            memory: 32Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
EOF
```
- To use the CLI we will need to find the CLI pod’s name. Use the following command to get it:
```bash
# Get the Ondat CLI pod name.
kubectl get pods --namespace storageos | grep "storageos-cli"
```
- We're going to use the CLI a fair amount in this section, so copy the pod name somewhere you can get to easily, or capture it in a shell variable as shown below.
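A small optional convenience: since the remaining commands reference the pod name directly, you could instead capture it in a shell variable and substitute it wherever the literal name appears below.

```bash
# Store the Ondat CLI pod name in a variable for reuse.
CLI_POD=$(kubectl get pods --namespace storageos --selector app=storageos-cli --output jsonpath='{.items[0].metadata.name}')
echo "${CLI_POD}"
```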
Ondat Durable Replication
When you created your database back in Step 3, it requested storage from the default `StorageClass` in your cluster, which, thanks to the steps we carried out in Step 2, defaulted to the custom `ondat-replication-encryption` `StorageClass`. Therefore, if you used the default configuration in the example, Ondat has automatically created two replica volumes for each volume requested by the database operator.
- You can review the volumes created by running the following command:
```bash
# Get the volumes in the "pgo" namespace.
kubectl --namespace=storageos exec storageos-cli-899d4d47-hxqdr -- storageos get volumes --namespace=pgo
```
- You should get output similar to the following:
```
NAMESPACE  NAME                                      SIZE     LOCATION                               ATTACHED ON                   REPLICAS  AGE
pgo        pvc-2115e4dc-7c06-4ab6-8d63-7e83b13bfa3b  1.0 GiB  rke-ondat-demo-worker-node-4 (online)  rke-ondat-demo-worker-node-4  2/2       1 hour ago
pgo        pvc-8d4f5388-cb88-4f8a-9ebd-ac421301a807  1.0 GiB  rke-ondat-demo-worker-node-5 (online)  rke-ondat-demo-worker-node-5  2/2       1 hour ago
pgo        pvc-1e2c0153-1470-4f56-9801-b28461e38092  1.0 GiB  rke-ondat-demo-worker-node-1 (online)  rke-ondat-demo-worker-node-1  2/2       1 hour ago
pgo        pvc-cf869899-8660-49af-bc52-d1d5c490f1d1  1.0 GiB  rke-ondat-demo-worker-node-3 (online)  rke-ondat-demo-worker-node-3  2/2       1 hour ago
```
- Copy the name of the first volume; in our example, it is `pvc-2115e4dc-7c06-4ab6-8d63-7e83b13bfa3b`. Now, examine the volume with the Ondat CLI using the following command:
```bash
# Describe the volume.
kubectl --namespace=storageos exec storageos-cli-899d4d47-hxqdr -- storageos describe volume pvc-2115e4dc-7c06-4ab6-8d63-7e83b13bfa3b --namespace=pgo
```
- This will give you a great deal of information about the volume, including the number and location of the replica volumes. The output will look like this:
```
Master:
  ID               7afa9ef5-d5ed-4cdd-a58d-947703fd2c80
  Node             rke-ondat-demo-worker-node-4 (685ad0b4-8db7-42c2-923e-dcca2722743b)
  Health           online
  Topology Domain  1

Replicas:
  ID               a80362f1-a410-4d8c-a963-3399f3ac93bf
  Node             rke-ondat-demo-worker-node-5 (410d90d9-836f-44b7-9e78-3a4f60babe01)
  Health           ready
  Promotable       true
  Topology Domain  2

  ID               dc3c8ec1-de9a-45ff-ab5a-319875189654
  Node             rke-ondat-demo-worker-node-3 (84cba793-1a54-40f6-b7e9-b4e6dd0db9d9)
  Health           ready
  Promotable       true
  Topology Domain  3
```
- This output tells you that each volume replica is deployed on a different node to ensure data protection and high availability if a node experiences a transient failure. This demonstrates how Ondat replicates data to avoid any single point of failure. Now that our data is nicely replicated and our database is working well, let's try to break it. Failures are rarely subtle, so let's go big and delete the Kubernetes node where our master volume resides.
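Optionally, before inducing the failure, you can record how Ondat currently sees the cluster nodes as a baseline to compare against afterwards (this assumes the same CLI pod name used throughout this section):

```bash
# Optional: list the Ondat nodes and their health before deleting anything.
kubectl --namespace=storageos exec storageos-cli-899d4d47-hxqdr -- storageos get nodes
```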
- Using the output of our volume description, we can see which node the master volume is located on; in our case, `rke-ondat-demo-worker-node-4`. Now that we've found our master volume, we will terminate its node with extreme prejudice using the following command:
```bash
# Delete the node with the master volume.
kubectl delete node/rke-ondat-demo-worker-node-4
```
- When `rke-ondat-demo-worker-node-4` goes offline, Ondat will automatically detect that the master volume no longer exists and elect one of the two replica volumes to become the new master volume. It will also create a new replica on a different node to keep the replica volume count defined in the `ondat-replication-encryption` `StorageClass` we created.
- Run the following command to see how Ondat has managed the failure:
```bash
# Get the volumes in the "pgo" namespace.
kubectl --namespace=storageos exec storageos-cli-899d4d47-hxqdr -- storageos get volumes --namespace=pgo
```
- You should see output similar to the following:
```
NAMESPACE  NAME                                      SIZE     LOCATION                               ATTACHED ON                   REPLICAS  AGE
pgo        pvc-2115e4dc-7c06-4ab6-8d63-7e83b13bfa3b  1.0 GiB  rke-ondat-demo-worker-node-3 (online)  rke-ondat-demo-worker-node-5  2/2       1 hour ago
pgo        pvc-8d4f5388-cb88-4f8a-9ebd-ac421301a807  1.0 GiB  rke-ondat-demo-worker-node-5 (online)  rke-ondat-demo-worker-node-5  2/2       1 hour ago
pgo        pvc-1e2c0153-1470-4f56-9801-b28461e38092  1.0 GiB  rke-ondat-demo-worker-node-1 (online)  rke-ondat-demo-worker-node-1  2/2       1 hour ago
pgo        pvc-cf869899-8660-49af-bc52-d1d5c490f1d1  1.0 GiB  rke-ondat-demo-worker-node-3 (online)  rke-ondat-demo-worker-node-3  2/2       1 hour ago
```
- We have the same number of volumes, with exactly the same names, but the main difference is that `pvc-2115e4dc-7c06-4ab6-8d63-7e83b13bfa3b` is now located on a different node. To investigate further, let's describe the same volume from earlier using the Ondat CLI:
```bash
# Describe the same volume again.
kubectl --namespace=storageos exec storageos-cli-899d4d47-hxqdr -- storageos describe volume pvc-2115e4dc-7c06-4ab6-8d63-7e83b13bfa3b --namespace=pgo
```
- We will see the following in the output:
```
Master:
  ID               dc3c8ec1-de9a-45ff-ab5a-319875189654
  Node             rke-ondat-demo-worker-node-3 (84cba793-1a54-40f6-b7e9-b4e6dd0db9d9)
  Health           online
  Topology Domain  3

Replicas:
  ID               a80362f1-a410-4d8c-a963-3399f3ac93bf
  Node             rke-ondat-demo-worker-node-5 (410d90d9-836f-44b7-9e78-3a4f60babe01)
  Health           ready
  Promotable       true
  Topology Domain  2

  ID               ebe46d45-72f2-41ee-8b30-f48b2aa62778
  Node             rke-ondat-demo-worker-node-1 (c15320f6-2948-4a79-9599-4f06d7472f80)
  Health           ready
  Promotable       true
  Topology Domain  1
```
- You can see that Ondat automatically elected the replica volume on node `rke-ondat-demo-worker-node-3` to become the master volume, since node `rke-ondat-demo-worker-node-4` no longer exists in our cluster, and a new replica volume was created on node `rke-ondat-demo-worker-node-1` so that the defined replica volume count remains consistent.
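As an optional extra check, you can also confirm that the PostgreSQL pods themselves rode through the node failure:

```bash
# Confirm the database pods are healthy after the node deletion.
kubectl get pods --namespace=pgo --output=wide
```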
Ondat’s Encryption At Rest
Alongside out-of-the-box encryption of data in transit using Mutual TLS (mTLS) authentication, Ondat also provides the capability to encrypt volume data at rest. This is essential for any self-managed DBaaS, as it ensures that data is secured without needing to trust that the applications themselves are encrypting the data held in the database.
- Encryption at rest can be enabled by default for all volumes by adding the feature label `storageos.com/encryption=true` to your Ondat `StorageClass` parameters, or on a per-volume basis via the `PersistentVolumeClaim` manifest for your application (see the sketch below). This offers the cluster operator the flexibility of having data encrypted at rest by default, or of leaving the decision to individual applications, depending on the needs of the platform.
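As a rough sketch of the per-volume approach, a claim that opts into encryption looks something like the following; the PVC name and namespace here are hypothetical and not part of this tutorial's workloads:

```bash
# Hypothetical example: request an encrypted Ondat volume via a PVC feature label.
kubectl create --filename -<<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-volume-example
  namespace: default
  labels:
    storageos.com/encryption: "true"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ondat-replication-encryption
  resources:
    requests:
      storage: 1Gi
EOF
```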
- To verify that data encryption at rest is working as expected, we will attempt to access some data on an encrypted volume and check that it is unreadable. First, we're going to find which node the volume used by the `cluster1-repl1` deployment is located on, using the following command:
```bash
# Get the node where the "cluster1-repl1" pod is running.
kubectl get pods --namespace=pgo --output=wide | grep "cluster1-repl1"
```
- This gives us the following output, indicating that the volume is on node `rke-ondat-demo-worker-node-1`:

```
cluster1-repl1-574bdbf868-95lch   1/1   Running   0   140m   10.42.1.10   rke-ondat-demo-worker-node-1   <none>   <none>
```
- Next, run a privileged container on that node with the `strings` utility available, so that we can try to read data from a volume located on that node. Use the following commands to run the pod and install the utility:
```bash
# Use "kubectl debug" to temporarily run a privileged container on node "rke-ondat-demo-worker-node-1".
kubectl debug node/rke-ondat-demo-worker-node-1 -it --image=ubuntu:latest

# Install "binutils" in the container to access the "strings" utility.
apt update && apt install --yes binutils
```
- Now, we can use this pod to try to read data from the underlying node. Since it is privileged, it has full permission to read data from the node, which would normally be a surefire way to read the data of pods running on that node.
- Run the following commands to attempt to access the `cluster1-repl1` deployment data, which is stored as blob files on the node under the `/var/lib/storageos/data/` directory:
```bash
# Navigate to where data is being stored as blob files on the node and list the files.
cd /host/var/lib/storageos/data/dev1/
ls -lah

total 1.6G
drwxr-xr-x 2 root root 4.0K Oct 14 18:13 .
drwxr-xr-x 4 root root 4.0K Oct 14 18:13 ..
-rw------- 1 root root  31M Oct 14 18:19 deployment.1aa18fce-6da2-4346-8dc6-4a7fdd4480f0.0.blob
-rw------- 1 root root  31M Oct 14 18:19 deployment.1aa18fce-6da2-4346-8dc6-4a7fdd4480f0.1.blob
-rw------- 1 root root  90M Oct 14 18:23 deployment.28da746d-1cb6-4775-a1e9-fa2b8d7c0809.0.blob
-rw------- 1 root root  90M Oct 14 18:23 deployment.28da746d-1cb6-4775-a1e9-fa2b8d7c0809.1.blob
-rw------- 1 root root  90M Oct 14 18:56 deployment.a74344a4-65e3-45b2-b2b7-d7eafe11dc1d.0.blob
-rw------- 1 root root  90M Oct 14 18:56 deployment.a74344a4-65e3-45b2-b2b7-d7eafe11dc1d.1.blob
-rw------- 1 root root  98M Oct 14 18:25 deployment.ebe46d45-72f2-41ee-8b30-f48b2aa62778.0.blob
-rw------- 1 root root  98M Oct 14 18:25 deployment.ebe46d45-72f2-41ee-8b30-f48b2aa62778.1.blob

# Use the "strings" utility to attempt to read the data in the blob files.
# The output of the command will return multiple strings of random, unreadable characters.
strings deployment.* | head -10
B*hJ
"G7+
BiCM
m}Q]
8|nk
k?>JS
` :>.
Ndz[
Tawg
r`pJ

# Exit from the container.
exit
```
As demonstrated in the test above, despite having privileged access to the underlying node, we are unable to read any data from an Ondat volume because it is encrypted. Whether viewed from a black-hat or white-hat perspective, attackers will be unable to read the data even if they gain access to the nodes running the DBaaS we've created: thanks to Ondat's native data encryption at rest, they cannot decrypt the volume data without the encryption keys.
Conclusion and Tidying Up
This brings us to the end of our tutorial on creating a DBaaS with RKE and Ondat.
Before you leave us to top up your tea, coffee or other delicious drink, follow these instructions to tear down the resources used in this tutorial. DigitalOcean is keenly priced, but if you have finished, it's best not to run up charges by leaving your cluster running. Run the following commands from the directory containing the Terraform configuration files to remove the resources created during this tutorial:
```bash
# Delete the database cluster.
kubectl --namespace pgo delete --filename=../../workloads/percona-postgresql/cr.yaml

# Delete the PostgreSQL operator.
kubectl --namespace pgo delete --filename https://raw.githubusercontent.com/percona/percona-postgresql-operator/main/deploy/operator.yaml

# Delete the "pgo" namespace.
kubectl delete namespace pgo

# Delete the "storageos-cli" deployment.
kubectl delete deployment storageos-cli --namespace storageos

# Remove Ondat from the cluster.
helm uninstall ondat --namespace storageos

# Delete the Local Path Provisioner.
kubectl delete --filename https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.21/deploy/local-path-storage.yaml

# Destroy the environment created by Terraform.
terraform destroy -auto-approve
```
Please feel free to get in touch to learn more about Ondat. We look forward to releasing future tutorials outlining how Ondat can benefit your Kubernetes strategy.