Deploy a MEAN Web App with Google Kubernetes Engine, Portworx

In one of my previous articles, I introduced Portworx, a container-native storage platform. In this tutorial, we will deploy and manage a Node.js web application and a MongoDB database in Google Kubernetes Engine (GKE). To achieve high availability for MongoDB, we will use a Portworx storage cluster deployed on GKE.
Launching a GKE Cluster
Let's launch a three-node GKE cluster based on Ubuntu, with a 50GB SSD disk attached to each node. Replace PROJECT with your own GCP project ID.
export PROJECT='<Your GCP Project ID>'

gcloud container --project $PROJECT clusters create "tns-demo" \
  --zone "asia-south1-a" \
  --username "admin" \
  --cluster-version "1.11.7-gke.4" \
  --machine-type "n1-standard-4" \
  --image-type "UBUNTU" \
  --disk-type "pd-ssd" \
  --disk-size "50" \
  --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --num-nodes "3" \
  --enable-cloud-logging \
  --enable-cloud-monitoring \
  --network "default" \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard
The command below updates kubeconfig with the credentials and endpoint of the new cluster.
gcloud container clusters get-credentials tns-demo \
  --zone asia-south1-a \
  --project $PROJECT
Let's bind the current user to the cluster-admin role.
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)
Verify that the cluster is up and running.
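For example, list the nodes and confirm that all three report a Ready status:

kubectl get nodes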
Installing Portworx Storage Cluster
Portworx is installed as a DaemonSet that runs on each node of the GKE cluster. We can install it by generating the YAML spec through an online spec generator. Visit the Portworx documentation page to get started.
Get the version of Kubernetes with the following command; the spec generator needs to know the exact version of the distribution.
kubectl version --short | awk -Fv '/Server Version: / {print $3}'
Portworx relies on etcd to store the metadata and cluster state. For this demo, we will use the built-in etcd cluster.
Under the Storage tab, choose GKE and populate the information for the spec. We are choosing a 20GB SSD disk as dedicated block storage for Portworx on each node. Since we have three nodes, we get an aggregate of 60GB of raw storage.
Choose defaults for the Network tab and click Next.
In the last tab, choose Google Kubernetes Engine and click Finish.
We are ready to install Portworx based on the generated specification. You can either download the spec or copy it.
Switch to the terminal and run the command copied from the spec generator. It will take a few minutes for the Portworx cluster to come up.
Verify the installation by checking the Portworx Pods running in the kube-system namespace. All of them should be in the Running state.
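For example, you can list the Pods by label and query the cluster state through the pxctl utility inside one of them (the name=portworx label and the /opt/pwx/bin/pxctl path match a default Portworx install, but verify them against your generated spec):

kubectl get pods -n kube-system -l name=portworx

PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n kube-system $PX_POD -- /opt/pwx/bin/pxctl status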
Deploying MongoDB
We will create a StorageClass for Portworx with a replication factor of 3, which ensures that the data of every volume provisioned through it is redundantly available on three nodes.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: px-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
kubectl create -f px-sc.yaml
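Confirm that the StorageClass has been created:

kubectl get sc px-sc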
With the StorageClass in place, we will create a PersistentVolumeClaim (PVC) that will be used by the MongoDB Pod.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-mongo-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: px-sc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
kubectl create -f px-mongo-pvc.yaml
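Because Portworx provisions the volume dynamically, the claim should reach the Bound state within a few seconds:

kubectl get pvc px-mongo-pvc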
The storage backend for the MongoDB Pod is now ready.
Let's go ahead and create the MongoDB Pod. Note the schedulerName: stork attribute in the spec below: Stork is Portworx's storage-aware scheduler, which places the Pod on a node that holds a replica of its volume.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: db
  labels:
    name: mongo
    app: todoapp
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
        app: todoapp
    spec:
      schedulerName: stork
      containers:
      - name: mongo
        image: mongo
        imagePullPolicy: "Always"
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongodb
      volumes:
      - name: mongodb
        persistentVolumeClaim:
          claimName: px-mongo-pvc
kubectl create -f db-pod.yaml
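Wait for the Pod to reach the Running state:

kubectl get pods -l name=mongo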
To make the database Pod accessible to the web application, we will expose it through a ClusterIP-based Service.
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    name: mongo
    app: todoapp
spec:
  selector:
    name: mongo
  type: ClusterIP
  ports:
  - name: db
    port: 27017
    targetPort: 27017
kubectl create -f db-svc.yaml
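To sanity-check that the web tier will be able to reach the database through the Service's DNS name, you can run a throwaway MongoDB client Pod (a quick sketch; mongo-client is just a scratch Pod name, and the db hostname comes from the Service above):

kubectl run -it --rm mongo-client --image=mongo --restart=Never -- mongo --host db --eval 'db.stats()'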
The database backend is now ready. It’s time to deploy the web application.
Deploying Node.js Web Application
The web application is a simple todo task list that persists its state to MongoDB. Create the Deployment for the web app with three replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    name: web
    app: todoapp
spec:
  replicas: 3
  selector:
    matchLabels:
      name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: web
        image: janakiramm/todo
        ports:
        - containerPort: 3000
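Save the spec and create the Deployment. The file name web-pod.yaml below is an assumption, following the naming pattern of the earlier steps.

kubectl create -f web-pod.yaml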
Finally, we will expose the web application through a load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
    app: todoapp
spec:
  selector:
    name: web
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
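Save the Service spec and create it; again, the file name web-svc.yaml is an assumption.

kubectl create -f web-svc.yaml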
Check the Service created for the web app to get the IP address of the load balancer.
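For example:

kubectl get svc web

The EXTERNAL-IP column may show <pending> for a minute or two while GCP provisions the load balancer.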
Accessing that IP address in a browser brings up the todo application UI.
In the next part of this series, I will show you how to perform failover of MongoDB database running within GKE.
Portworx is a sponsor of The New Stack.
Feature image via Pixabay.