Tutorial: Deploying Microservices to Knative Running on Google Kubernetes Engine

In the last article, I introduced Knative as the platform layer of Kubernetes. In this part of the series, let’s take a closer look at Knative Serving, which brings a PaaS-like experience to Kubernetes.
We will deploy two services: a stateful MongoDB service and a stateless web application written in Node.js. While the stateful database backend runs as a typical Kubernetes deployment, the stateless frontend will be packaged and deployed as a Knative service that enjoys capabilities such as scale-to-zero.
Setting up the Environment
We will launch a Google Kubernetes Engine (GKE) cluster with the Istio add-on enabled, which is a prerequisite for Knative.
```shell
export CLUSTER_NAME=mi2-knative
export CLUSTER_ZONE=asia-south1-a
```
```shell
gcloud services enable \
  cloudapis.googleapis.com \
  container.googleapis.com \
  containerregistry.googleapis.com
```
The above commands will enable the appropriate Google Cloud Platform (GCP) APIs.
Let’s launch a GKE cluster.
```shell
gcloud beta container clusters create $CLUSTER_NAME \
  --addons=HorizontalPodAutoscaling,HttpLoadBalancing,Istio \
  --machine-type=n1-standard-4 \
  --cluster-version=latest --zone=$CLUSTER_ZONE \
  --enable-stackdriver-kubernetes --enable-ip-alias \
  --enable-autoscaling --min-nodes=1 --max-nodes=10 \
  --enable-autorepair \
  --scopes cloud-platform
```
```shell
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)
```
The above step will add the current user to the cluster-admin role.
You should now have a three-node GKE cluster with Istio preinstalled.
Installing Knative on GKE
Knative comes as a set of Custom Resource Definitions (CRDs). We will first deploy the CRDs, followed by the rest of the objects.
```shell
kubectl apply --selector knative.dev/crd-install=true \
  --filename https://github.com/knative/serving/releases/download/v0.9.0/serving.yaml \
  --filename https://github.com/knative/eventing/releases/download/v0.9.0/release.yaml \
  --filename https://github.com/knative/serving/releases/download/v0.9.0/monitoring.yaml
```
```shell
kubectl apply \
  --filename https://github.com/knative/serving/releases/download/v0.9.0/serving.yaml \
  --filename https://github.com/knative/eventing/releases/download/v0.9.0/release.yaml \
  --filename https://github.com/knative/serving/releases/download/v0.9.0/monitoring.yaml
```
After a few minutes, Knative Serving will be ready. Wait until you see that all deployments in the knative-serving namespace are ready.
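A quick way to check readiness (an illustrative sanity check, not part of the official install steps) is to list the deployments in that namespace and confirm every one reports all replicas available:

```shell
# All deployments should eventually show READY counts like 1/1
kubectl get deployments -n knative-serving
```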
Deploying MongoDB on GKE
We will launch a single instance of a MongoDB database with the volume configured as an emptyDir. In production, you may want to use GCE Persistent Disks or a more robust storage solution like Portworx.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
  labels:
    name: mongo
    app: todoapp
spec:
  containers:
  - image: mongo
    name: mongo
    ports:
    - name: mongo
      containerPort: 27017
      hostPort: 27017
    volumeMounts:
    - name: mongo-storage
      mountPath: /data/db
  volumes:
  - name: mongo-storage
    emptyDir: {}
```
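For durable storage, the `emptyDir` volume above could be swapped for a PersistentVolumeClaim. A minimal sketch follows; the claim name `mongo-pvc` and the 10Gi size are illustrative assumptions, not part of the original setup:

```yaml
# Hypothetical PVC; on GKE the default StorageClass provisions a GCE persistent disk
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# In the pod spec, the emptyDir volume would then become:
# volumes:
# - name: mongo-storage
#   persistentVolumeClaim:
#     claimName: mongo-pvc
```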
Let’s expose the database pod through a ClusterIP service.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    name: mongo
    app: todoapp
spec:
  selector:
    name: mongo
  type: ClusterIP
  ports:
  - name: db
    port: 27017
    targetPort: 27017
```
```shell
kubectl apply -f db-pod.yml -f db-service.yml
```
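Before moving on, it’s worth verifying that the database pod and its service are up (an illustrative check):

```shell
# The pod should report STATUS Running, and the service should have a ClusterIP
kubectl get pod db
kubectl get svc db
```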
Deploying Web Application as a Knative Service
Let’s package the Node.js frontend as a Knative service and deploy it.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: todo-app
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: janakiramm/todo
```
If you are interested in looking at the source code, clone the GitHub repo.
```shell
kubectl apply -f todo-service.yaml
```
This results in a Knative service exposed via Istio ingress. Let’s explore this further.
```shell
kubectl get kservice
```
Behind the scenes, a Knative service is automatically translated into native Kubernetes objects, including a deployment and a service.
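You can inspect the intermediate objects Knative creates — a configuration, revisions, and a route — with the commands below (a sketch; the exact output columns vary by Knative version):

```shell
# Knative-level objects created for the service
kubectl get configuration,revision,route

# Underlying pods carry a label pointing back to the Knative service
kubectl get pods -l serving.knative.dev/service=todo-app
```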
Accessing the Web Application
Knative services are exposed via the ingress associated with the service mesh. Since we are using Istio, the service can be accessed via the ingress gateway.
The below commands will help you get the public IP address of the ingress gateway.
```shell
INGRESSGATEWAY=istio-ingressgateway
kubectl get svc $INGRESSGATEWAY --namespace istio-system
```
```shell
export IP_ADDRESS=$(kubectl get svc $INGRESSGATEWAY --namespace istio-system \
  --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
```
Since the routing happens through the HTTP host header, we can simulate it by adding an entry to the /etc/hosts file. Replace the IP address below with that of your Istio ingress gateway.
```
34.93.238.29 todo-app.default.example.com
```
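Alternatively, instead of editing /etc/hosts, you can pass the host header directly on the command line (assuming the IP_ADDRESS variable exported earlier):

```shell
# Istio routes the request to the Knative service based on the Host header
curl -H "Host: todo-app.default.example.com" http://$IP_ADDRESS/
```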
Hitting the URL in a browser shows the web app.
Exploring the Service Further
Accessing an app deployed as a Knative service is no different from other Kubernetes workloads.
The key advantage of taking the Knative Serving route is to get the benefits of auto-scale with no additional configuration.
After a period of inactivity, Knative Serving will automatically terminate the pods, freeing up cluster resources. The moment the service is accessed again, a new pod is automatically scheduled. Similarly, when there is a spike in traffic, additional pods are launched automatically.
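If you need to tune this behavior, Knative Serving supports autoscaling annotations on the revision template. A sketch is shown below; the values are illustrative, and setting minScale keeps a pod warm (effectively disabling scale-to-zero), while maxScale caps growth under a traffic spike:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: todo-app
spec:
  template:
    metadata:
      annotations:
        # Keep at least one pod running to avoid cold starts
        autoscaling.knative.dev/minScale: "1"
        # Never scale beyond ten pods
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
      - image: janakiramm/todo
```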
You can see this in action by watching the pods. After approximately a minute, Knative Serving will terminate an inactive pod. Refreshing the browser will result in the creation of a new pod.
```shell
kubectl get pods --watch
```
It took three seconds for Knative to spin up a new pod to serve the request. After 60 seconds of inactivity, the pod was terminated again.
Summary
Knative Serving brings PaaS experience to Kubernetes by enabling developers to deploy and scale container images without dealing with the underlying primitives.
In this tutorial, we have seen how to deploy a data-driven web application as a Knative service that talks to a stateful MongoDB pod.
Knative Eventing is one of the two building blocks of Knative. In the next tutorial, I will walk you through the steps involved in integrating Google Cloud Pub/Sub with event-driven applications running in Kubernetes. Stay tuned.
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.
Photo by Michael Jasmund on Unsplash.