
Tutorial: Deploying Microservices to Knative Running on Google Kubernetes Engine

In this tutorial, we learn how to deploy a data-driven web application as a Knative service that talks to a stateful MongoDB pod.
Nov 8th, 2019 8:06am by Janakiram MSV

In the last article, I introduced Knative as the platform layer of Kubernetes. In this part of the series, let’s take a closer look at Knative Serving, which brings a PaaS-like experience to Kubernetes.

We will deploy two services: a stateful MongoDB service and a stateless web application written in Node.js. While the stateful database backend runs as a typical Kubernetes deployment, the stateless frontend will be packaged and deployed as a Knative service that enjoys capabilities such as scale-to-zero.

Setting up the Environment

We will launch a Google Kubernetes Engine (GKE) cluster with Istio addon enabled. This is a prerequisite for Knative.
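
The commands below are a minimal sketch of the setup; the project ID is a placeholder and the list of APIs is an assumption, so substitute your own values.

# Point gcloud at your GCP project (replace with your project ID)
gcloud config set project your-project-id

# Enable the APIs needed for GKE and the container registry
gcloud services enable container.googleapis.com containerregistry.googleapis.com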



The above commands will enable the appropriate Google Cloud Platform (GCP) APIs.

Let’s launch a GKE cluster.
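
The following is a sketch of the cluster creation step; the cluster name (knative), zone, and machine type are assumptions you can change. The Istio addon requires the beta track of the gcloud CLI.

# Create a three-node GKE cluster with the Istio addon enabled
gcloud beta container clusters create knative \
  --zone us-central1-a \
  --machine-type n1-standard-2 \
  --num-nodes 3 \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,Istio \
  --cluster-version latest

# Grant the current user cluster-admin privileges, needed to install Knative
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)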



The final command above adds the current user to the cluster-admin role.

You should now have a three-node GKE cluster with Istio preinstalled.

Installing Knative on GKE

Knative comes as a set of Custom Resource Definitions (CRDs). We will first deploy the CRDs, followed by the rest of the objects.
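
Here is a sketch of the installation, assuming Knative Serving v0.9.0, a current release at the time of writing; the version and release URL are assumptions, so adjust them to the release you want to install.

# Install the Knative Serving CRDs first
kubectl apply --selector knative.dev/crd-install=true \
  -f https://github.com/knative/serving/releases/download/v0.9.0/serving.yaml

# Install the rest of the Knative Serving components
kubectl apply \
  -f https://github.com/knative/serving/releases/download/v0.9.0/serving.yaml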



After a few minutes, Knative Serving will be ready. Wait until you see that all deployments in the knative-serving namespace are ready.
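
You can track the rollout with a simple watch, for example:

# Watch the Knative Serving deployments until they all report as available
kubectl get deployments -n knative-serving --watch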

Deploying MongoDB on GKE

We will launch a single instance of a MongoDB database with the volume configured as an emptyDir. In production, you may want to use GCE Persistent Disks or a more robust storage solution like Portworx.
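
Below is a minimal sketch of such a deployment; the resource name (mongodb), the app label, and the mongo image tag are assumptions.

# Deploy a single MongoDB instance backed by an emptyDir volume
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:4.2
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
      volumes:
      - name: data
        emptyDir: {}
EOF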


Let’s expose the database pod through a ClusterIP service.
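
A sketch of the service, assuming the mongodb name and app label used above:

# Expose MongoDB inside the cluster on the default port 27017
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  type: ClusterIP
  selector:
    app: mongodb
  ports:
  - port: 27017
    targetPort: 27017
EOF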



Deploying Web Application as a Knative Service

Let’s package the Node.js frontend as a Knative service and deploy it.
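
The manifest below is a sketch; the service name (todo-app), the container image, and the DBHOST environment variable are assumptions based on the application expecting the MongoDB hostname from its environment.

# Deploy the Node.js frontend as a Knative service
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: todo-app
spec:
  template:
    spec:
      containers:
      - image: gcr.io/your-project-id/todo-app:latest
        ports:
        - containerPort: 8080
        env:
        - name: DBHOST
          value: mongodb
EOF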


If you are interested in looking at the source code, clone the GitHub repo.


This results in a Knative service exposed via Istio ingress. Let’s explore this further.
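
To see the objects Knative created, assuming the todo-app service name used above:

# The Knative service, its revision, and the generated route
kubectl get ksvc,revision,route

# The underlying Kubernetes deployment, pods, and services
kubectl get deployments,pods,services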


A Knative service automatically gets translated into a Kubernetes pod and service.

Accessing the Web Application

Knative services are exposed via the ingress associated with the service mesh. Since we are using Istio, the service can be accessed via the ingress gateway.

The commands below will help you get the public IP address of the ingress gateway.
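
A sketch of retrieving the address, assuming the Istio addon's ingress gateway lives in the istio-system namespace and the Knative service is named todo-app:

# Public IP of the Istio ingress gateway
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Hostname assigned to the Knative service
kubectl get ksvc todo-app -o jsonpath='{.status.url}'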



Since the routing happens through the HTTP host header, we can simulate it by adding an entry to the /etc/hosts file. The IP address reflects the ingress gateway of Istio.
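
For example, with Knative's default example.com domain and the assumed todo-app service name, the entry would look like the line below; replace the placeholder with the IP retrieved above.

# /etc/hosts entry mapping the Knative hostname to the ingress IP
<INGRESS-IP>   todo-app.default.example.com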


Hitting the URL in a browser shows the web app.

Exploring the Service Further

Accessing an app deployed as a Knative service is no different from other Kubernetes workloads.

The key advantage of taking the Knative Serving route is getting autoscaling, including scale-to-zero, with no additional configuration.

After a period of inactivity, Knative Serving automatically terminates the pods, which frees up cluster resources. The moment the service is accessed again, a new pod gets scheduled. Similarly, when there is a spike in traffic, additional pods are launched automatically.

You can see this in action by watching the pods. After approximately a minute, Knative Serving will kill an inactive pod. Refreshing the browser will result in the creation of a new pod.
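
For example:

# Watch pods being created and terminated as traffic comes and goes
kubectl get pods --watch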


It took three seconds for Knative to spin up a new pod to serve the request. After 60 seconds of inactivity, the same pod gets terminated.

Summary

Knative Serving brings a PaaS-like experience to Kubernetes by enabling developers to deploy and scale container images without dealing with the underlying primitives.

In this tutorial, we have seen how to deploy a data-driven web application as a Knative service that talks to a stateful MongoDB pod.

Knative Eventing is one of the two building blocks of Knative. In the next tutorial, I will walk you through the steps involved in integrating Google Cloud Pub/Sub with event-driven applications running in Kubernetes. Stay tuned.

Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.

Photo by Michael Jasmund on Unsplash.
