How Google Cloud Run Combines Serverless with Containers
One of the most exciting launches from Google Cloud Next 2019 is Cloud Run, a serverless environment based on containers and Kubernetes.
Cloud Run is best described as a clusterless, serverless container execution environment. Some attendees even compared it to AWS Fargate and Azure Container Instances, but Google Cloud Run takes a different approach from the other serverless container platforms.
Let’s take a closer look at this technology.
The Knative Connection
At last year’s Next event, Google announced Knative, an open source platform built on top of Kubernetes and Istio. The project is developed in collaboration with IBM, Red Hat, Pivotal, and SAP.
At a high level, Knative adds an abstraction layer that simplifies the workflow involved in deploying code to Kubernetes.
Since Kubernetes is an infrastructure layer, developers typically work with the operations team to deploy and scale apps. Knative removes this dependency by enabling developers to target Kubernetes directly.
Knative provides a set of building blocks for building serverless platforms on Kubernetes. But dealing with it directly doesn’t make developers efficient or productive. While it acts as the meta-platform running on the core Kubernetes infrastructure, the developer tooling and workflow are left to the platform providers.
Knative has three core elements:
- Build: This component is responsible for generating a container image from source code.
- Serving: This component simplifies exposing an app without having to configure resources such as ingress and a load balancer.
- Eventing: Eventing provides a mechanism to consume and produce events in a pub/sub style. This is useful for invoking code based on external or internal events.
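To make the Serving building block concrete, here is a minimal sketch of a Knative Service manifest applied with kubectl. The service name `hello` and the image path are hypothetical placeholders, and the API version shown is the one Knative later stabilized on (early releases used `v1alpha1`).

```shell
# Deploy a minimal Knative Service; the Serving component creates the
# Deployment, routing, and autoscaling behind it automatically.
# "hello" and the image path are placeholder names, not real resources.
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello:latest
EOF
```

Knative then exposes the service at a URL and scales the underlying pods, including down to zero, based on traffic.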
The key advantage of using Knative is scale-to-zero: the runtime schedules and terminates pods based on inbound traffic. If there are no active clients, a microservice may automatically be scaled down to zero pods, and it scales back up when traffic arrives.
Knative can be deployed on any Kubernetes cluster. It acts as the middleware bridging the gap between core infrastructure services and developer experience.
Cloud Run — Google’s Own Implementation of Knative
Google is one of the first public cloud providers to deliver a commercial service based on the open source Knative project. Just as it offered a managed Kubernetes service before any other provider, Google has moved fast in exposing Knative to developers through Cloud Run.
Cloud Run is a layer that Google built on top of Knative to simplify deploying serverless applications on the Google Cloud Platform.
Cloud Run is an abstraction layer on top of Kubernetes and Knative that makes the platform accessible to developers.
What Problem Does Cloud Run Solve?
Though containers have become the de facto standard for packaging code, and Kubernetes the de facto platform for deploying apps, the workflow involves multiple steps.
From committing the code to accessing the running app, the developer has to go through the following workflow:
- Create a Dockerfile with all the dependencies and installation steps,
- Build the container image from the Dockerfile,
- Push the image to the container registry,
- Create a Kubernetes YAML file for the Deployment with the container,
- Create another YAML file for exposing the Deployment as a Service,
- Deploy the Pod and Service,
- Access the App through the endpoint.
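The manual workflow above can be sketched as a sequence of commands. The project and image names are hypothetical, and the YAML files are assumed to contain standard Deployment and Service definitions.

```shell
# Traditional path from code to a running app on Kubernetes.
# "my-project" and "my-app" are placeholder names.
docker build -t gcr.io/my-project/my-app:v1 .   # build the image from the Dockerfile
docker push gcr.io/my-project/my-app:v1         # push the image to the container registry
kubectl apply -f deployment.yaml                # create the Deployment
kubectl apply -f service.yaml                   # expose the Deployment as a Service
kubectl get service my-app                      # find the endpoint to access the app
```

Every one of these steps is a place where developers and the operations team have to coordinate, which is exactly the friction Cloud Run targets.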
Google Cloud Run shortens the path between committing the code and accessing the app. With the help of tools such as Cloud Code, Skaffold and Jib, developers can write code, test it locally and deploy the code to Kubernetes without the intervention of DevOps teams.
Flavors of Cloud Run
Currently in beta, Google Cloud Run is available as a standalone environment and within the Google Kubernetes Engine (GKE).
Developers can deploy apps to Cloud Run through the console or the CLI. If there is a GKE cluster with Istio installed, apps targeting Cloud Run can be easily deployed to that existing Kubernetes cluster.
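As a rough sketch, deploying a container to the fully managed Cloud Run environment collapses to a single command. The service and image names here are placeholders, and in the 2019 beta the command was surfaced under `gcloud beta run`.

```shell
# One-step deploy to the managed Cloud Run environment.
# "my-app" and the image path are hypothetical placeholders.
gcloud run deploy my-app \
  --image gcr.io/my-project/my-app:v1 \
  --region us-central1 \
  --allow-unauthenticated
# gcloud prints the HTTPS URL of the service once the deploy finishes.
```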
Each deployment to a service creates a revision. A revision consists of a specific container image, along with environment settings such as environment variables, memory limits, or concurrency value.
Requests are automatically routed as soon as possible to the latest healthy service revision.
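Revisions can also be inspected from the CLI. A minimal example, assuming a service named `my-app` (in the original beta these commands lived under `gcloud beta run`):

```shell
# List the immutable revisions created by successive deployments
# of the (hypothetical) service "my-app".
gcloud run revisions list --service my-app --region us-central1
```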
Cloud Run — PaaS Done Right
My initial observation is that Cloud Run delivers the same promise as the original PaaS. The fundamental difference between PaaS and Cloud Run lies in transparency. Since the underlying layer is based on Knative, every step can be easily mapped to the functionality of Istio and Kubernetes.
When Cloud Run becomes available on GKE On-Prem and Anthos, customers will be able to target a consistent platform with a repeatable workflow to deploy modern applications.
In one of the upcoming articles, I will cover the steps to deploy apps on Cloud Run. Stay tuned.
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2)” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar to learn how to use Azure IoT Edge.