How Google Turned Kubernetes into a Control Plane to Manage GCP Resources

Almost a year ago, I wrote an article highlighting the transformation of Kubernetes into a universal control plane. The cloud native community has been making steady progress in that direction.
The maturity of Custom Resource Definitions (CRDs) made it possible to bring external resource management into the Kubernetes fold. The Virtual Kubelet project from Microsoft attempted to bridge the gap between the Kubernetes control plane and external resource schedulers such as IoT Hub and Container Instances. KubeVirt enabled the orchestration of VMs through the Kubernetes scheduler and controllers.
Google is making a big push to put Kubernetes front and center in Google Cloud Platform (GCP). Its hybrid cloud strategy, based on Anthos, revolves around Kubernetes. Migrate for Anthos moves and converts workloads directly into containers that run in Google Kubernetes Engine (GKE).
With so much investment in Kubernetes Engine and related products, Google wants GKE to be the preferred management layer for both cloud native and traditional operations. Config Connector, a recently launched Kubernetes add-on, is the latest step in that direction: it makes GCP resources first-class citizens in the cloud native world. Check out my tutorial from last week, where I demonstrated how to install and use Config Connector to manage GCP resources from Minikube.
Even though Config Connector is designed for GKE, it can be easily installed in any Kubernetes environment. I could use Minikube running on my dev machine as the control plane to configure and provision a Cloud SQL instance in GCP.
While other cloud providers such as Azure and AWS rely on the Open Service Broker API to connect cloud resources to Kubernetes, Google has deprecated its service broker in favor of Config Connector.
Config Connector takes advantage of CRDs to register custom objects that map to a variety of GCP resources. Each GCP service, such as Cloud Spanner, Cloud SQL, or Cloud Pub/Sub, is exposed as a custom resource definition that can be treated like any other Kubernetes object. The familiar kubectl tool can be used to manipulate these objects.
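For example, once Config Connector is installed, the Cloud SQL types appear alongside the built-in API resources and respond to the usual kubectl verbs:

# list the Cloud SQL resource types registered by Config Connector
kubectl api-resources --api-group=sql.cnrm.cloud.google.com
# query them like any other Kubernetes object
kubectl get sqlinstances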
The way Config Connector takes advantage of GCP primitives such as service accounts combined with Kubernetes primitives of role-based access control (RBAC) and secrets is fascinating. The steps below explain the workflow involved in registering GCP resources with Kubernetes:
- An IAM Service Account with role Owner is created in GCP
- The Service Account key (a JSON file) is registered with Kubernetes as a Secret (see the commands after this list)
- Config Connector is installed as a set of CRDs in a dedicated Kubernetes namespace
- A new namespace that matches the name of the GCP project is created in Kubernetes
- GCP resources mapped to CRDs are defined in a YAML file and created through kubectl
- Additional Roles and Role Bindings in Kubernetes may be created to allow or restrict access to GCP resources
- If a resource depends on other resources, they can be referenced in the YAML definition
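As a concrete sketch of the first three steps, the commands below create the service account, grant it the Owner role, download its key, and register the key as a Kubernetes secret. The account name, project ID, and secret name are placeholders; the secret name and namespace must match what your Config Connector installation expects:

# create the service account and grant it the Owner role
gcloud iam service-accounts create cnrm-system
gcloud projects add-iam-policy-binding PROJECT_ID --member serviceAccount:cnrm-system@PROJECT_ID.iam.gserviceaccount.com --role roles/owner
# download the key and register it with Kubernetes as a secret
gcloud iam service-accounts keys create key.json --iam-account cnrm-system@PROJECT_ID.iam.gserviceaccount.com
kubectl create secret generic gcp-key --from-file key.json --namespace cnrm-system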
The IAM Owner role and the associated Service Account key provide the required permissions for external applications to talk to GCP. When the key is registered with Kubernetes as a secret, the Config Connector controllers use it to call the APIs behind each GCP resource. This secret is the critical link that acts as the conduit between the Kubernetes and GCP control planes.
Google created a set of CRDs that map to key GCP services such as Cloud Storage, Cloud SQL, Cloud Spanner, BigQuery, and even GKE. The YAML files that register the CRDs can be downloaded and deployed in any Kubernetes cluster through kubectl.
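At the time of writing, the installation bundle is distributed through a public Cloud Storage bucket; the path below comes from the documentation and may change, so treat it as illustrative:

# download and unpack the Config Connector release bundle
gsutil cp gs://cnrm/latest/release-bundle.tar.gz release-bundle.tar.gz
tar zxvf release-bundle.tar.gz
# install the CRDs and controllers into the cluster
kubectl apply -f install-bundle-gcp-identity/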
We can list all the registered CRDs with the command below:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true
Google enforces a convention of matching the Kubernetes namespace with the GCP project ID. If you don't want to create a new namespace, you can also annotate an existing namespace, which enables Config Connector to create objects in the designated namespace.
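For example, to point the default namespace at a project (PROJECT_ID is a placeholder):

kubectl annotate namespace default cnrm.cloud.google.com/project-id=PROJECT_ID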
Once all the above steps are completed, we can create YAML files that contain the definitions of GCP resources. The YAML below creates a Cloud SQL instance called storedb-instance-1 in the us-central1 region. You can see how parameters such as region and tier are passed from the YAML file.
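A minimal sketch of such a manifest, based on the SQLInstance CRD that Config Connector registers (the database version and tier values are assumptions; pick the ones you need):

apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: storedb-instance-1
spec:
  region: us-central1
  databaseVersion: MYSQL_5_7      # assumed; any supported version works
  settings:
    tier: db-n1-standard-1        # assumed machine tier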
Similar to how Kubernetes objects are modified with kubectl apply, GCP resources can also be updated. For example, you can modify the YAML file to change the zone of the SQL database instance and apply the new definition. This action calls the corresponding Cloud SQL API to move the database to the new zone.
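For instance, assuming the manifest above is saved as sql-instance.yaml, moving the instance to another zone (the zone value here is illustrative) is a matter of editing the settings and re-applying:

spec:
  settings:
    locationPreference:
      zone: us-central1-b         # mirrors the Cloud SQL locationPreference setting

kubectl apply -f sql-instance.yaml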
Once the database instance is up and running, we need to create a user to access the database. Config Connector allows objects to declare dependencies by referring to existing resources. The YAML below creates a database user for the Cloud SQL instance defined above.
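A sketch of the user manifest, based on the SQLUser CRD; the user name and the secret holding the password are hypothetical, and the instanceRef field is what expresses the dependency on the instance created earlier:

apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLUser
metadata:
  name: storedb-user                      # hypothetical user name
spec:
  instanceRef:
    name: storedb-instance-1              # reference to the instance defined above
  password:
    valueFrom:
      secretKeyRef:
        name: storedb-user-credentials    # hypothetical secret with the password
        key: password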