Tutorial: GitOps in Multicluster Environments with Anthos Config Management

In the third part of this series, we will use GitOps-style deployment to push workloads across all registered clusters through Anthos Config Management (ACM).
GitOps encourages maintaining configuration-as-code and environment-as-code in a central source code repository. This gives us a chance to version control configuration and environments along with source code.
Since Kubernetes uses YAML or JSON files for specifications, it becomes easy to combine these artifacts with code.
Google built a tool called Config Sync, which acts as the bridge between an external source code repository and the Kubernetes API server. Anthos Config Management builds on Config Sync, extending it to multicluster scenarios.
In this tutorial, we will use a GitHub repository that acts as a single source of truth for deployments and configuration. A component of ACM is installed into each of the registered Anthos clusters to monitor the external repository for changes and synchronize them with the cluster.
ACM ensures that all the clusters have the same state as defined by the specifications in the repository.
ACM supports both structured and unstructured repositories for configuration. A structured repo follows a defined hierarchy for namespaces and cluster-wide resources. An unstructured repo can hold ad hoc configuration in one or more YAML files, each consisting of multiple Kubernetes objects. Unstructured repositories are helpful when you want to render Helm charts and apply the output via ACM.
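For reference, a structured repo (such as the foo-corp sample used later in this tutorial) roughly follows the layout sketched below; the directory roles are summarized from ACM's conventions:

foo-corp/
├── system/           # repo metadata, such as the repo version
├── clusterregistry/  # Cluster and ClusterSelector records
├── cluster/          # cluster-scoped objects (e.g., ClusterRoles)
└── namespaces/       # namespace-scoped objects, nested per namespace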
Let’s get started with the steps to implement GitOps with ACM.
Installing the Configuration Management Operator
The Config Management Operator is a controller that manages the installation of Anthos Config Management in a Kubernetes cluster.
We need to install this operator in all three clusters: GKE, EKS, and AKS.
Download the YAML spec for the operator from Google Cloud Storage and apply it to each cluster.
gsutil cp gs://config-management-release/released/latest/config-management-operator.yaml config-management-operator.yaml
kubectl apply -f config-management-operator.yaml
Make sure you run this command on all the clusters.
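To avoid switching contexts by hand, a quick loop like the sketch below can apply the manifest everywhere. The context names gke, eks, and aks are assumptions based on how the clusters were registered earlier in this series; substitute the names shown by kubectx.

# Context names are illustrative; list yours with `kubectx`
for ctx in gke eks aks; do
  kubectx "$ctx"
  kubectl apply -f config-management-operator.yaml
done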
Verify the creation of the operator with the below command:
kubectl describe crds configmanagements.configmanagement.gke.io
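If that output is too verbose, a simpler sanity check is to confirm the CRD is listed at all:

kubectl get crds | grep configmanagement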
Configuring Clusters for ACM
Google ships a nifty utility called nomos that can be used to manage ACM. Download it and add it to your PATH. The commands below work on macOS.
gsutil cp gs://config-management-release/released/latest/darwin_amd64/nomos nomos
cp ./nomos /usr/local/bin
chmod +x /usr/local/bin/nomos
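To confirm the binary is installed correctly, nomos ships a version subcommand, which should print the CLI version (and, once ACM is installed, the version running on each cluster):

nomos version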
Now is the time to connect the GitHub repository with ACM. We will use a sample repo from Google Cloud documentation. Feel free to explore the structure of the repo.
Since we are using a public repo, we can set the secretType to none.
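If you later point ACM at a private repository, secretType would instead be set to a value such as ssh, with credentials stored in a Secret named git-creds in the config-management-system namespace. A hedged sketch, where the key path is a placeholder:

# Only needed for private repos; /path/to/key is a placeholder
kubectl create secret generic git-creds \
  --namespace=config-management-system \
  --from-file=ssh=/path/to/key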
Create the below YAML file for each cluster by replacing the clusterName with the registered cluster name in Anthos.
To get the names of registered clusters, run the below command:
gcloud container hub memberships list
Apply the below spec on each cluster. Make sure that the clusterName is accurate.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  # clusterName is required and must be unique among all managed clusters
  clusterName:
  git:
    syncRepo: https://github.com/GoogleCloudPlatform/csp-config-management/
    syncBranch: 1.0.0
    secretType: none
    policyDir: "foo-corp"
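One way to generate the three per-cluster files is a small sed loop. This sketch assumes the membership names are gke, eks, and aks, and that the template above is saved as config-management.yaml; adjust both to match your environment.

# Membership names are assumptions; use the output of
# `gcloud container hub memberships list`
for c in gke eks aks; do
  sed "s/clusterName:.*/clusterName: $c/" config-management.yaml \
    > "config-management-$c.yaml"
done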
kubectx gke
kubectl apply -f config-management-gke.yaml

kubectx eks
kubectl apply -f config-management-eks.yaml

kubectx aks
kubectl apply -f config-management-aks.yaml
If the command succeeds, Kubernetes updates the Config Management Operator on each cluster to begin syncing the cluster’s configuration from the repository. To verify that the Config Management Operator is running, list all Pods running in the config-management-system namespace:
kubectl get pods -n config-management-system
The pods listed above are responsible for monitoring the repo and applying the changes to the Kubernetes API server.
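Rather than polling manually, kubectl wait can block until the pods are ready; the timeout value below is arbitrary.

kubectl wait --for=condition=Ready pods --all \
  -n config-management-system --timeout=120s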
We can use the nomos utility to check the status of the clusters. Run the below command to see if all the clusters are in sync.
nomos status
Exploring the Clusters and Repo
The foo-corp repo includes configs in the cluster/ and namespaces/ directories. These configs are applied as soon as the Config Management Operator is configured to read from the repo.
All objects managed by Anthos Config Management have the app.kubernetes.io/managed-by label set to configmanagement.gke.io.
List namespaces managed by Anthos Config Management:
kubectl get ns -l app.kubernetes.io/managed-by=configmanagement.gke.io
Switch the context to each cluster and check the namespaces. All of them have the same configuration.
The namespaces shown here are defined in the GitHub repo under the foo-corp/namespaces folder.
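A quick way to compare all three clusters side by side, again assuming the context names gke, eks, and aks:

for ctx in gke eks aks; do
  echo "--- $ctx ---"
  kubectx "$ctx"
  kubectl get ns -l app.kubernetes.io/managed-by=configmanagement.gke.io
done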
We will test what happens when one of the clusters deviates from the configuration specified in the repo.
Let’s delete the audit namespace in one of the registered clusters.
kubectx eks
kubectl delete namespace audit
The namespace gets deleted immediately, but within a few seconds, it reappears as ACM restores the desired state.
kubectl get ns audit
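To see the reconciliation happen in real time, you can delete the namespace again while streaming its state with the --watch flag:

kubectl get ns audit --watch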
With ACM, Anthos can ensure that each registered cluster has the desired state of the configuration by constantly syncing it with the ACM repo. In many ways, this mimics the workflow of a Kubernetes ReplicaSet controller maintaining the desired number of replicas at any given time.
Anthos Config Management is one of the core building blocks of Anthos, enabling centralized configuration of cluster states.
In the next part of this series, we will explore how to configure an Amazon EKS cluster to deploy “click to deploy” Kubernetes apps from the GCP Marketplace. Stay tuned!