Google’s CSP Config Management Handles Cluster Sprawl with Configuration-as-Code
Last week, Google introduced Cloud Services Platform (CSP), its combination of Google Kubernetes Engine (GKE), GKE On-Prem, and a new management tool called CSP Config Management, which together bring a managed Kubernetes experience to the hybrid cloud. The platform allows enterprises that aren’t yet ready to make the leap to the cloud to use Kubernetes in much the same way they might with any number of hosted solutions.
CSP Config Management is one of the tools that binds these various environments together from a configuration perspective, and now CSP product manager John Murray has offered further details in a blog post on exactly how this beta tool works to help you “strengthen security and maintain compliance across all your clusters, while still helping developers move fast.”
At the core of CSP Config Management lies the ability to “create a common configuration for all your administrative policies and apply it to all your clusters, at the same time,” and the blog post offers a bullet-point review of the tool’s key offerings. These include central management of Istio and security policies; a quick startup time with “a multicluster namespace with common RBAC policies and other access control rules”; compliance enforcement “by preventing configuration drift through continuous monitoring of the cluster state”; and the addition of source control to your Kubernetes configuration with Git.
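That “multicluster namespace with common RBAC policies” is, in practice, built from ordinary Kubernetes objects checked into the config repository. As an illustrative sketch only (the directory layout, namespace name, and group name here are hypothetical, not necessarily CSP’s actual repo structure), a shared namespace plus a common access-control rule might look like this:

```yaml
# Hypothetical repo layout: files under namespaces/<name>/ hold config
# that the management tool would sync to every cluster in the fleet.
#
# namespaces/store/namespace.yaml -- the namespace itself
apiVersion: v1
kind: Namespace
metadata:
  name: store
---
# namespaces/store/viewer-rolebinding.yaml -- a common RBAC policy:
# grant the (hypothetical) "store-ops" group read-only access in this
# namespace, identically on every cluster that syncs this repo.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: store-viewers
  namespace: store
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: Group
  name: store-ops
  apiGroup: rbac.authorization.k8s.io
```

Because these are standard Kubernetes manifests, the same files could be applied by hand with kubectl; the point of the tool is that they are instead applied, and continuously re-applied, across the whole fleet from one place.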
In an email interview with The New Stack, Murray explained that CSP Config Management offers the “streamlined multicluster management” needed for enterprise-level Kubernetes.
“As a Kubernetes deployment grows, individually managing clusters increases the likelihood of error, and leaves room for inconsistent policy enforcement across the fleet. For example, if an admin needs to roll out a policy change across all clusters and it isn’t applied to a certain subset, that creates a security risk. CSP Config Management provides a single pane of glass for administrators to see where policies are applied,” writes Murray. “Additionally, CSP Config Management’s tight integration with a version control system gives users access to collaboration tools, auditability, and transactional changes that can be easily reverted. This level of transparency and collaboration is not possible when making changes through command-line tooling that lives on a single user’s machine, where any changes made are opaque to the rest of the organization.”
CSP Config Management offers a configuration-as-code experience with the native Kubernetes configuration format (YAML or JSON) rather than a GUI, which Murray sees as the right solution for the problem. Rather than building a custom interface, customers can use the tools they are already familiar with from the Git ecosystem.
“It’s meant to be centralized in code, which we believe is the right format for thinking about configuration. Code is portable, it’s flexible, and it can be read by both humans as well as automated processes like linters,” writes Murray. “The reason we chose Git as the administrative interface is that there is a whole ecosystem of GUIs and tooling that helps people manage their code today. Plugging CSP Config Management into that ecosystem allows our customers to store their configurations on-premises, or in any number of cloud-based services like GitHub or Google’s Cloud Source Repositories.”
By treating configuration as code, Murray explains, CSP Config Management provides enterprises with a way to manage cluster sprawl and quickly handle outages, unintended changes, and other unexpected circumstances.
“Using a central source of truth to manage these clusters ensures you don’t need to manually apply the same changes thousands of times. With our system, those changes roll out automatically with a single commit,” writes Murray. “If there’s an internet outage at a single store, the cluster will just pick up the latest configuration once the connection is restored. If someone happens to accidentally make a local change to one of your thousands of clusters, or a cluster misses an update because someone tripped over a power cable and had to plug it back in, it will be brought back in line automatically. And if you have an issue with the latest config updates, you can roll your entire fleet back to the last healthy state with a single command.”
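The rollback scenario Murray describes rests on ordinary Git mechanics: the repo is the source of truth, so reverting a commit restores the last healthy state, and the fleet re-syncs to it. A minimal sketch of that workflow, assuming a hypothetical repo layout (in practice the tool, not the operator, watches the repo and syncs each cluster):

```shell
# Hypothetical sketch of config-as-code rollback with Git.
# File paths and contents are illustrative, not CSP's actual layout.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

# Commit a healthy baseline configuration.
mkdir -p namespaces/store
echo "replicas: 2" > namespaces/store/deployment.yaml
git add -A && git commit -qm "initial config"

# A bad update lands and rolls out to the fleet.
echo "replicas: 500" > namespaces/store/deployment.yaml
git add -A && git commit -qm "bad update"

# One revert brings the source of truth back to the last healthy
# state; every cluster syncing this repo follows automatically.
git revert -n HEAD && git commit -qm "revert bad update"
cat namespaces/store/deployment.yaml   # prints: replicas: 2
```

The same mechanics give the auditability Murray mentions: `git log` records who changed which policy and when, something ad-hoc kubectl edits on a single machine cannot provide.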