
Google Cloud Expands Its Managed Kubernetes Service Anthos with Serverless, Service Mesh

Sep 16th, 2019 4:06pm

Google Cloud continues to expand its Anthos Managed Kubernetes service with new features that aim to make cloud native computing easier to use for the enterprise. A service mesh, based on the open source Istio, has been added, and Google leveraged its Knative work to incorporate serverless jobs, in a feature called Cloud Run.

The company also beefed up Anthos Config Management with automation capabilities, as well as the ability to enforce organizational policies. Binary Authorization, for instance, serves as a checkpoint that could ensure only verified images make it into the build process.

With all of these services, Google assumes the responsibility for managing and upgrading the software, relieving its users of that burden. “Not every enterprise has the engineering bandwidth to integrate everything together,” said Eyal Manor, Google vice president of engineering and product management.

With the service mesh, Google pledges to support the exact same APIs as the latest Istio release, which, the company argues, frees users from being “locked into” Google, given that they can rebase their application on another copy of Istio. Google Cloud, however, does offer some features that would be hard to enable on Istio alone, such as autoscaling and end-to-end, dashboard-based visibility of the services being run on Google Cloud.

With Cloud Run, Google can run stateless workloads, with the option to scale the deployments down to zero whenever they are not needed. You package the event-driven code in a container, and that same serverless job can run within a private data center, on another cloud platform, or on Google Cloud. “It hugely simplifies the developer experience. You don’t have clusters or autoscaling or configuration, which can become hugely complicated with open source,” Manor said.
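
To make the container-packaging idea concrete, here is a minimal, purely illustrative sketch in Go of the kind of stateless HTTP workload such a platform runs. The PORT environment-variable convention and the 8080 default follow Cloud Run's documented container contract; the handler and its message are assumptions made for this example, not anything from Google's announcement.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// handler is deliberately stateless: nothing survives between requests,
// which is what lets the platform scale instances down to zero.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello from a stateless, container-packaged service")
}

func main() {
	// Cloud Run-style platforms tell the container which port to listen on
	// via the PORT environment variable; 8080 is a sensible local default.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/", handler)
	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}

Built into a container image, the same binary could run on Google Cloud, in a private data center, or on another cloud platform, which is the portability point Manor makes above.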

The configuration management features take on the creeping policy complexity that comes as cloud native deployments multiply. The service resembles a GitOps approach, Manor said. A user makes a policy change, such as blocking developers from having the same access rights as administrators, commits the resulting configuration change, and the service rolls that change out to multiple clusters and all the relevant components.

One of the new features that caught the eye of Savinay Berry, OpenText‘s senior vice president of cloud services, was the service-to-service telemetry, which could help identify early-warning indicators. The telemetry addresses a familiar pain point at KeyBank as well: the bank uses Kafka as an event bus for some of its web apps, but when something goes wrong, the team must manually debug where the issue is and what dependencies are involved. “Today, we are not instrumented that granular of a level,” said Chris McFee, KeyBank‘s director of enterprise DevOps practices.

McFee also found the Cloud Run/Knative news to be of interest for possible serverless jobs: “We don’t have a use case for that yet, but forward-looking that could be really interesting for the future.”

Feature image by _Alicja_ from Pixabay.
