Anthos: Kubernetes Infrastructure to Make Developers More Productive

11 Apr 2019 11:19am

It’s an on-premises version of a cloud platform that’s suddenly being treated as the heart of Google Cloud. One of the earliest Google employees, senior vice president for technical infrastructure Urs Hölzle, calls it “the future of cloud” that makes “hybrid and multicloud the new normal.” Instead of talking about Google Cloud as a cloud “built for Kubernetes,” Aparna Sinha — who leads the Kubernetes and GKE product team at Google — told The New Stack it’s now “a cloud built for Anthos.”

But what is Anthos and why is Google getting into on-premises hardware?

Kubernetes Is Hard

Whether it’s edge computing at sites where connectivity just isn’t available, regulatory issues, or the sheer volume of data that keeps a particular workload on their own servers, plenty of organizations are going to keep on running their own hardware. Hybrid cloud is all about getting the advantages of the public cloud while doing that, by reducing or removing the infrastructure management work.

Putting Kubernetes, Istio and Knative in one stack on Intel hardware from familiar server vendors, running on VMware vSphere, is how Google is trying to make a developer-first hybrid cloud that could extend to other clouds — with a varying level of consistency depending on how closely they adhere to Google’s technology choices.

There are plenty of hyperconverged and hybrid cloud options on the market already, but they often take an infrastructure-first approach. The difference with Anthos (the new name for Google’s Cloud Services Platform), Google developer relations vice president Adam Seligman explained to The New Stack, is that the focus is as much on making developers more productive as on managing infrastructure. Like Cloud Run and Cloud Code, two other new services Google unveiled this week at Google Next, it’s about getting them back to the interesting parts of development by taking care of infrastructure requirements. “Anthos is about raising the waterline for a company’s development environment so developers can float their boats onto it at that higher level,” he said.

A Full Kubernetes Stack

Although Google could potentially package up some of its cloud services to run on Anthos in the future, at the moment Anthos is all about delivering a Kubernetes stack that’s both portable and deeply integrated with Google Cloud. This is a different approach from, say, Azure Stack, which puts IaaS and PaaS services from Azure onto hardware for your data center.

As with Azure Stack, an IT team can buy hyperconverged Anthos hardware from the vendors they already deal with like HPE, Dell EMC and Lenovo that they can either run themselves or have managed for them, or they can use hardware they already have. They get integrations with VMware and Cisco management tools, and familiar system integrators who can help them set up and integrate Anthos with their existing infrastructure.

Once they’ve got all that, they get a software stack that is the equivalent of GKE on their own hardware, one they manage through GKE alongside their cloud Kubernetes clusters and deploy to using the same tools they use to deploy to GKE. For example, if you use Google’s Cloud Build CI/CD system, the new custom workers let you create pipelines in Anthos using GitHub Enterprise, Bitbucket, Artifactory or GitLab to deploy on your own hardware the way you would in GKE.
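As a rough illustration of that workflow, a Cloud Build pipeline is described in a `cloudbuild.yaml` file along these lines; the application name, cluster name and zone here are hypothetical, and a pipeline targeting an Anthos cluster would follow the same pattern of build, push and roll out.

```yaml
# cloudbuild.yaml — illustrative sketch; app, cluster and zone names are made up
steps:
  # Build the container image from the repository's Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
  # Push the image so the target cluster can pull it
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
  # Roll the new image out; the kubectl builder is pointed at a cluster
  # via these environment variables
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/my-app',
           'my-app=gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```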

The pieces that make up Anthos are GKE or GKE On-Prem, Anthos Config Management for managing Kubernetes and Istio policies like namespaces, resource quotas and RBAC, and the GCP Marketplace where you can find the same Kubernetes apps that run on GKE, like Jenkins or WordPress — and that you’re billed for through the same Google Cloud bill as GKE. There’s also a private catalog in beta for the marketplace where you can offer your own internal solutions.

All of this is managed from GKE, so you don’t have to think about operating and upgrading Anthos, maintaining the base OS images, deploying security patches or getting the configuration right; operating and upgrading Kubernetes yourself can otherwise be a significant burden. But enterprises will likely have to do some integration work with their own infrastructure too. You need VMware vCenter 6.5 to create the VMs that the cluster runs on, and F5 BIG-IP load balancers for layer 4 load balancing.

Google also announced the beta of a service to containerize applications without refactoring them.

“Anthos Migrate is a way to move monolithic legacy applications into GKE,” Sinha explained to us. That brings the same advantages as moving an application into any cloud Kubernetes service as a first step to modernizing it. “You get bin packing, you get greater utilization and lower cost, you get a DevOps-like workflow, so you have service management enabled on those applications. You get away from OS patching and having to manage the VMs — because you’re in a container.”

The beta service only covers moving those containerized applications into GKE, but there will be more options in time. “It’s a one-step streamlined process of moving from on-premise in VMs to GKE on the cloud or from AWS in VMs to GKE; what we have on the road map is moving to GKE on-premise and other things like moving to Windows and so forth.”

Connected but Portable

There are a lot of different flavors of Kubernetes available, from open source Kubernetes that you deploy yourself, to managed Kubernetes services like GKE and Azure Kubernetes Service, to packaged offerings like Red Hat OpenShift and VMware PKS. RedMonk founder James Governor suggests thinking of Anthos as something like Google’s version of OpenShift. It’s a full Kubernetes stack that can manage any Kubernetes cluster, even on other clouds, although at a less integrated level.

As Brian Kelly, CEO of hybrid cloud platform CloudBolt notes, the deep integrations with GKE and other Google Cloud services mean that “the end result is, of course, to drive more business to Google’s cloud solutions.”

The ties to Google Cloud go beyond registering your GKE On-Prem cluster with the Google Cloud Platform Console so you can use GKE as the management and control plane: while there are Prometheus and Grafana integrations, as well as Elasticsearch, Fluentd and Kibana add-ons, the Anthos monitoring of system-level components has to connect to Stackdriver for monitoring and alerting.

Anthos uses the Istio service mesh for traffic management, for identity and access control when calling services, for authentication and mTLS encryption between services, for the telemetry and observability that feed monitoring, and for generating an audit trail; all of that is managed by the Cloud Service Mesh component. For local authentication, GKE On-Prem works with an OIDC provider, which could be the Google Cloud Identity platform, or it could be Active Directory.
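To give a sense of the Istio layer underneath, this is roughly how mesh-wide mTLS was switched on in the Istio releases current at the time, via a `MeshPolicy` resource; treat it as an illustrative sketch of plain Istio rather than Anthos-specific configuration.

```yaml
# Illustrative Istio (1.x-era) mesh-wide authentication policy:
# require mutual TLS for all service-to-service traffic in the mesh
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
    - mtls: {}
```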

But the portability of applications and workloads isn’t tied to using that full Anthos stack only with GKE On-Prem. Depending on the level of conformance and consistency you need, Sinha told The New Stack, you could use any Kubernetes distribution on any cloud and use Anthos to manage that.

“It’s very much ‘I want to hedge my bets, I don’t want to take the risk of being locked into one cloud, typically that’s AWS. I want to spread my risk, I want to be able to have reliability across clouds, I want to have the choice to run in certain regions, I want to have higher availability and be closer to my customers and I want to be able to make use of innovative services that are available in different clouds.’ Anthos allows you to do that. It’s a portability of workload, a portability of your application through the utilization of a consistent platform that runs across environments, and Kubernetes is that substrate, that consistent platform.”

Anthos Config Management is an extension of the Kubernetes API, so you can deploy it on any cluster anywhere, she explained. “It provides the ability to set, according to policy, what namespaces you want to have in that cluster, what RBAC roles you want to create, what permissions you want to have, what quota policies you want to have, and do so hierarchically across your multiple clusters regardless of where they’re running. You can keep that definition and store that definition in a GitOps-like workflow. The system will auto-deploy your changes to all the enrolled clusters and the clusters locally do the enforcement of the policies.”
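Because those policies are ordinary Kubernetes resources stored in a Git repository, a policy repo might contain something like the following; the team name, quota figures and group are hypothetical, but the resource kinds — a namespace, a resource quota and an RBAC binding — are the standard Kubernetes objects the article describes Config Management syncing to every enrolled cluster.

```yaml
# Illustrative policy-repo contents; names and limits are made up
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# Cap what the team-a namespace can consume on each cluster
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "8"
---
# Grant the team edit rights in its own namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-developers
  namespace: team-a
subjects:
  - kind: Group
    name: team-a@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

Committing a change to any of these files would then be picked up and enforced on every cluster enrolled with Config Management, which is the GitOps workflow Sinha describes.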

You could even combine Anthos with other Kubernetes management tools. “If you’re using Heptio and that means you’re using open source Kubernetes, you can still register that cluster with Anthos and Anthos will work at a higher level, allowing you to do workload distribution, policy distribution and multicluster management. Or you can decide to use the custom management piece and then you wouldn’t be using Heptio and you’d get more management from Google — but it’s your choice.”

If that’s still a little confusing, it’s because this is an enormously ambitious proposition with multiple options, because Google is attempting to integrate with the entire Kubernetes ecosystem as its route to winning over enterprises.

Red Hat and VMware are sponsors of The New Stack.
