Google Container Engine Quickly Integrates Kubernetes 1.8
With the release of Kubernetes 1.8, the announcement of integration with Google Container Engine followed quickly. Kubernetes, of course, originated at Google but was donated as seed technology to the newly formed Cloud Native Computing Foundation in 2015. Google's continued close work with the open source project makes it Johnny-on-the-spot for integrating the new version.
“This is a big release for both the open source project and GKE,” said Tim Hockin, principal software engineer, Kubernetes and Google Container Engine (GKE).
“For people running on Container Engine, we’ve got a lot of features around hybrid enterprise functionality, security, and we’re starting to see Google Cloud supporting containers and Kubernetes as a first-class citizen. We’re going all the way to the cloud APIs.”
Google is touting added automation among the new GKE features.
Node auto-upgrade is generally available and opt-in.
“Before this release, Container Engine always hosted your master for you, so you didn’t have to worry about administering that or upgrading your masters. But it was still a user-driven operation to upgrade your nodes,” Hockin said.
“We found this was a soft spot for people. They didn’t quite know how to do it or were intimidated by the process. So if you opt in, we will manage your nodes for you. You’ll give us a maintenance window, and we will upgrade your nodes [then] and make sure you’re running the most up-to-date, most patched version of Kubernetes.”
Node auto-repair, meanwhile, is in beta and opt-in. It uses the Kubernetes Node Problem Detector to spot failing nodes and trigger their repair.
“From our experience with Kubernetes, we’ve built up this knowledge about what it means when a machine is malfunctioning. With the auto-repair process, we’ll kick off the automatic replacement of machines that are faltering for some reason, which eliminates the vast majority of transient problems people are experiencing, so you won’t have an outage because you ran out of disk space, there was some file system corruption or things like that. We’ll just tear down your containers and restart them on a new machine and replace that machine underneath you. And you don’t have to worry about it,” he said.
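If these features map to the gcloud command line the way later GKE releases expose them (an assumption — flag names and the `beta` track may differ at this release), opting in on a new node pool might look something like:

```shell
# Hypothetical sketch: opt a new node pool into auto-repair (beta at
# the time) and auto-upgrade. The cluster, pool, and zone names are
# placeholders; flag names assume a later gcloud CLI.
gcloud beta container node-pools create my-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --enable-autorepair \
    --enable-autoupgrade
```

With auto-upgrade on, GKE performs the node upgrades during the maintenance window mentioned above rather than waiting for a user-driven operation.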
Google is also addressing container networking, with features such as IP aliases, now in beta. Aliased IPs are available for new clusters only; support for migrating existing clusters will be added in an upcoming release.
“This is our first salvo into plumbing container concepts all the way down to the infrastructure.
“…Alias IPs give us the ability to reserve container IP addresses all the way down to the GCP (Google Cloud Platform) infrastructure. Now it’s a first-class thing, it’s not just the side effect of the APIs we were previously using. This is not very user impactful, but very system impactful, because it means we now can integrate more completely with all that GCP is offering,” he said.
“For example, cloud-hosted products that are going to use GCP’s VPC-peering feature will work with IP aliases, which they couldn’t before. I think you’re going to see this is just the first step, in which a lot of APIs adapt to containers across Google and across other cloud providers. I think all the cloud providers see the writing on the wall: They have to make containers a thing their APIs understand.”
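Because alias IPs apply to new clusters only, opting in happens at cluster creation. A minimal sketch, assuming the flag name used by the gcloud CLI for this feature (cluster name and zone are placeholders):

```shell
# Hypothetical sketch: create a new cluster with alias IPs enabled,
# so pod IP ranges are reserved as first-class addresses in the VPC
# rather than a side effect of routing APIs.
gcloud beta container clusters create my-cluster \
    --zone us-central1-a \
    --enable-ip-alias
```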
With native support for Spark in v1.8, Spark on Kubernetes can now use Google BigQuery and Google Cloud Storage as data sources and sinks via the bigdata-interop connectors.
Two features coming soon to alpha are multi-cluster ingress and shared VPC support.
Multi-cluster ingress allows you to spin up multiple GKE clusters in different regions, with Google Cloud load balancing routing each request to the nearest cluster for the best latency and performance. For high availability, you can run Kubernetes masters and nodes in up to three zones within a region. You can also sign up for early tests of a VPC (virtual private cloud) shared by multiple projects within your organization.
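The regional, multi-zone setup behind this could be sketched as follows — assuming the regional-cluster support in the gcloud CLI (which was itself an early feature at the time; cluster and region names are placeholders), and leaving out the alpha multi-cluster ingress wiring itself:

```shell
# Hypothetical sketch: two regional clusters whose masters and nodes
# are spread across each region's zones, candidates to sit behind a
# single multi-cluster ingress.
gcloud container clusters create cluster-us \
    --region us-central1
gcloud container clusters create cluster-eu \
    --region europe-west1
```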
Customers also can sign up for an alpha test of running Nvidia Tesla P100 GPUs with containerized applications on Google’s cloud.
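Once a cluster has GPU nodes, a pod requests a GPU through the resource limits in its spec. A minimal sketch, assuming the `nvidia.com/gpu` resource name introduced with Kubernetes 1.8's alpha device-plugin support (the exact resource name varied across early releases):

```shell
# Hypothetical sketch: a pod asking the scheduler for one GPU.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  containers:
  - name: cuda
    image: nvidia/cuda
    resources:
      limits:
        nvidia.com/gpu: 1   # schedule onto a node with a free GPU
EOF
```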
The Ubuntu node image is also now generally available, giving users a choice between it and Google’s managed Container-Optimized OS image.
And the node allocatable feature, which protects node components from out-of-resource issues, is generally available. It provides a better accounting of what’s actually happening on the machine, so you don’t end up over-committing machines, Hockin said.
Google Cloud and CNCF are sponsors of The New Stack.
Feature image via Pixabay.