Google Container Engine Now Speaks Kubernetes 1.7
Google is touting its quick integration of Kubernetes 1.7, released at the end of June, into Google Container Engine. The fast turnaround isn’t surprising: Google created the open source Kubernetes orchestration engine and remains one of the project’s primary contributors.
Major enterprise customers such as eBay, The New York Times and Philips are driving more maturity and focus on enterprise-grade security, extensibility, networking and hybrid networking features in both Kubernetes and Container Engine, according to Aparna Sinha, Google’s group product manager for Kubernetes and Container Engine.
It’s also announcing the results of engineering work to provide a better developer experience with Container Engine.
The Google-curated Container-Optimized OS (COS) and a team of site reliability engineers who continuously monitor and manage Container Engine clusters provide a high level of foundational security, she said. In addition, new features strengthen secure multitenancy, which is important for organizations with many different teams sharing a cluster.
“It’s important to be able to isolate those teams so they don’t overlap or attack each other,” she said.
Among the new features:
- The Kubernetes NetworkPolicy API, which allows users to control which pods can communicate with each other.
- The node authorizer, in beta, reduces each node’s attack surface: a kubelet can access only the objects scheduled to its own node, helping to protect the cluster from a compromised or untrusted node.
The network policy API’s network isolation and the node authorizer’s per-node resource isolation, combined with the centralized control over cluster resources provided by role-based access control (RBAC), introduced in the 1.6 release, enhance security for multitenant clusters.
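As a minimal sketch of the NetworkPolicy API, the manifest below allows only pods labeled `app: frontend` to reach pods labeled `app: api` in one namespace; the names, labels and port are illustrative, not from the article:

```yaml
# Hypothetical policy: only frontend pods may reach api pods on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: team-a              # illustrative namespace
spec:
  podSelector:                   # the policy applies to api pods
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:               # ...and admits traffic only from frontend pods
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Because a NetworkPolicy selects pods by label, each team can lock down its own workloads without coordinating cluster-wide firewall rules.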
Many large companies that are using Container Engine want to connect their cloud and on-prem applications. The new features for hybrid networks support that, she said. They include:
- Support for all private IP (RFC-1918) addresses, which allows access across cloud and private clusters.
- Internal load balancing, in beta, allowing Kubernetes and non-Kubernetes services to access one another on a private network.
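On Container Engine, internal load balancing is requested through a Service annotation. The following is a sketch, with an assumed service name, selector and ports; only the annotation key is the documented GKE mechanism:

```yaml
# Sketch: expose a backend on a private (RFC-1918) address only.
apiVersion: v1
kind: Service
metadata:
  name: backend-internal                          # illustrative name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer      # provisioned internally because of the annotation
  selector:
    app: backend          # illustrative label
  ports:
  - port: 80
    targetPort: 8080
```

Clients elsewhere on the private network, including non-Kubernetes workloads, can then reach the service without it ever receiving a public IP.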
Google is responding to the Kubernetes community’s call for improved extensibility through new features such as API Aggregation, which has moved to beta.
“Users like the Kubernetes API — it’s very well designed. But they often have custom APIs and other solutions that they want to add and manage the same way they do the Kubernetes API. Some users have built a PaaS solution on top of the Kubernetes API and they want to bring in additional objects out of their PaaS … API Aggregation allows you to do that without having to reinstall or restart your software,” she said.
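Registering a custom API with the aggregator is done with an APIService object. The sketch below assumes a hypothetical API group `example.mycompany.com` served by an add-on API server behind a Service named `example-api`:

```yaml
# Sketch: route /apis/example.mycompany.com/v1alpha1 requests
# from the main apiserver to a hypothetical add-on API server.
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1alpha1.example.mycompany.com
spec:
  group: example.mycompany.com     # hypothetical API group
  version: v1alpha1
  service:
    name: example-api              # Service fronting the add-on server
    namespace: kube-system
  groupPriorityMinimum: 1000
  versionPriority: 15
  insecureSkipTLSVerify: true      # illustration only; use caBundle in practice
```

Once registered, the custom objects are reachable through the same endpoint, authentication and tooling as built-in Kubernetes resources, with no restart of the cluster’s API server.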
Dynamic Admission Control, available in alpha clusters, allows customers to integrate custom business logic and third-party solutions. She cited the Istio open source project, which adds networking, security and monitoring capabilities for microservices, as a cutting-edge example of the types of solutions that can be brought in with Dynamic Admission Control.
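One alpha mechanism under this umbrella in 1.7 is the initializer. The sketch below, with a hypothetical initializer name, would hold every new Deployment until an external controller (for example, one injecting a sidecar, as Istio does) marks it initialized:

```yaml
# Sketch: route new Deployments through a hypothetical external
# initializer before they become visible to other controllers.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: example-initializer
initializers:
- name: sidecar.example.mycompany.com   # hypothetical initializer name
  rules:
  - apiGroups: ["apps", "extensions"]
    apiVersions: ["v1beta1"]
    resources: ["deployments"]
```

This lets third-party logic modify or validate objects at creation time without recompiling the API server.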
StatefulSet Updates, a new beta feature, allows automated updates of stateful applications such as Kafka, Zookeeper and etcd, using various strategies such as rolling updates.
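A StatefulSet opts in to automated updates via the new `updateStrategy` field; in 1.7 the default remains the manual OnDelete behavior. The fragment below is a sketch with an illustrative ZooKeeper-style workload (names and image tag are assumptions):

```yaml
# Sketch: a StatefulSet fragment opting in to rolling updates.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk                       # illustrative name
spec:
  serviceName: zk-headless       # illustrative headless Service
  replicas: 3
  updateStrategy:
    type: RollingUpdate          # replaces the default OnDelete strategy
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.4     # illustrative image tag
```

With RollingUpdate set, changing the pod template causes the controller to replace pods one at a time, in order, which suits quorum-based systems like ZooKeeper and etcd.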
Support for graphics processing units (GPUs) has been in high demand for machine-learning workloads, according to Brian Grant, principal engineer for Kubernetes and Container Engine. Container Engine now supports NVIDIA K80 GPUs in alpha clusters for experimentation and will support other GPUs in the future.
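A pod requests a GPU through its resource limits, using the alpha resource name from this Kubernetes release; the pod name and container image below are illustrative:

```yaml
# Sketch: request one NVIDIA GPU via the 1.7-era alpha resource name.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-trainer                               # illustrative name
spec:
  restartPolicy: OnFailure
  containers:
  - name: trainer
    image: tensorflow/tensorflow:latest-gpu       # illustrative image
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1         # one GPU for this container
```

The scheduler then places the pod only on a node that actually has an unallocated GPU.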
To help improve developer efficiency, Container Engine has added a beta capability to auto-upgrade the cluster itself without threatening the application’s uptime or the data in stateful applications, he said. It incorporates the Pod Disruption Budget API at the node layer, allowing the user to control the rate of disruption to an application. It works on cloud-native data stores like Amazon S3 as well as legacy stores such as MySQL.
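A Pod Disruption Budget is a small standalone object; the sketch below, with an assumed app label, tells the system to keep at least two replicas running during voluntary disruptions such as node upgrades:

```yaml
# Sketch: never take the app below two running replicas
# during voluntary disruptions like node drains and upgrades.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: api-pdb                  # illustrative name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api                   # illustrative label
```

During an auto-upgrade, node drains respect this budget, pausing rather than evicting pods past the stated limit.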
Another feature freeing developers from worrying about infrastructure is auto-repair, in beta, which monitors for unhealthy nodes and repairs them automatically. Container Engine also offers cluster- and pod-level auto-scaling.
The Container Engine UI enables users to visualize their workloads. It’s designed for the developer’s point of view, rather than just showing infrastructure resources, Grant said.
It shows information such as workload type, running status, namespace, cluster, annotations, labels and the number of replicas. All views are cross-cluster.
Going forward, Sinha said she expects further work on hybrid environments. More work will be done in the next few releases on auto-scaling. On pod scaling, there’s demand to scale based on custom application metrics, such as the number of connections, and for monitoring of applications in the cluster.
Expect to see more support for GPUs, more tooling around machine-learning workloads and better integration with other services.
“For developers, it’s great that they don’t have to manage infrastructure, but we also want to get them the services they need… in an easier way, and in an easy-to-consume way,” Sinha said.
Feature image via Pixabay.