Google continues to refine its Kubernetes open source container orchestration engine for ever-larger workloads. Version 1.3, released Wednesday, includes fresh capabilities such as the “federated” ability to manage pods across different networks (including both cloud and on-premises deployments), and support for stateful applications.
“We live in a multi-cloud reality, with businesses deploying applications and data across on-prem and public clouds. Businesses want the flexibility to respond to changing customer and business environments,” said David Aronchick, a Google Cloud Platform senior product manager for Google Container Engine and Kubernetes, in an e-mail. “By offering cluster federated services, Kubernetes is now taking the first steps to offering true portability and flexibility for enterprises.”
The cross-cluster federated service allows a service to span more than one cluster, even remote clusters. This can significantly improve service reliability and aid disaster recovery.
The federation capabilities allow organizations to set up multiple clusters in different availability zones, so applications remain operational even during regional or data center outages.
“With a single command, they can join each cluster to a federated API, allowing a user to deploy a service across multiple clusters simultaneously,” Aronchick explained. “When deploying a new service, each cluster can create its own load balancer appropriate for its environment, further simplifying administration.”
Part of this feature is cross-cluster service discovery allowing containers to “consistently resolve to services irrespective of whether they are running partially or completely in other clusters,” according to a Google blog post detailing the features.
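Under this model, a service deployed through the federated API looks like an ordinary Kubernetes Service object; the federation control plane then propagates it to each member cluster. A minimal sketch of such a manifest (the name and labels are illustrative, not from the article):

```yaml
# Submitted to the federation API server rather than to a single cluster;
# the control plane creates a matching Service in every member cluster.
apiVersion: v1
kind: Service
metadata:
  name: nginx            # illustrative service name
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  type: LoadBalancer     # each cluster provisions its own load balancer
  ports:
  - port: 80
```

As Aronchick notes, letting each cluster create its own load balancer from the same declaration is what keeps administration simple.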
Stateful Apps At Last
Kubernetes 1.3 is also the first release to properly support stateful applications, such as those using databases or key-value stores. The move to support stateful workloads is a major one for the orchestration software, as most workloads today involve state in some form.
“Every application that uses stateful storage has had to store data in a way that lasts beyond the life of a single resource, be it a container, VM or persistent disk, to avoid having a single point of failure,” Aronchick said.
To date, there have been plenty of Kubernetes plug-ins to connect stateless services to stateful applications such as MySQL, PostgreSQL, and Zookeeper. But the Kubernetes development team wanted to implement a more integrated approach.
Stateful support comes through a new object called PetSet. It supports, for instance, permanent hostnames that persist across restarts, eliminating the need to get a new hostname on each restart and then update the entire system with the new address.
PetSet can also recognize initialization containers. “An initialization container runs once at startup, allowing for actions such as leader election, copying data or sharing identities with other servers in a group,” Aronchick explained. An initialization container can prepare data or recover state from a previous restart as part of a normal startup service.
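Initialization containers run to completion, in order, before the application containers start. In 1.3 they were an alpha feature declared through a pod annotation; the sketch below uses the `initContainers` field that later releases adopted, with illustrative names and images:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-loader            # illustrative name
spec:
  initContainers:              # run once, in order, before the app containers
  - name: fetch-seed
    image: busybox
    command: ["sh", "-c", "cp /seed/defaults.json /work/ 2>/dev/null || true"]
    volumeMounts:
    - name: workdir
      mountPath: /work
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /work
  volumes:
  - name: workdir
    emptyDir: {}               # scratch space shared between init and app containers
```

The init container here prepares data exactly as described above: it runs once at startup, leaves its results on a shared volume, and the application container picks them up.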
Also, PetSet can provision persistent disk space that remains allocated even when the associated containers are not running.
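The PetSet behaviors described above, stable per-pod identity and per-pod persistent storage, are declared in a single object. A hedged sketch against the alpha API of the time (the names, image, and sizes are illustrative):

```yaml
apiVersion: apps/v1alpha1      # PetSet was alpha in 1.3; later renamed StatefulSet
kind: PetSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service that gives each pet a stable DNS name
  replicas: 3                  # pods are named db-0, db-1, db-2 and keep those names
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mysql:5.6
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:        # each pet gets its own PersistentVolumeClaim,
  - metadata:                  # which survives pod restarts and rescheduling
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Because each replica keeps both its hostname and its claim, a restarted database pod reattaches to the same disk it wrote to before.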
What Else is in the Box?
Other notable features of Kubernetes 1.3 include:
- Increased scale and automation: It is now easier to autoscale clusters up and down, and Google has doubled the maximum cluster size. The software can now keep track of up to 2,000 nodes per cluster, twice the old limit.
- MiniKube, a Kubernetes learning tool: A command-line tool for firing up a cluster on a laptop, with full API compatibility with Kubernetes. Good for local testing.
- Updated user interface dashboard: The dashboard now shows most cluster activity and allows users to create, edit, and control all workload resources.
The company also announced some upgrades to its hosted, fully managed version of Kubernetes, Google Container Engine (GKE). Most notably, the company has integrated Google Cloud Identity & Access Management (IAM) roles into GKE, giving administrators control over who can access what.
“Centralized Identity & Access Management is one of the cornerstones to a security strategy,” Aronchick said. “By providing a centralized way to manage roles, audit users and restrict behavior, organizations can limit their total surface area and increase their security protection.”
GKE now supports solid-state drives, as well as the ability to run different machine types across multiple zones through a feature called NodePools, which opens up a way to tailor clusters to specific workloads.
Also notable is that Kubernetes 1.3 now offers full support for CoreOS’ rkt container format, in addition to Docker’s. “The community now has the flexibility to choose from supported runtimes in Kubernetes based on what is best suited to the particular needs of an architecture, site, or deployment,” said Wei Dang, CoreOS head of product, in an e-mail.
CoreOS, which offers a commercial edition of Kubernetes called Tectonic, has been doing a lot of work on Kubernetes, particularly around adding authentication and authorization for Kubernetes APIs, which allow for finer-grained controls around managing access to individual resources.
For keeping track of containers, Kubernetes 1.3 also uses the latest version of the CoreOS-developed etcd, version 3, which brings production-ready scaling enhancements, including a set of distributed coordination primitives such as distributed locks, elections, and software transactional memory.
CoreOS is a sponsor of The New Stack.
Feature image via Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.