Rancher 2.2 Brings Greater Control to Kubernetes
Rancher 2.2 is now generally available, aimed at simplifying Kubernetes for developers and IT staff and giving them more control over their applications.
“In 2018 more companies moved from just dev/test to production with Kubernetes. More of them are getting comfortable with the idea of Kubernetes in production. But how do you manage it? How do you support it? How do you ensure clusters are highly available, have multitenancy?” said Ankur Agarwal, Rancher Labs’ head of product management.
“We saw this desire for highly resilient clusters, highly available apps, multitenancy and supportability. So in 2.2, that’s what we’ve addressed.”
Work on the new release has been underway for more than a year. The previous version, Rancher 2.1, was introduced in October 2018.
The company previously announced support for multicluster applications and for Prometheus monitoring.
Among the new features in 2.2:
- Disaster recovery of etcd clusters. Users can perform scheduled and ad hoc snapshots of etcd from the Rancher UI, API, or the Kubernetes API, writing to local storage, mounted shared storage, or any S3-compatible object store.
Though etcd is designed for resilience, if enough of its nodes go down, the cluster can become unresponsive. Now, with just a few clicks, you can see the history of all your backups and decide which one to restore from. You can also schedule backups to run automatically; “you can just schedule it and forget about it,” Agarwal said.
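For clusters provisioned by Rancher via RKE, recurring and S3-backed snapshots are declared in the cluster configuration. Below is a minimal sketch of what such a fragment can look like in an RKE-style `cluster.yml`; the interval, retention, bucket name, region, and credentials are placeholder values, not recommendations:

```yaml
# Sketch of an RKE-style cluster.yml fragment enabling recurring
# etcd snapshots, shipped to an S3-compatible object store.
# All values below are illustrative placeholders.
services:
  etcd:
    backup_config:
      enabled: true
      interval_hours: 6      # take a snapshot every 6 hours
      retention: 24          # keep the 24 most recent snapshots
      s3backupconfig:
        access_key: "<ACCESS_KEY>"
        secret_key: "<SECRET_KEY>"
        bucket_name: "etcd-snapshots"
        region: "us-east-1"
        endpoint: "s3.amazonaws.com"
```

As the article notes, ad hoc snapshots and restores can also be triggered from the Rancher UI or API rather than through configuration.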
- Global DNS provides public access to applications deployed across multiple clusters, or across multiple projects within the same cluster, by automatically publishing service hostnames to a public DNS provider. This release includes support for Route53 and AliDNS, with alpha support for Cloudflare. Support for additional providers is under development.
Say you want to build a high-availability setup across different availability zones. If you have replicas of the same app running in each, you can point your DNS at all of them with just a few clicks: configure it globally once and let Rancher manage and maintain it for you.
- Multi-tenant catalogs enable users to isolate catalogs by cluster or project, providing granular isolation so that even the app names cannot be shared across projects.
“Kubernetes has a namespace concept for catalogs, which is good, but it’s not pure isolation because people can still see other projects,” Agarwal said.
“With [catalogs of] all these Helm charts, you can say you don’t want these teams to see each other’s applications,” he said. “If you have a catalog in one cluster, you can decide whether it’s visible to other clusters. With a multi-tenant catalog, you can do it at a project level — whatever Helm charts you’ve created for your own project. [You] can also just point to a Helm repository that you have.”
Rancher Labs has been on a roll lately with Kubernetes-related announcements. Most recently it launched Submariner, an open-source project enabling direct networking between pods in different Kubernetes clusters.
In December, it announced a partnership with Arm to bring Kubernetes management to Arm-based clusters running on edge and data center nodes. And last month it launched the open source project k3s, a lightweight Kubernetes distribution geared toward resource-constrained environments like IoT installations.
In November, it announced availability for its Kubernetes platform on China’s three largest cloud providers.
The Cloud Native Computing Foundation, which manages the Kubernetes project, is a sponsor of The New Stack.
Feature image by Momentmal from Pixabay.