For KubeCon Europe last month, industry observer Joseph Jacks pulled together a list of over 60 Kubernetes installers and services. This wealth of variation made itself known at the conference, which, happily, kicked off a conformance effort to ensure that users get a consistent experience. I’m a strong believer that clear conformance builds ecosystems, and I have deep experience with that from my OpenStack DefCore efforts.
In short, conformance is not a vendor issue: it’s a user experience and ecosystem issue.
We need to ensure that applications are portable across both installations and versions of Kubernetes.
To help explain my position, I put together a short Q&A:
What’s going on here?
A wealth of ops-centric tools is a reality because, frankly, it’s very hard for a single tool to offer both an easy day-one start and long-term day-two operations with heterogeneity support. That use-case dichotomy means we will need multiple tools.
What about an official installer like kubeadm?
I think that kubeadm could help with consolidating K8s configuration in a way that can be shared, and I’m in favor of that; however, I see the real issue as helping K8s fit into the broader operations environment. For that, tools like Ansible are very popular because they solve real problems. It’s not an either/or choice: production deployments will have both layers of tooling.
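To make the two-layer idea concrete, here is a minimal, hypothetical sketch of an Ansible play that wraps kubeadm: the outer layer (Ansible) handles the site’s operations environment, while the inner layer (kubeadm) handles Kubernetes bootstrap. The host group names, variables, and CIDR are illustrative assumptions, not from the article.

```yaml
# Hypothetical Ansible play: the ops layer (Ansible) drives the
# Kubernetes layer (kubeadm). Group and variable names are illustrative.
- hosts: k8s_masters
  become: true
  tasks:
    - name: Initialize the control plane with kubeadm
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf   # skip if already initialized

- hosts: k8s_workers
  become: true
  tasks:
    - name: Join workers to the cluster
      command: "kubeadm join {{ master_ip }}:6443 --token {{ join_token }}"
      args:
        creates: /etc/kubernetes/kubelet.conf  # skip if already joined
```

The point of the sketch is the division of labor: everything around the cluster (inventory, credentials, idempotent re-runs) lives in the general-purpose ops tool, while the Kubernetes-specific bring-up is delegated to kubeadm.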
Is there a tool that uses more standard Ops tools?
Kargo is a very good implementation of deployment best practices, with workable high availability and an upgrade model. Our open hybrid automation project, Digital Rebar, drives it in many configurations without modification, so we benefit from the community and can pass improvements back upstream. We could also drive kubeadm, Kops or others, but we feel Kargo is the most portable and complete right now.
Should these install efforts be in or outside of the project governance?
Regarding in versus out of the project, it’s nuanced. I’d rather have none in; however, having something like Kargo “in” sends a “collaborate here” message that may be helpful. The challenge is getting vendors to give up their own efforts and participate in the community versions. That can only work if the install scripts are flexible for many environments.
Ultimately, that means that installer scripts are not differentiated. This can be a hard position for vendors who are looking for an advantage. We are seeing that the Kubernetes community is moving faster than any single vendor.
What’s Kubernetes’ role in data centers? Will it become the data center operating system?
My perspective is that Kubernetes is part of a data center operations environment, not the only thing.
At my company, RackN, we use Digital Rebar to build the underlay and then overlay the Helm package manager, Deis Workflow software or other tools on top. I think the community should favor efforts that 1) have a “part of a whole” mindset and 2) encourage sharing and reuse of tooling. Those two ideas are cornerstones of building good site reliability engineering operations for Kubernetes.
What do you think? I’d love to hear your position.
Feature image via Pixabay.