7 Ways Kubernetes Avoids an OpenStack-Like Hype Cycle

18 Dec 2017 12:05pm, by

Rob Hirschfeld
Rob Hirschfeld is CEO and co-founder of RackN, which offers orchestration software for the container-centric data center. He has been in the cloud and infrastructure space for nearly 15 years, from working with early ESX betas to serving four terms on the OpenStack Foundation Board.

Is Kubernetes in danger of an OpenStack-like hype cycle? There's no danger: we're fully there! But I've got seven reasons it's going to be just fine, and why the hype is not moving ahead of the delivery.

As an industry, we love to play "red vs. blue" games, and I'm convinced that creating an OpenStack vs. Kubernetes meme is a big mistake. OpenStack is about infrastructure; Kubernetes is about application delivery. The two should be highly synergistic, not competitive, so why do we keep going back to the same "vs." narrative?

KubeCon wrapped last week with clear evidence of explosive growth in Kubernetes-related events, community and vendors, something I've been labeling "peak bandwagon." While there's a compelling desire to compare this event to the peak OpenStack events of the past few years, I think the comparison is deceptive. The projects are similar as open source efforts: both are fast growing, fueled by big vendors and full of promise. However, they have different market dynamics because they serve very different user communities.

The Kubernetes leadership and the Cloud Native Computing Foundation (CNCF), which manages Kubernetes, are making different strategic choices, informed both by their own needs and by watching other projects like OpenStack.

Any fast-moving, hyper-scale open source project will have governance challenges that scare everyone involved. In that sense, the parallels are obvious: it's hard to retain contributors and protect the project's integrity as more and more divergent interests show up. I served on the OpenStack Board for four terms (and I'm nominated for a 2018 position, so vote!); as an active leader who helped steer the project, my positions on these issues are well documented.

Let’s explore seven interconnected ways that Kubernetes is not retreading OpenStack’s history.

1.  A Focus on Applications Not Infrastructure

"Cloud Native Kubernetes" sounds like an oxymoron, and that's a very relevant point. All of the CNCF projects are focused on application delivery, which means they engage a very different "up-stack" user community. These users can leverage common infrastructure like Amazon Web Services, Google Compute Engine or Azure as a starting point, so there is minimal operational divergence when getting started. That means new users focus on USING instead of installing.

Ultimately, the unavoidable installation and operation challenge of OpenStack creates adoption friction. I know the challenges of getting the OpenStack infrastructure right firsthand (see "Crowbar"). Our struggles creating repeatable underlay experiences led to the Digital Rebar API-driven provisioning technology that is the heart of RackN. Anything that relies directly on physical infrastructure (and everything does eventually!) adds significant complexity to community building.

2.  API Over Code and Early Conformance

Kubernetes was able to leverage the OpenStack interoperability work (see my DefCore efforts) to quickly establish a certified API mark. While nascent, it sends a clear market signal that vendors are expected to respect the APIs in a portable way. That helps build both user confidence and vendor participation. Those, in turn, create the addressable market for an active ecosystem to pull in yet more users.

I also believe that Kubernetes is more willing to embrace APIs over code. One (unfortunate in my opinion) compromise in the OpenStack community was requiring that all OpenStack vendors use the same code base. I don’t think either project is at risk of forking; however, it sends the wrong message to participants when the specific code is required — the APIs are the interaction point for users, not the code. That said, I think Kubernetes is helped by the use of a single language, Golang, and NOT having multiple distribution sources.

3.  Kubernetes is an Ecosystem, not a Monolith

Kubernetes elders are determined to keep the project small and focused. They are happy to use the CNCF as a relief valve for related projects in the Kubernetes orbit. The typical design discussion starts opinionated (just Docker and GCE/AWS) and then pulls out generic APIs as the patterns and scope expand. This means the project gets smaller and more decoupled over time.

Large projects face tremendous pressure to increase feature scope. This is why OpenStack kept adding "semi-core" service projects like a database, load balancer, UX and orchestration. While these are essential services for many users, they also create a tightly coupled monolith when their management is coordinated. These are critical features, but they are not core to the infrastructure APIs. Decoupling them limits API convergence, but it builds a critical ecosystem and allows them to innovate faster.

4.  No Big Tent, but a ‘Tailgate Party’ of Projects

CNCF's loose governance approach can be confusing because there appears to be little organization or theme around the projects it selects for membership (listen to our recent podcast). It does not require collaboration between projects or common infrastructure; however, the projects do share an architectural approach. This lightweight governance (self-described as "Minimal Viable Governance") does not create "in vs. out" thinking in the community because there is only a minimal expectation that projects integrate together. Instead, they are unified by often, but not always, appearing together in an application stack.

This approach is very innovation-friendly compared to OpenStack's deprecated "Big Tent" experiment. How are they different? There's no brand confusion between Kubernetes and the other CNCF projects, which means users do not expect integrations between projects (see my Open Infrastructure post).

5.  Wealth of Kubernetes-as-a-Service

Kubernetes started with "as a Service" offerings early on, and the number of providers continues to grow quickly. There are several very positive benefits from service providers being active in the space. First, they make it easier for users to adopt. Second, they are very concerned about the scale and operability of the code base. Third, and most critically, they drive the API to be consistent and portable. These offerings provide "reference" implementations for the community and are encouraged to harmonize so providers can compete on value-added features.

There are also risks from as-a-service offerings, like black-box operations and hidden forking of the code base. This was a significant challenge for OpenStack public clouds, one complicated by the bounty of private cloud vendors. Since the as-a-service vendors were slower to emerge and difficult to standardize, OpenStack users found themselves with custom installations instead of portable infrastructure. This trend is slowly reversing.

6.  Strong Stewardship

Kubernetes has benefited from strong stewardship by Google. The company's deep talent, design validation and financial investment drove Kubernetes during the critical momentum-building phase of the project. The fact that Google does not directly compete with companies like Red Hat, CoreOS, IBM or Samsung made it safe for them to join, and more importantly endorse, the project. There is a danger in a project having too much single-vendor influence; however, Google is also giving the right signals about stepping back and allowing key leaders to exit.

While OpenStack was launched by Rackspace and NASA, the degree of stewardship was much more limited by design. As part of the Dell team, I was part of the vocal group that pushed the community quickly into a multi-vendor landscape. While I found that collaborative environment empowering, it made the project frothy during critical incubation stages. In retrospect, I wish we’d been more technically opinionated.

7.  Competitors

Finally, Kubernetes benefited from being relatively late to the container-scheduling world. Docker, Mesos, Rancher, Apcera, Cloud Foundry and several others (anyone remember StackEngine?!) had more complete offerings initially. I remember Kubernetes as an underdog with tepid commercial support (until Red Hat pivoted OpenShift) when Docker Swarm (in Docker 1.12) sent a shockwave through the community by integrating with the Docker Engine. This robust market allowed the project to mature without as much of a spotlight (target?) on it.

By contrast, OpenStack seemed to emerge fully formed, with larger-than-life expectations and generous venture capital-funded marketing budgets. Reasonable open competitors did exist (CloudStack, Eucalyptus and OpenNebula), but the vendor hype machine around the project positioned OpenStack aggressively (yeah, I'm very culpable here). That burned up both technical runway and goodwill. Now that Kubernetes seems to have "won" the container-scheduler war, the honeymoon is clearly over.

In Conclusion

There's no one right way to manage an open source project (hat tip to Anna Karenina). The best we can hope for is that our good choices outweigh our bad ones. While this post focuses on OpenStack's challenges, we made a lot of great choices too, and I'm optimistic about the new open infrastructure direction. Kubernetes is weaving its own path, informed by those choices and by its own needs. I support those choices so far. I hope my seven points help you think more deeply about that path, and I'd like to hear your opinion about what I got right and what I missed.

The Cloud Native Computing Foundation, Google Cloud, Microsoft and Red Hat were sponsors of The New Stack.

Feature image by Aman Bhargava on Unsplash.

