The true test of openness — for any technology, any organization, even any group of people — transcends how to be inclusive, and embraces how best to be included. The people who fashion technologies, and the messages around them, typically build their subject matter into the stuff that epochs are made of. But it takes genuine wisdom for the people behind a technology to find a reasonable way to be a part of something bigger than the technology itself.
Unquestionably, this year’s principal, U.S.-based OpenStack Summit in Boston was scaled back from last year’s affair in Austin. So while some may have missed the live musicians who kept the Austin show lively and jovial last year, this year it was time to get serious. It was a splash of cold water, but it was necessary.
In our preview article last week, we raised three key questions that would be the focus of our inquiries here in Boston this week. As promised, here are those questions again, and what we learned in response to them:
1. Has Kubernetes really become the de facto application service management component for OpenStack? Not quite yet, not least because, as it turned out, this was not quite the right way to ask the question in the first place.
OpenStack treats applications and services as separate layers. Kubernetes has become the most attractive means of managing user applications on OpenStack resources. There continues to be some interest in building independent container managers for OpenStack — for instance, the Zun project, and a very new project from a Juniper Networks engineer called the App Infra API. You’ll read more about those here in The New Stack in the coming days.
Yet it appears Kubernetes has won the affection of a majority of OpenStack’s engineers… but not really for the reasons we were first told.
“There are other options. Docker Swarm and some of those other ecosystems have been able to achieve good growth, and good prospects,” AT&T Senior Principal Cloud Engineer Amit Tank told me Tuesday. He was referring to a very serious plan to solve the notorious complexity of installing and managing OpenStack on multiple hardware types: containerize OpenStack itself, then use Kubernetes to deploy and manage it from one layer beneath.
“However, we see a very clear trend there of the mindshare that Kubernetes has gained,” Tank continued, “especially because of the fact that it is an open source project, and it works very well in doing a few things. It doesn’t try to do too many things. It tries to do a few things, but very well, like declarative computing and the ability to handle the scheduling of containers in a very, very simple manner, without assuming that this is a Docker container. You can use rkt just as well. And I think that value proposition that Kubernetes brings along, made it a very compelling option for us to consider, as a way to describe those [network] charts in a relationship that can then be run as OpenStack services.”
Tank and colleague Kandan Kathirvel have come up with a way to declare the components of an OpenStack network as a chart, which acts as a kind of template for Kubernetes. They perceive one of the key benefits Kubernetes offers as a way to declare and instantiate the same service, in a homogeneous way, on every system. One of the hang-ups even OpenStack’s most ardent supporters have with that system is having to configure it differently for the specific needs of the infrastructure on which it runs. If Kubernetes can abstract away that underlying infrastructure, making every machine on which Kubernetes is installed look like the very same machine… that entire problem vanishes into the vapor of tech history.
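The chart approach Tank and Kathirvel describe can be pictured as a Kubernetes manifest that declares an OpenStack service identically on every host. The snippet below is a hypothetical sketch, not AT&T’s actual chart; the resource names, labels, and image are illustrative assumptions.

```yaml
# Hypothetical sketch: declaring an OpenStack service (here, the Keystone
# identity API) as a Kubernetes Deployment. Kubernetes schedules the
# containers the same way on any node, abstracting the underlying hardware.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keystone-api          # illustrative name, not from AT&T's real chart
  labels:
    app: keystone
spec:
  replicas: 3                 # the same declaration yields the same service everywhere
  selector:
    matchLabels:
      app: keystone
  template:
    metadata:
      labels:
        app: keystone
    spec:
      containers:
      - name: keystone
        image: example.org/openstack/keystone:latest   # placeholder image
        ports:
        - containerPort: 5000  # Keystone's conventional API port
```

Because the manifest is declarative, the same file produces the same running service whether the node underneath is bare metal or a virtual machine, which is precisely the uniformity Tank described.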
So why doesn’t Kubernetes simply take over the whole infrastructure management space? OpenStack engineers could provide a list of answers, but the chief answer is really the only one that matters. As Rackspace distinguished engineer Adrian Otto reminded attendees Wednesday, “Most people don’t realize that today, container orchestration systems do not have multi-tenancy in their network. It just does not exist, and probably will not exist for some time.”
2. Does OpenStack deserve its reputation on the street as being difficult to deploy and manage? The leading, if not definitive, solution being considered or implemented by multiple OpenStack engineering groups — AT&T, Verizon, Red Hat, and Rackspace among them — involves steering the Magnum component in the direction of managing orchestrator-driven environments — principally Kubernetes — as resources. Some are now actively advocating the “club sandwich” approach we introduced last week, where two Kubernetes layers provide the underlying support for OpenStack below and the application support for tenants above.
All this to defuse the lingering question, which OpenStack officials openly acknowledged during the keynotes Monday, of how to make installation and management of OpenStack’s multiple components simpler. So… evidently, yes.
“This is the summit of managed OpenStack,” said Jason Venner, Juniper Networks’ vice president of architecture and technical marketing, during a session on Thursday.
“I’ve been in a lot of cloud projects,” he continued, “where the time from the initial, ‘I want to do this OpenStack project,’ to real first-customer production workloads, can take years of sorting through all the hardware, physical infrastructure, and operating issues. And with managed OpenStack, we see people up and running inside of a day.”
Which brings us directly to our third question from last week:
3. Who leads OpenStack? It has been said before (on more than one occasion, I was the one who said it) that any collaborative project needs a champion. Rackspace, of course, was the original OpenStack champion. But analysts and some journalists kept hoping for a “real” vendor, not some cloud company — even though OpenStack is a cloud technology.
The traditional expectation is always for tradition, or in its absence, something that vaguely resembles it. In June 2016, analyst group Gartner named HPE a leader in private cloud, citing the Helion OpenStack product as a key contributor. Not quite two months later, HPE began reorganizing that group. In October, it laid off staff; and in November, it sold its OpenStack unit to SUSE.
As I directly heard from several attendees this week, they expected SUSE to represent OpenStack with an HPE level of marketing muscle, and it didn’t. But that’s not really what SUSE does. SUSE did participate, in full, in important panels and in Tuesday’s Interop Challenge. It was present in every way that an open source development organization needed to be.
And yet I still heard opinions that SUSE was not HPE. People charged with making purchasing decisions, or evaluating technology investment options, are looking for champions that act like vendors. Some traditions never die.
Meanwhile, the people leading OpenStack’s development met to do what fewer tech conferences give people the time or space to do: confer.
On Wednesday, Jay Pipes, the director of engineering at Mirantis, convened a session on the ground floor to discuss the intricate details of embracing Kubernetes. It was an open forum, like an Agile scrum for a hundred people, where everyone had an open opportunity to speak their minds without raising their hands… everyone except a certain media guy, who this time was compelled to shut his trap and listen.
“What I’m hoping is some Kubernetes experts are in the room, and we can take a few notes, and correct me if I’m wrong,” said Pipes in calling the forum to order. “Because I’ve been basically digging through Kubernetes docs, trying to figure out a few things… I want to be able to see what we can learn from Kubernetes, sort of best practices, and how resources are represented, requested, managed, and take those lessons into Nova and Placement [API].”
The Kubernetes engineers have adopted a very different terminology than the OpenStack engineers for resources that, once the two systems are interfaced with one another, will turn out to be the same things. One wants to be fair about deciding which terms survive, except that the logic behind some of Kubernetes’ choices — for example, calling the things that resources spawn “resources” — baffles Pipes and the OpenStack folks. As APIs are sewn together, and components request resources (or something-or-other) from each other, it’s vitally important that everyone speak the same language.
This is what open source truly is: fearlessly embracing the incompleteness of software and the efforts behind it.
Taking the lead in this discussion, from the audience, was Kevin Fox, a contributor to OpenStack’s Kolla component. It’s effectively OpenStack’s containerized deployment option, and Fox has been leveraging it to deploy OpenStack on top of Kubernetes. In fact, Fox may have been an inspiration to AT&T.
At one point, Fox explained one of Kubernetes’ architectural distinctions that he finds fascinating: a way to build a pod description, include it in a Kubernetes manifest YAML file, and trigger the instantiation of another Kubernetes instance. “It’s incredibly useful,” Fox explained to Pipes. “You can use it to build Kubernetes out of Kubernetes!
“In OpenStack-speak, you use Nova Compute to launch Nova API,” Fox continued, analogously. “And then you use Nova API to deploy all the rest of OpenStack.” No, that’s not how OpenStack really works, but that’s the equivalent of how it would work if it worked like Kubernetes. It’s a self-replicating, self-bootstrapping behavior, and it played into Pipes’ discussion topic because it means the thing being bootstrapped (OpenStack itself, if things turn out right) won’t be just a resource provider but a resource.
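Fox’s “Kubernetes out of Kubernetes” point refers to self-hosted bootstrapping: a pod description embedded in a manifest can itself launch the components of another (or the same) Kubernetes control plane. The sketch below is a simplified, hypothetical illustration of the pattern; the image and pod name are assumptions, not taken from any specific project.

```yaml
# Hypothetical sketch of self-hosting: a static pod that runs the Kubernetes
# API server as an ordinary container. Once this pod is up, the cluster it
# serves can schedule the rest of the control plane -- and, in Fox's OpenStack
# analogy, the rest of OpenStack -- as regular workloads.
apiVersion: v1
kind: Pod
metadata:
  name: bootstrap-apiserver   # illustrative name
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: example.org/kube-apiserver:v1.6   # placeholder image
    command:
    - kube-apiserver
    - --etcd-servers=http://127.0.0.1:2379   # illustrative configuration
```

The pattern is the same self-replicating behavior Fox described: the bootstrapping component, once running, becomes just another managed resource of the system it brought to life.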
With a Red Hat, a Google, a Huawei, a SUSE, a Verizon, and a Mirantis engineer among the dozens in the room sitting down and taking notes, it looked as though Kevin Fox were a true OpenStack leader. Maybe his company is an OpenStack leader too. Fox is an admin at the Pacific Northwest National Laboratory, the clean energy research arm of the U.S. Department of Energy.
Who leads OpenStack? The fellow who stands up and claims the lead, that’s who.
Photos by Scott Fulton.