OpenStack Summit Boston: Where Kubernetes Ends and OpenStack Begins

May 9th, 2017 1:04pm
Photos by Scott Fulton.

Why are OpenStack and Kubernetes interchangeable parts from some people’s perspectives, and potential companions in application management for others? The answer may lie in who’s asking: developers or system administrators.

“This is the club sandwich part: There are my Kubernetes and developers’ Kubernetes, and never the twain shall meet,” declared Eric Wright, a solutions engineer with Toronto-based Turbonomic, speaking this week at the OpenStack Summit in Boston. Wright is a network engineer, not a developer.

“I’m going to do things that are going to affect the Kubernetes environment. They’re going to do things that should affect their environment. Maybe they’ve got five different pools in which they want to deploy. They want to test the way their Kubernetes deployment is because they want to go multi-cloud. So five sets of Kubernetes on top of OpenStack, so that I can deploy a multi-cloud simulation, so when they go to GCP or wherever, they’re comfortable that the applications are going to work there.”

“Exposition and Consumption”

“Why combine the infrastructure-as-a-service and the application tier?” rhetorically asked Stephen Gordon, a principal product manager for Red Hat, speaking Monday afternoon during a breakout session at OpenStack Summit. “The way I think of it is in terms of exposition and consumption of resources.

“Traditionally, the Linux kernel has been responsible for taking CPU, disk, and memory, and exposing that to you for consumption through user space processes,” Gordon continued. “When I scale that out to a distributed system, I still need something to provision systems, and expose their resources. Those may be hardware or virtual, increasingly, when I think about software-defined networking, for example. Then Kubernetes is what allows me to have a translation layer that effectively communicates between the application and the underlying infrastructure, without my application itself having to be tied to that infrastructure.”
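
To make Gordon’s “translation layer” idea concrete, here is a minimal sketch (a hypothetical illustration, not something shown at the session) using the official Kubernetes Python client: the application declares only what it needs, and the scheduler decides which underlying machines, virtual or bare metal, satisfy the request.

```python
# Hypothetical sketch: the application states its needs (CPU, memory, replicas);
# Kubernetes translates them onto whatever infrastructure backs the cluster.
from kubernetes import client, config

config.load_kube_config()  # read the local kubeconfig for cluster access

container = client.V1Container(
    name="web",
    image="nginx:1.21",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "256Mi"},  # what the app asks for
        limits={"cpu": "1", "memory": "512Mi"},       # the most it may use
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# The scheduler, not the application, picks the nodes that satisfy these requests,
# whether those nodes are OpenStack VMs or bare-metal servers.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Nothing in this manifest names a hypervisor, a network fabric or a storage backend; that is the separation between exposition and consumption that Gordon described.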

While Red Hat sees many of its customers running OpenShift, Red Hat’s commercial distribution of Kubernetes, within virtual machines, including in the testing phase, Gordon said he believes this is a temporary state of affairs. He told attendees to pay close attention to Wednesday’s keynote sessions, where engineers are expected to demonstrate dividing what he described as the OpenStack “monolith” into the projects that remain useful and those that are less so, with the likely survivor being Kubernetes managing applications on bare metal, in conjunction with OpenStack Neutron and OpenStack Cinder.

“The model is a little bit in conflict with what Eric [Wright] was talking about, the ‘sandwich,’” Gordon continued, “maybe a little bit different way of thinking. Maybe I’m not building a sandwich with Kubernetes / OpenStack / Kubernetes. Maybe I’m building a better compute tool which might be managed by something like Ironic, some of which is running OpenStack for the purposes of running VMs, some of which is running Kubernetes directly on bare metal, but potentially using some of those shared services to communicate in a complex application.”

Tomorrow’s Legacy

Turbonomic’s Eric Wright advocates a network environment where Kubernetes, the open source container orchestration engine, is wrapped around every component capable of running an application, from first-generation virtual machines to bare-metal servers. All of those components are served by a common control plane established and managed by OpenStack. Above the OpenStack layer, the appearance of a conventional application environment can then be established for running almost any kind of software, of any age.

“You call it legacy. I call it production,” said Wright.

“We have to be very careful when we start throwing the word ‘legacy,’ because trust me,” he continued, “in two years, there’s gonna be, ‘Remember your containers’ legacy environments?’ Trust me, it’s coming.”

Wright was making a much deeper implication, one that seems to be based on extensive personal experience, and one arguably more important than the architectural barriers between OpenStack and Kubernetes. The architecture of an organization’s networked applications is like a canyon carved by rivers: the applications running today shape the terrain that any future platform will have to flow through. We may be talking about neither Kubernetes nor OpenStack three years from now, but the way the applications we build today work will most likely need to be preserved by whatever platform supports them then.

In the meantime, Wright gave three reasons why Kubernetes may be the right component for wrapping applications belonging to an OpenStack private cloud. As we mentioned earlier this week, reason #1 is that an organization does not have the spare hardware lying around to construct a new staging cluster for a production environment every time a developer project graduates from the testing phase.

“You are not using all of your infrastructure today. That is a fact,” he declared. “You’re probably using 15 to 20 percent of your CPU, if you’re lucky; you’re using about 90 percent of your memory. That’s where people really dig in hard. But there are a lot of resources that are unused in your environment. So why wouldn’t you give some of that out, to let Kubernetes do what it does — give that ability to developers to consume it.”

Thus, his second reason is to drive utilization, particularly in those areas where it is poorest. The third involves what he describes as an “on-ramp” that makes resources not just easier for users to consume, but consumed in the more secure, preferred way that operators would prescribe.

“Then you become comfortable with both sides of that environment,” he went on, “because you know who’s using it, you know how to deploy it, and life is good. But we have a problem as an industry and as a community, because now we have two communities that we’re dealing with, and each community has its own challenges.”

It was a problem addressed later in the day by Rackspace distinguished architect Adrian Otto, during a panel discussion about — what else — the intersection between OpenStack and Kubernetes.

From left to right: OpenStack Foundation interop engineer Chris Hoge; Rackspace distinguished architect Adrian Otto; Comcast chief cloud infrastructure architect Jonathan Chiang; CoreOS training director Tony Campbell; Platform9 chief architect Bich Le.

“When you have a toy application, you can run it on anything you want,” said Otto. “Then when you have a real application that needs to manage infrastructure, you realize that container orchestration software — like [Docker] Swarm, Kubernetes, and [Apache] Mesos — are not designed to manage infrastructure to the extent that OpenStack is.”

Two years ago, Otto was the principal advocate for OpenStack’s Magnum component — part of the platform’s initial effort to extend itself into the emerging world of container management. At that time, he told us, containerized applications “need a dedicated service that has an API intended only for the exclusive use of containers.” Now, it appears, Kubernetes has stepped in to fill that role.

I asked Otto and the other members of the panel whether Kubernetes has actually relieved OpenStack of some of the responsibility for managing containerized applications.

“I for one am relieved,” Otto responded with a sigh.

“I founded a project called OpenStack Solum, which was pre-Kubernetes, and it was designed to try to address this gap. And it’s a tough gap to fill. I think that, if OpenStack expanded its ambitions to try to solve all of those issues, as well as all of the infrastructure-related ones, that focus would be diverted too much. I didn’t always feel that way, but I feel that way now, and I’m glad.”

Here are two principal open source components of the modern data center that are starting to make way for one another in distributed systems. What isn’t exactly clear just yet is in which way, or ways, each will cede ground to the other.

TNS owner Insight Partners is an investor in: Docker.