TNS Makers: Kelsey Hightower on OpenStack’s Evolution and Intel’s Clear Containers
Ideally, the cloud is for everyone — offering a set of APIs for accomplishing what you want with the underlying compute, whether you’re a cloud provider or running your own data centers. This is the world that OpenStack’s mission hints at, and for Kelsey Hightower, a core Kubernetes contributor, it’s a world that he sees drawing closer, thanks to Intel’s recently launched Clear Containers initiative.
In this episode of The New Stack Makers podcast, Hightower discusses his latest perspectives on OpenStack and containers with TNS Editor-in-Chief Alex Williams. The two also delve into the promise of Clear Containers.
Williams is in Tokyo for the OpenStack Summit, where he will be talking with Intel over a bento lunch session Tuesday. They will discuss OpenStack, open source and the other factors that are making cloud building increasingly viable for companies creating an underlying layer of compute that connects to a set of APIs. The lunch is free for all OpenStack attendees.
This podcast is also available on YouTube.
“When OpenStack started, its goals were to provide an EC2-like environment for anyone, anywhere,” Hightower said. “That world has evolved. Even Amazon now has its own container service, ECS, and its own container registry. OpenStack also needs to continue to evolve. The whole AWS platform has grown significantly over time. I think it’s safe to say that OpenStack hasn’t kept up with that.
“AWS has its own vision for what the cloud will be — a combination of IaaS and a bunch of SaaS offerings around that. Intel has the scale and the community to advance what we’re starting to see as a trend in infrastructure: containers, and cluster management tools like Mesos and Kubernetes. They’re going to do a lot of work to help educate people, and also contribute code where necessary, to make sure that everyone who wants to run a cloud-like environment has the ability to do so.
“Clear Containers was a really clever move by Intel. You see a lot of demand for the usage of containers; right when we saw that spike, people wanted containers to replace virtual machines. In some cases they can, but there’s one sticking point, which comes down to multi-tenancy. If you’re running your own servers, you have less to worry about from someone else running an application side-by-side that may be doing something malicious. You still want to be careful about the software you run. True multi-tenancy, though, means you’re going to have to be able to support arbitrary workloads from outsiders. In that case, the security needs to be at the level we’ve reached with virtual machines. Even though that’s not a hundred percent secure, there is a pretty solid story around it, as proven by most cloud providers.”
Intel’s development makes it possible to readily boot a specialized Linux distro that can run a Docker image in a container, and thereby create, according to Hightower, “this world where containers run inside specialized VMs, and immediately take advantage of all the advances in virtual machines that we have.”
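As a concrete sketch of what running containers inside these specialized VMs can look like: Clear Containers later shipped an OCI-compatible runtime (`cc-runtime`) that plugs into Docker as an alternative to the default `runc`. The configuration below is illustrative and assumes `cc-runtime` is installed at `/usr/bin/cc-runtime` on a systemd-based host.

```shell
# Register the VM-backed OCI runtime with the Docker daemon
# (assumes cc-runtime is installed at /usr/bin/cc-runtime)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "cc-runtime": { "path": "/usr/bin/cc-runtime" }
  }
}
EOF
sudo systemctl restart docker

# Each container now boots inside its own minimal KVM guest,
# while the Docker workflow stays unchanged:
docker run --runtime=cc-runtime -it alpine sh
```

The point of the design is that the image format, registry and CLI are untouched; only the runtime underneath swaps a shared-kernel namespace for a lightweight virtual machine.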
So, does Clear Containers help solve the one-host problem in container management? Hightower offers a qualified “yes.”
Yes, “… if you have a good virtual machine management platform around KVM, which is the primary implementation of Clear Containers; if you continue to use your cross-host networking solutions and your storage solutions; and if you allow Docker containers to be married into that world. But I see that as a temporary solution.
“How do we take the new world and integrate it with the old world, to allow people to keep doing what they’re doing — especially the way people are packaging containers today, where in many cases you’ll see almost an entire operating system inside these containers? It makes sense to envision them looking and working like virtual machines. But I think that’s temporary.”
Containers on the Chip?
The major crossover for virtualization technology was when it appeared on the chip. Is a similar evolution in store for container processes?
“I think that’s where the game change is going to actually happen. From a performance perspective, I think containers have already given us that boost, mainly because we don’t have to go through this virtualization layer. You have raw access to underlying devices, file systems, CPU and memory,” Hightower said.
“Containers go hand-in-hand with these cluster managers that really take advantage of the way people want to deploy containers. You want to treat the node as this abstract set of resources, and put something on top. The thing we put on top will need a little bit more visibility from the system. This is where I think Intel will help a lot, by being able to expose a lot of that telemetry data and then providing some security boundaries that we may not be able to do at the OS level.
“When you look at the OpenStack platform, there’s a lot of tooling there around networking — which is something that you need in both worlds, whether it’s virtual machines or containers. There’s tooling around storage — again, something you would share between both of those ways that you choose to carve up your infrastructure. OpenStack itself has a lot of the primitives required for both worlds, but there’s nothing stopping OpenStack from evolving — and even diverging — from Amazon EC2 in a way that will make it a little more viable.
“OpenStack can evolve into a world where you have this OpenStack controller that you point at your data center, and you end up with a set of APIs for deploying applications. That’s what people want,” Hightower said.
“If you’re a cloud provider, you may want OpenStack to be your EC2 clone — where you have policy, and a dashboard, and you give out virtual machines, and you have a way of billing people.
“I see the potential of OpenStack in staying in tune with what the industry and the community want to do, which is run applications over a set of hardware, and in leveraging those open source projects that jibe well with that mission.
“OpenStack adds a lot of value in terms of getting machines to the point where they need to be, getting the network in the shape that it needs to be, and then offering value on top such as the GUIs, dashboards, policies, and other things that will complement a system like Kubernetes well.
“I honestly think the biggest problem that OpenStack faces today is that it doesn’t have its own identity. That may be by design, but it’s one of the downfalls of starting your life as a clone of something else — it’s very hard to capture the full vision of why AWS works the way it does. It can even come down to just people. You don’t have the standardization of only having to support one platform, which is the Amazon cloud environment. Now you have to try to please any number of organizations; they want to do things a certain way; you have voting rights, boards, foundations. There are a lot of things that get in the way of just shipping software to a target market.
“For OpenStack the challenge will be: can OpenStack become convenient enough to use, and deliver on the original promise, faster than it becomes a no-brainer to just run on someone else’s infrastructure, aka the cloud?”
Docker and Intel are sponsors of The New Stack.