How CRI-O Would Put Kubernetes at the Center of the Container Ecosystem

The open source CRI-O project, formerly known as OCID, seeks to enable the open source Kubernetes orchestrator to manage and launch containerized workloads without relying on a traditional container engine.
The software could help DevOps professionals manage the full “container lifecycle” by interfacing with Kubernetes, or with a commercial implementation of Kubernetes (such as CoreOS Tectonic), by means of a Container Runtime Interface (CRI) being developed by Kubernetes engineers led by Google.
Developers need container engines to create and build container images and may prefer to use their own staging environments for local testing. Administrators and operations teams, however, might find the emerging Kubernetes stack — the orchestrator, the CRI, and CRI-O — more suitable for managing complex production environments than pairing the orchestrator with a standard container engine.
This project makes the container orchestration tool, not the container engine, the chief component of the container stack. The CRI, its contributors tell us, would allow Kubernetes to use any container engine that is compliant with Open Container Initiative specifications, including OCI’s own runc engine, which can do many of the things a branded container engine like Docker or CoreOS’ rkt can do, including pulling images from a registry, though it won’t build images from a Dockerfile.
What CRI-O Is… Today
Though the Open Container Initiative, for its part, has distanced itself from responsibility for CRI-O (even though its members and the contributors to CRI-O are in many cases the same vendors and the same people), the project is “the natural progression of OCI,” which is developing a standard interface for container runtimes and images, said Google staff developer advocate and lead Kubernetes engineer Kelsey Hightower in an interview with The New Stack.
The CRI-O project’s principal assertion is that users shouldn’t have to rely upon the engine that creates the workload to stage it. As originally envisioned, the project would give Kubernetes the tools it needs to serve as the complete lifecycle manager for containers, without any need for Docker, rkt, OpenShift, Photon, or any branded container engine whatsoever.
“We don’t really need much from any container runtime — whether it’s Docker or rkt, they need to do very little,” said Hightower. “Mainly, give us an API to the kernel. So this is not just about Linux, right? We could be on Windows systems. And if that’s the direction the community wants to go in, we need Kubernetes to work to support these ideas — because it’s bigger than Docker Inc., at this point.”
Underlying that assertion is the assumption that the orchestrator lies at the center of the container ecosystem, and that the “engine” as we have come to know it is really a development tool.
On the other hand, CRI (the API being developed by and for Kubernetes) would give container engine makers the opportunity to implement an open interface to Kubernetes, so that environments that do include a container engine can make the appropriate connections. These connections, says a key Google engineer, may be made without the container engine vendor having to “refactor” that engine to achieve Kubernetes compatibility.
Instead, a layer of abstraction called a shim may be inserted between the container engine and the orchestrator. How vendors implement such a shim would be up to them.
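To illustrate the idea in a simplified, hypothetical form (the type names, method names, and signatures below are illustrative, not the actual CRI contract), a shim is essentially a thin translation layer: it accepts a request phrased in the orchestrator’s terms and re-expresses it in the engine’s own vocabulary, so the engine itself needs no Kubernetes-specific code. A rough sketch in Go:

```go
package main

import (
	"context"
	"fmt"
)

// engineClient stands in for whatever API a vendor's container engine
// already exposes; it is purely hypothetical.
type engineClient interface {
	FetchImage(ctx context.Context, ref string) error
}

// criShim sketches the translation layer: it receives a CRI-style request
// and maps it onto the engine's native call, leaving the engine untouched.
type criShim struct {
	engine engineClient
}

// PullImage mirrors the kind of image operation an orchestrator asks for.
// The method name and signature are simplified, not the real CRI contract.
func (s *criShim) PullImage(ctx context.Context, imageRef string) error {
	if err := s.engine.FetchImage(ctx, imageRef); err != nil {
		return fmt.Errorf("pulling %s: %w", imageRef, err)
	}
	return nil
}

func main() {
	fmt.Println("illustrative sketch; a real shim would serve these methods over gRPC")
}
```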
When completed, the CRI API (the part that connects with Kubernetes) will delegate more container lifecycle control to the kubelet — the manager of a pod, a concept exclusive to Kubernetes — and will have to be adopted by the container ecosystem.
The goal for the next release of Kubernetes, version 1.5, is to include a finalized CRI to enable a kubelet to communicate with Docker, rkt, China-based container provisioning platform Hyper.sh, and CRI-O — whose development is being led by Red Hat.
“There’s a bunch of different container runtimes, all interested in communicating with Kubernetes,” said Brandon Philips, CoreOS’ chief technology officer. “So instead of trying to build every single interface for each container runtime into the kubelet itself, we’re creating a more abstract interface that other people can plug into, without being directly involved in Kubernetes upstream work.”
Who Refactors What
Hightower described the Container Runtime Interface (the “CRI” before the “-O”) as an abstraction that represents the basic features that a container engine should support, for Kubernetes to certify it. Once the CRI is complete, he said it’s Kubernetes’ plan to refactor its own code base to implement the CRI.
If CRI-O succeeds, he explained, any producer of a container engine would not have to make modifications to the code base of that engine, simply to interoperate with Kubernetes.
“Right now, if you want to play nice with Kubernetes, you’re going to have to build a bunch of things, and probably modify the way you do things today, in a very unclear way,” Hightower admitted. “You’ve got to go look in the code base today to figure out, this is what we did for Docker, how do you modify that for your runtime engine in a way that works well for you, and also plays well with Kubernetes.”
As CoreOS’ Philips explained, each of the container engines will utilize a shim — a component that translates API requests from the engines’ native lexicons, into a form that Kubernetes may digest.
“Because of how the CRI works, you need a gRPC daemon that is listening for the requests,” said Philips, “that can communicate with the kubelet.” In turn, he said, the kubelet will send a remote procedure call back over a socket to whatever engine implements the CRI.
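In concrete terms, and as a rough sketch only, that means a CRI shim boils down to a gRPC server listening on a local socket that the kubelet dials. The socket path below is illustrative rather than prescribed, and the actual service definitions come from the Kubernetes CRI protobuf files, which are referenced here only in comments:

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	// The kubelet dials the runtime over a local socket, so a CRI shim is,
	// at its core, a gRPC daemon listening on that socket. The path below
	// is illustrative; each runtime picks its own.
	const socketPath = "/var/run/example-cri.sock"

	lis, err := net.Listen("unix", socketPath)
	if err != nil {
		log.Fatalf("failed to listen on %s: %v", socketPath, err)
	}

	srv := grpc.NewServer()

	// A real shim would register the RuntimeService and ImageService servers
	// generated from the Kubernetes CRI protobuf definitions here, with each
	// handler translating the kubelet's request into the engine's native API:
	//
	//   runtimeapi.RegisterRuntimeServiceServer(srv, &myRuntime{})
	//   runtimeapi.RegisterImageServiceServer(srv, &myImages{})

	log.Printf("CRI shim listening on %s", socketPath)
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("gRPC server error: %v", err)
	}
}
```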
“The existing Docker and rkt support is being pulled out into CRI interfaces,” Philips explained. CoreOS’ rkt implementation of CRI is currently available on GitHub as rktlet. He expects both rktlet and whatever Docker’s implementation ends up being called to be internally refactored into CRI.
While Docker already requires a shim to work with Kubernetes, Google’s Hightower told us, it was Kubernetes’ engineers who produced that shim, not Docker’s. Regardless of who will implement the CRI shim, said Philips, Docker will be refashioned to cooperate along with everybody else.
“Changes are happening in both the integration of Docker Engine and the rkt engine, in order to integrate with CRI” – Brandon Philips, CoreOS
The final standard for the OCI image format is still being determined, although an OCI spokesperson has informed The New Stack that two more release candidate iterations remain before the OCI image format can be generally released as version 1.0.
In the meantime, Docker continues to augment its container engine, bundling in features such as its own Swarm orchestrator and service discovery.
“I think that is all good and well,” Hightower said. “Of course, people may not like that — that’s okay, everyone’s allowed to have their opinion. Kubernetes — we also provide a bunch of things. But we tend to believe we’re just going to do it on top of what we consider a commodity.”
Kubernetes and Beyond
“There’s a lot of things that you need to know in order to implement what we call a pod correctly,” Hightower explained. “And pushing that burden down to every container runtime is unfair to all those container runtimes, to have to implement that much code specifically to have fun with Kubernetes. Think about it: They’re going to have to do something different for Mesos, something different for Swarm — whatever. To make that easier, we’re going to bring in the Kubernetes-specific logic and leave that inside of the kubelet where it belongs. And on the outside, we’ll just use something that’s friendly to what the native container runtimes already do.”
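A hypothetical sketch of that division of labor follows, with deliberately simplified method names and signatures rather than the actual CRI definitions: the kubelet keeps the pod-aware sequencing, and a runtime only has to answer a handful of primitive requests.

```go
package main

import "fmt"

// Runtime is a simplified, hypothetical stand-in for the primitives a
// container runtime would expose; the real CRI method set is larger and
// defined in the Kubernetes protobuf files.
type Runtime interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, image string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod shows the kind of pod-aware sequencing that stays inside the
// kubelet: create one sandbox per pod, then create and start each of the
// pod's containers inside it. The runtime never needs to know what a pod is.
func startPod(rt Runtime, podName string, images []string) error {
	sandboxID, err := rt.RunPodSandbox(podName)
	if err != nil {
		return fmt.Errorf("creating sandbox for %s: %w", podName, err)
	}
	for _, img := range images {
		id, err := rt.CreateContainer(sandboxID, img)
		if err != nil {
			return fmt.Errorf("creating container from %s: %w", img, err)
		}
		if err := rt.StartContainer(id); err != nil {
			return fmt.Errorf("starting container %s: %w", id, err)
		}
	}
	return nil
}

func main() {
	fmt.Println("illustrative sketch; the kubelet drives this sequence over the CRI")
}
```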
Assuming that’s exactly what happens, an interface that is friendly to the existing containerization vernacular could abstract pod-oriented, kubelet-based logic in such a way that the same API could interface with something other than Kubernetes, abstracting its own logic in a different way.
We explored this possibility with Mesosphere founder Ben Hindman.
“What I think the industry is really looking for is, components that can be composed,” Hindman explained to The New Stack. “And I think that, in Kubernetes’ case, this is really critical. Kubernetes was relying on Docker to do container management, and they were trying to build in orchestration. When Docker merged Swarm in, now they had a container manager that was also doing orchestration. So just from an architectural perspective, wearing my engineering hat for a second, I think it’s very reasonable to want to say, ‘Hey, wouldn’t it be great if we just had the component that was doing container management… that multiple people could potentially leverage?’”
Hindman credits Docker, Inc. with having had the initiative to make runc an open standard. But full orchestration requires more than just interoperation with the runtime, he said.
“There’s more to it. There’s downloading the image, unpacking the image — there’s more things that have to actually get done,” Hindman said. “And to me, I think what has been a big debate in the industry is, should that stuff also be factored out and componentized or not? It’s less about a fork, and more about what makes sense architecturally.”
Mesosphere’s DC/OS environment already has these components laid out, Hindman explained, without having to rely upon runc or any Docker component. The true objective for the container community, as he spelled it out, should be to designate the architectural boundaries between components and establish the proper interfaces between them.
Does this mean Mesosphere supports CRI-O, whose objectives — as Kelsey Hightower explained to us — appear completely compatible with what Hindman projected?
While Hindman does not speak for the OCI, it’s important to note here that Mesosphere is one of OCI’s founding members. As Hindman responded, OCI’s original purpose was to develop a common runtime format in such a way that runc could launch it as a container. The containerization community also cared about the image format, which involves the file system and metadata for containers when they’re at rest. So OCI has taken up that cause as well. “That’s actually more of interest to us,” Hindman said, “than the runtime format.”
Mesosphere embarked upon its so-called “universal containerizer,” Hindman continued, to enable it to produce containers in all open formats, including the OCI format.
But in such an optimum architecture, there may not be a way to standardize the scheduling of workloads, he said. The features of schedulers are simply too different from one another. As a result, efforts to date at finding a single configuration file, metadata file, or manifest that describes workloads in such a way that any scheduler could make full use of it for deployment and launch end up with what Hindman calls “a lowest-common-denominator specification” that precludes its use by a scheduler with a broader feature set.
Deciding upon a common image format, however, is a much simpler matter, he said. It comes down to whether Linux supports the format. “If Linux supports it, we can expose it. I don’t think there’s much of a debate over what that image format might want to look like; therefore, treating it as a standard is totally fine.”
Mesosphere will continue to support OCI, Hindman concluded, and will actually support CRI-O (“OCID,” at the time of our discussion) to the extent that it supports OCI. But Mesosphere’s “universal container runtime” will accomplish this support in a different way than will CRI-O.
And that leaves us looking towards a competitive market around the manner of orchestration, rather than what’s being orchestrated.