CRI-O, the Project to Run Containers without Docker, Reaches 1.0
The open source project CRI-O, formerly known as OCID (short for “OCI daemon”), has reached 1.0 status. It enables the Kubernetes open source container orchestration engine to run containers without relying on the default Docker runtime.
So far, CRI-O works with runc and Intel’s Clear Containers as the container runtimes but is designed to allow any OCI-compliant runtime to be plugged in.
The project “opens the door for plugging alternative container runtimes in the kubelet more easily, instead of relying on the default docker runtime. Those new runtimes may include virtual machines-based ones, such as runv and Clear Containers, or standard Linux containers runtimes like rkt,” Red Hat senior engineer Antonio Murdaca wrote on the Project Atomic blog.
As originally envisioned, the project would enable Kubernetes to be the complete lifecycle manager for containers, without the need for any branded container engine.
The founding team wanted to enable a kubelet to communicate with Docker, CoreOS’ rkt, China-based container provisioning platform Hyper.sh, CRI-O and others.
“There’s a bunch of different container runtimes, all interested in communicating with Kubernetes. So instead of trying to build every single interface for each container runtime into the kubelet itself, we’re creating a more abstract interface that other people can plug into, without being directly involved in Kubernetes upstream work,” Brandon Philips, CoreOS chief technology officer, told The New Stack previously about the Container Runtime Interface (CRI). It’s a plugin interface that gives kubelet the ability to use different OCI-compliant container runtimes without the need to recompile Kubernetes.
The CRI-O project resides in the Kubernetes incubator and involves contributions from IBM, Intel, SUSE and others. It supports the OCI format and the OCI runtime, but it’s not part of the OCI project per se.
The project has been viewed as evidence of a split in the container ecosystem, although those involved in it maintain it’s not a “Docker fork.” But tension within the community rose when Docker made its own orchestration engine, Docker Swarm, a part of its Docker Engine.
“Docker was heavily integrated into Kubernetes, and vice versa. They relied heavily on specific versions and that was kind of a rough interface,” said Joe Brockmeier, senior evangelist, Linux containers at Red Hat.
“They both moved at different paces. A new release of Docker might break Kubernetes … With Docker trying to innovate and change things, it became [clear] that there needed to be a way for Kubernetes to talk to the container runtime in a way that allowed the container runtimes to move at their own pace while remaining compatible with Kubernetes and allowing Kubernetes to work at its own pace.”
Red Hat’s Daniel Walsh wrote in the release blog post: “We felt at the time that the upstream Docker project was changing too quickly and was making Kubernetes unstable. We felt that perhaps by simplifying the container runtime we could do better.”
The project’s goals, he said, were to be lighter weight than other container runtimes, to have a smaller footprint, and to offer better performance for Kubernetes than other container runtimes.
CRI-O can pull images from any container registry, and handles networking using the Container Network Interface (CNI) so that any CNI-compatible networking plugin should work with it.
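To make the CNI side concrete, a plugin is driven by a small JSON configuration file. The fragment below is an illustrative bridge-style config (the name, bridge device, and subnet here are assumptions for the example, not values mandated by CRI-O); any plugin that understands this format should work:

```json
{
  "cniVersion": "0.3.1",
  "name": "crio-bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16"
  }
}
```

Swapping in a different networking provider is then a matter of dropping in a different config and plugin binary, with no changes to CRI-O itself.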
When Kubernetes needs to run a container, the kubelet speaks to the container runtime through the CRI-O interface. CRI-O then talks to its container image library and storage library to pull an image and get it ready, sets it up on storage, and coordinates with runc (or another OCI-compliant runtime) to start that image. When Kubernetes needs to stop the container, CRI-O handles that, too, Brockmeier explained.
“Why is this interesting? And our response is that in a way it’s not. It’s boring,” he said. “It’s something as a user you don’t have to worry about. The people who want to do container orchestration don’t really care at that level. If you’re running a script on a Linux box, you don’t really care if the individual box is set up to use KornShell or Bash, you’re going to write a script to run across all of them. If you have an OCI-compatible container, you just care that Kubernetes can run it.
“Docker has innovated and added a bunch of stuff that’s way above that, but for Kubernetes, is way overkill. It runs the risk of complicating running containers for Kubernetes.”
Not a Developer Tool
In a blog post, Brockmeier explained that CRI-O is not a developer tool for building images.
While CRI-O does include a command-line interface (CLI), it’s provided mainly for testing CRI-O and not really as a method for managing containers in a production environment.
The first version of CRI-O is based on Kubernetes 1.7, because that’s the version the next release of OpenShift is based on, he said. Red Hat plans to do some testing with OpenShift Online, and if that goes well, customers may see it on OpenShift Online in production. It will be a technical preview in OpenShift 3.7, he said. Future versions of CRI-O will match the version number of Kubernetes that they support.
Though the project has its roots with Red Hat and Google, it’s encouraging other potential contributors to get involved.