Q&A: Docker’s Michael Crosby on How Libcontainer Enabled Kubernetes

A Q&A with Docker's Michael Crosby about libcontainer.
Aug 5th, 2019 1:17pm by T.C. Currie

Software engineer Michael Crosby started working on the core team at Docker in 2014. He is still an engineer at Docker, working on libcontainer, the library that interfaces with the Linux kernel's container facilities, and on the runc container runtime, as well as overseeing the open source community around those projects. In 2018, he was elected chairman of the Open Container Initiative (OCI) Technical Oversight Board.

Early in Docker's 0.x releases, Crosby took charge of libcontainer, a Go library that gives Docker a direct, dependency-free interface to the Linux kernel's container facilities. Originally written to stop Docker from breaking every time LXC shipped an update, libcontainer was open sourced early on. Crosby's self-contained design was perfect for Kubernetes and other projects that were then able to build on top of libcontainer. Libcontainer is now part of runc.

At DockerCon 2019, we sat down with Crosby to talk about libcontainer's beginnings and his work over the years on one of the most well-known open source projects of the last five years.

What did you find in those early years that led to the development of libcontainer?

We didn't really have a roadmap. Back then there was just [Docker Founder] Solomon Hykes' vision. We had a lot of freedom, and I noticed we were having issues release after release with LXC. Things would break every time there was an update, so I just went off and wrote libcontainer.

What was the immediate goal for libcontainer at the time? 

It was to remove the dependency on LXC. So whenever you deployed Docker, you got a static Docker binary and you were good to go. There were no other dependencies at the time.

How did that fit into the real scale-out after 1.0?

At the time, within Docker, there were a lot of new concepts and features coming out. The way libcontainer was architected, you could share a network namespace among multiple containers.

At the time, we didn't know what pods or Kubernetes were, but the new architecture enabled them and led to those types of things happening.

And the new architecture was?

It was just how the library was created. LXC was very much, "here's a config, do this." Libcontainer was, "here are some parameters for how to create a container," and you could stitch those together more easily, so you had more flexibility to start stitching multiple containers in a pod together and things like that.

People started thinking, "Now that we can do this, how do we make pods, and where do we do that?" So I got others in the community thinking about how to handle it.
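Crosby's point about parameterized creation is easier to see in code. Below is a hypothetical sketch using the configs package that libcontainer ships as part of runc today; the rootfs path and the PID in the namespace path are invented for illustration, but pointing several containers at the same network namespace path is exactly the kind of stitching that pods later formalized.

```go
package main

import (
	"fmt"

	"github.com/opencontainers/runc/libcontainer/configs"
)

func main() {
	// Instead of a monolithic LXC-style config file, libcontainer takes a
	// set of parameters describing the container to create.
	config := &configs.Config{
		Rootfs: "/var/lib/containers/app/rootfs", // hypothetical path
		Namespaces: configs.Namespaces{
			{Type: configs.NEWNS},  // fresh mount namespace
			{Type: configs.NEWPID}, // fresh PID namespace
			// Join an existing network namespace instead of creating one.
			// Giving several containers the same path here is the building
			// block that pod-style grouping relies on.
			{Type: configs.NEWNET, Path: "/proc/4026/ns/net"},
		},
	}
	fmt.Printf("container shares network namespace: %s\n", config.Namespaces[2].Path)
}
```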

And how did that then cycle back into your work?

I was chief maintainer of Docker for a while. As the scale ramped up, I focused on the long-term aspects. I had libcontainer, then the filesystems, and then once we did the OCI, I worked on a lot of the standards at the time.

Docker is a super active open source project, so I was familiar with our pull requests, issues, and community support. With OCI, it was learning more about open governance and getting things through the Linux Foundation. It was myself and some other people at Docker who helped set up the governance; then the foundation started, and the initial contribution was libcontainer.

With libcontainer, there was a specification for how containers were created, and that was carried over into the runtime spec that we have today. So there was the spec, and then the runc code, which is all built on libcontainer; those were the two initial products in the OCI.
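For a sense of what that runtime spec looks like, here is a minimal sketch built with the OCI's published Go types (github.com/opencontainers/runtime-spec/specs-go); the rootfs path and process arguments are placeholders, not recommendations.

```go
package main

import (
	"encoding/json"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// A minimal spec: a read-only rootfs, a single process, and the
	// namespaces the container should receive.
	spec := specs.Spec{
		Version: specs.Version,
		Root:    &specs.Root{Path: "rootfs", Readonly: true},
		Process: &specs.Process{Args: []string{"/bin/sh"}, Cwd: "/"},
		Linux: &specs.Linux{
			Namespaces: []specs.LinuxNamespace{
				{Type: specs.PIDNamespace},
				{Type: specs.MountNamespace},
				{Type: specs.NetworkNamespace},
			},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	enc.Encode(spec) // roughly the shape of the config.json that runc consumes
}
```

Serialized to JSON, a document of this shape is what runc reads at container creation; `runc spec` generates a similar default.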

How were you participating with other people in the community at that point? There were the Docker maintainers, but there were also a number of other parties involved.

The initial code came from Docker, then we had a lot of interest from other companies within the community. As we got consistent contributions, some people became maintainers within OCI and received write access. We grew that team and over the years included people not at Docker. There’s a small community of people that have all worked on the libcontainer runtime code. We know each other, we’ve worked together a long time, so a lot of those people are maintainers within OCI today.

What were the tools you were building internally to help with the level of contributions you were getting?

Originally, Docker's continuous integration ran on my personal computer. I had a cloud instance set up, and then we got teams to build out the infrastructure that runs all the tests on every pull request someone makes today.

What were some of the efficiencies that you started to see in OCI that helped take Docker to the next step?

With the OCI, we were starting to get a lot more contributions from companies, so we needed a standard that everyone could rally around and start building on top of. Once we got the OCI standards locked in with the 1.0 runtime spec, [we] started seeing things like [Amazon Web Services'] Firecracker and other projects. That spurred a wave of innovation at the runtime level.

After we donated our specifications for how the registry works and what the image format is, we started seeing various image registries, like Google Container Registry, as well. The standards spurred additional pieces that people can build on.
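To illustrate the image-format side of that, here is a sketch of an OCI image manifest assembled with the published Go types (github.com/opencontainers/image-spec/specs-go); the digests and sizes are placeholders, not real content addresses.

```go
package main

import (
	"encoding/json"
	"os"

	imgspecs "github.com/opencontainers/image-spec/specs-go"
	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// A manifest points at a config blob and an ordered list of layer
	// blobs, all addressed by digest, so any compliant registry can serve it.
	manifest := v1.Manifest{
		Versioned: imgspecs.Versioned{SchemaVersion: 2},
		MediaType: v1.MediaTypeImageManifest,
		Config: v1.Descriptor{
			MediaType: v1.MediaTypeImageConfig,
			Digest:    "sha256:0000000000000000000000000000000000000000000000000000000000000000",
			Size:      1470,
		},
		Layers: []v1.Descriptor{{
			MediaType: v1.MediaTypeImageLayerGzip,
			Digest:    "sha256:1111111111111111111111111111111111111111111111111111111111111111",
			Size:      2806054,
		}},
	}
	json.NewEncoder(os.Stdout).Encode(manifest)
}
```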

What’s your reflection now, back on those times? It was a more difficult time in the container ecosystem; there were a lot of market pressures. 

I think saying "no" is an issue for all open source projects. As maintainers, saying no to a contribution is hard because people can take it in different ways. As long as you give a clear explanation of why you're rejecting it, most people take it well.

Then you can get some people within the community who say, "Oh, they're rejecting this because of the company I'm part of." As an open source maintainer, every time I've said no, it's been because the contribution was poorly written, badly designed, or didn't fit well within the overall architecture of the project, and I think most maintainers work and feel the same way when accepting or rejecting contributions. So there's always a balance with external contributions, and a lot depends on how the contributor handles it.

What are you working on now?

Containerd. It's something I started probably a little bit before Kubernetes was out. I think we were sitting around at dinner one night, and I said, "Wouldn't it be cool if we had a small runtime just for orchestrators like Swarm to use?"

That was when we were talking about Swarm back in the day, and so I wrote containerd around this idea of a really small runtime that could create containers very fast for orchestrators.

That took off because we had the integration work with OCI to do, so we made containerd the default runtime in Docker and integrated all the OCI specs. After Kubernetes came out, I was talking with Solomon one night and said, "Let's expand the scope of containerd to handle Kubernetes and all these various runtimes."
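Containerd's Go client reflects that small-runtime-for-orchestrators goal. The sketch below follows the pattern in containerd's own getting-started documentation; the namespace, container ID, and image reference are arbitrary choices, and it assumes a containerd daemon is listening on its default socket.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to a running containerd daemon on its default socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Everything in containerd is scoped to a namespace, so different
	// callers (Docker, Kubernetes' CRI plugin) can share one daemon.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull and unpack an image, then create a container from it.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)
	log.Printf("created container %q", container.ID())
}
```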

So I’ve been busy with that.

T.C. Currie assembled the text for this story.

Feature image by Frauke Feind from Pixabay

TNS owner Insight Partners is an investor in: Docker.