The Moby Project Post-Kubernetes: 3 New Releases in 2023
The last major release of the open source Moby Project was in 2020, but this year will see three major releases, according to two Moby contributors.
The Moby Project is a collection of components for building container-based systems: a container runtime, a container registry, build tools, orchestration tools, and networking, logging and monitoring tools. These components can be combined into container-based systems such as cloud native applications, microservices architectures, CI/CD pipelines and on-premises container platforms.
Moby maintainer Bjorn Neergaard is a senior software engineer at Docker. He was joined by technical steering committee member Sebastiaan van Stijn, a staff software engineer at Docker, in presenting about the Moby Project at DockerCon earlier this month. They offered details of the major 2023 releases, as well as plans for the future.
Pre-Moby: A Short History
Neergaard and van Stijn started their DockerCon talk with a brief history of Moby as an open source project. It goes back to the days when developers first used containers as lightweight virtual machines that were hard to use and very niche, said van Stijn.
“It was not widely used because it was just too complicated,” van Stijn said. “It was hard to keep in sync; there was no distribution of images or anything.”
Then dotCloud, a small platform-as-a-service company that would become Docker, started offering services. It turned out, though, that what really interested technologists was what dotCloud was doing behind the scenes: deploying containers with internal tooling, mostly written in Python, that required a lot of scripting to make the containers work, van Stijn explained. dotCloud decided to open source what it had been using internally.
Then, in 2013, Solomon Hykes, founder of Docker, presented Linux containers during a lightning talk at PyCon.
“It was five minutes, but it caused quite a ripple in the industry, because within those five minutes, he showed docker run for the first time,” van Stijn said. “That docker run did a lot of the work that he would otherwise need to do with LXC, but in a single command.”
Docker was still a wrapper around LXC at that time, with LXC doing all the heavy lifting. It provided an easy-to-use UX but also an image format, which made a big difference: now developers could use an image instead of creating their own file system for a container. There was no build step at this time. It also provided an API, which allowed developers to do “cool things,” he added.
“It did cause a big impact on the market because for the first time, Linux containers became a reality and got in the hands of developers,” he said.
LXC was working, but Docker decided to rewrite the runtime to create a native runtime built into the Docker Engine, van Stijn said, which later proved to be important as more things were added, such as networking. Containers caught on, but each had a single task, which meant programmers needed more than one container in most stacks. That led to the first attempts at orchestration, which later became Compose and allowed developers to define their stack in a YAML file.
Docker acquired Fig, which became Docker Compose. Then Docker started Swarm, which in version one allowed developers to run their containers in a cluster of machines. Then Kubernetes came online and decided to use Docker as a runtime because it was the de facto standard for running containers, van Stijn said. That led to a bit of a problem, as more people requested features that were clearly out of scope for the project, he added.
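The Fig-to-Compose lineage described above survives in today’s Compose file format. As a minimal illustrative sketch (the service names and images are placeholders, not from the talk):

```yaml
# A minimal Compose file: each service becomes one or more containers,
# and Compose wires up a shared network so services can reach each
# other by name (here, "web" can connect to host "db").
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # host port 8080 -> container port 80
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

Saved as compose.yaml, the whole stack starts with a single `docker compose up`.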
“Kubernetes didn’t need the networking stack of Docker, they didn’t need other things that we were providing, but they were still using the runtime and things were challenging at times,” he said. “The engine as a monolith became more and more of an issue.”
Also, while Docker was the de facto standard, there was no formal specification of images for containers or how the runtime should behave, he said.
“The implementation was the specification and that’s not always ideal,” he said.
Docker decided to split out the actual runtime. Around the same time, the OCI, a standards organization, was started. Docker donated the specifications it was using for images, for the runtime and for distributing images.
“Now other people could implement the runtimes, and images and registries, not just Docker.”
Docker also started to rewrite the runtime from scratch with a couple of partner companies, which led to containerd (pronounced “container D”), a complete rewrite of the runtime parts of Docker.
The Birth of the Moby Project
The Moby Project started when Docker decided to split the project even further into smaller components, because people wanted to use containerd and other parts of the Docker engine, van Stijn told the audience at the presentation. That led to BuildKit for building, SwarmKit for orchestration, and the Docker engine. The CLI became part of the Docker products as a separate project, he added. The runtime itself became the Moby Project.
“That could be used for others to build on top of, participate in, but it also made it easier to accept changes that may not directly benefit Docker as a product, but could be used by others — and vice versa,” he said.
Docker itself also changed, with the enterprise products going to Mirantis while Docker went back to its developer-oriented products. Docker became focused on Docker Desktop, and work slowed on the Moby Project until the past 18 to 24 months, when maintainers from Mirantis and Microsoft joined the effort, he added.
“One of the things that confuses people is, you know, what happened to the open source code known as Docker,” explained Neergaard. “But maybe it also helps explain a little bit that there are more participants — and not just participants, but people who are stakeholders in the project — than just Docker, Inc.”
Besides Mirantis and Microsoft, Nvidia recently contributed container device interface support, Neergaard added.
What’s Happening Now
“In the relatively recent past, we’ve been seeing a lot more activity in the project,” said Neergaard. “That’s visible in various forms, [but] it’s not maybe communicated the best.”
The most recent release of the Docker engine before this year was in 2020. Between then and now, a lot of code and improvements had accumulated without ever finding a release vehicle, he added.
This year has already seen two major releases, versions 23.0 and 24.0, with the major features being:
- BuildKit on by default (no more DOCKER_BUILDKIT=1). BuildKit is a rewrite of the builder, Neergaard said. “Part of BuildKit’s original mandate and plan was to have it replace the classic legacy builder in the Docker engine and provide a much richer and more flexible build platform that is still as simple as docker build,” he said. “So BuildKit is on by default now.”
- CSI (Container Storage Interface) in Swarm
- Alternate containerd shims
The alternative shims are “maybe boring,” said Neergaard, but open up a lot of possibilities, “especially for people inventing new ways to run a container or run something that looks kind of like a container, like WebAssembly.”
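Of the features above, the BuildKit default is the most visible to everyday users. As a hedged illustration of what BuildKit enables beyond the classic builder, the Dockerfile below uses a BuildKit-only cache mount (the project layout and package paths are assumptions for the example, not from the talk):

```dockerfile
# syntax=docker/dockerfile:1
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# RUN --mount=type=cache is a BuildKit feature: the Go build cache
# persists across builds without being baked into the final image.
RUN --mount=type=cache,target=/root/.cache/go-build \
    go build -o /out/app .

# Multi-stage build: ship only the compiled binary.
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

With Engine 23.0 and later, a plain `docker build .` runs this through BuildKit automatically; no DOCKER_BUILDKIT=1 environment variable is needed.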
The team had hoped to offer a third release, 25.0, ahead of DockerCon, but that didn’t happen. It’s expected to release any day now, according to the presentation. That release will include:
- CDI (Container Device Interface)
- OTEL (OpenTelemetry) integrated into the engine
- “Graceful” health checks with health-start-interval, which has long been a pain point, Neergaard said.
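The start-interval feature above lets a container be probed frequently while it starts up without committing to that frequency forever. A sketch of how it might look in a Dockerfile, assuming an engine new enough (25.0) to support --start-interval (the probe command and image are illustrative):

```dockerfile
FROM nginx:alpine
# During the 30s start period, probe every 2s so a fast-starting
# container is marked healthy quickly; afterward, fall back to the
# normal 30s interval.
HEALTHCHECK --start-period=30s --start-interval=2s \
            --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost/ || exit 1
```

Before this, the only options were a short interval (wasteful probing for the container’s whole life) or a long one (slow to report healthy at startup), which is the pain point Neergaard described.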
“I do think that we will be ending up with three releases of the engine this year, but it’s still a significant challenge; we are getting better every time,” Neergaard said.
“Another thing that’s interesting, that surprises people, is that there are still new features in Docker Swarm,” he said. Swarm is Docker’s answer to Kubernetes, he explained.
“At this point, I would say that Kubernetes is very much the de facto platform for orchestration, and you probably shouldn’t choose something other than Kubernetes unless you have a very good reason,” Neergaard said. “There is a small but still very vocal group of users who enjoy using Swarm and who want Swarm to do more things — or even be compatible with a lot of the add-ons and extensions that exist for Kubernetes.”
Future plans for the Moby Project include multiple snapshotters and native multi-architecture image support in containerd, as well as a redesigned CLI, bug fixes and new features for networking, and moving the reconciler logic from Compose into the daemon for Declarative Docker, the pair added.