TNS Tutorial Friday: The CoreOS rkt Container Engine, What It Is and How It Compares to Docker

In this week’s episode of The New Stack’s Tutorial Friday, we explore CoreOS’ container engine, rkt, discussing how developer choice and flexibility have shaped the container ecosystem, and get an inside look at the healthy rivalry between rkt and Docker. The New Stack founder Alex Williams sat down with Josh Wood, who leads documentation at CoreOS, to discuss these topics and more.
Wood began the demonstration by diving into the features that make rkt stand out from other container management tools. First, rkt does not impose a daemon that idles in the background of a system, standing between containers and their management tools. Once a pod has spun up, the rkt command itself exits, allowing users to interact with their containers directly. Rkt builds its images in the ACI (App Container Image) format, which Kubernetes also supports. Wood noted that although this differs from Docker’s image format, rkt fully supports Docker images, fetching and caching them directly from the Docker Hub with ease.
After opening a terminal, Wood used sudo to launch an instance of Alpine Linux with rkt. Another point of differentiation between rkt and Docker is that rkt makes use of pods, as do Kubernetes clusters. “Not only is it convenient to have that one-to-one mapping; what it means is when you build really minimal containers, a pod gives you a way to share resources around them to make those kinds of really micro-scaled architectures make sense,” Wood said.
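A launch along these lines can be sketched as a short terminal session. The alpine image name and flags here are illustrative rather than taken from the demo; rkt generally requires --insecure-options=image when fetching unsigned Docker images:

```shell
# Fetch an Alpine Linux image from Docker Hub and launch it as a pod,
# attaching the terminal so we land in an interactive shell.
# rkt converts the Docker image to ACI on the fly; no daemon is involved.
sudo rkt run --interactive --insecure-options=image \
    docker://alpine --exec /bin/sh
```

When the shell exits, the pod stops and the rkt process itself goes away, which is the daemonless behavior Wood described.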
To get a better look at rkt’s process structure, Wood then opened a shell inside the Alpine Linux container. On the host, entering the command rkt list shows which pods are currently running, and Wood noted that rkt image list will list the images a user has pulled. Next, Wood entered the command ps axf to look at the process tree on the Alpine Linux container’s host. “We can find this shell we just kicked off running inside of what is literally an illustration, in hierarchical terms, of the rkt process model. Rkt spins up a stub init system inside the container to manage process lifecycles, so you don’t have to re-implement init to manage processes inside of containers,” Wood explained.
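The inspection steps above look roughly like the following on the host; the commands are real rkt subcommands, but any output shapes are illustrative rather than captured from the demo:

```shell
# List the pods that are currently running.
sudo rkt list

# List the images that have been fetched into the local store.
sudo rkt image list

# Show the host's process tree; the pod's processes appear nested under
# rkt's stub init system rather than under a long-running daemon.
ps axf
```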
Though rkt and Docker have a healthy competition going, Wood was quick to note that rkt supports Docker images, fetching and converting them from the Docker Hub directly. To demonstrate this, Wood spun up a Caddy web server with the command sudo rkt run docker://joshix/caddy. The task of managing network configuration is handled in rkt by another CoreOS project, the Container Network Interface (CNI). In addition to being a network plugin model for Kubernetes, Wood noted that CNI also integrates directly with rkt.
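Rkt’s CNI integration is driven by plain JSON configuration files that rkt reads from /etc/rkt/net.d/. A minimal bridge-network sketch might look like this; the network name, subnet, and file name are all illustrative:

```shell
# Write a minimal CNI bridge network definition where rkt will find it.
# Pods started with --net=mynet would then join this network.
sudo tee /etc/rkt/net.d/10-mynet.conf <<'EOF'
{
  "name": "mynet",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/24"
  }
}
EOF
```

Because the definition is just a CNI config file, the same plugin model carries over to Kubernetes, which is the flexibility Wood contrasts with libnetwork below.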
In contrast to CNI, Docker makes use of libnetwork for managing networks in containers. While libnetwork remains a solid choice for those looking to get started quickly, Wood noted that “With libnetwork, you’re sort of forced into this local versus global dichotomy, and the model they see for networks. In many cases, that choice is the correct choice. But, the further you move toward platform builders, integrators, and people assembling tools into their own solutions, the more you need flexibility.”
As the conversation drew to a close, Wood emphasized an important factor to consider in the discussion surrounding standardizing containers: “A lesson we feel we’ve maybe learned before is to not have anything resembling a single vendor controlling the specifications for something. We think it is important that there is a standard that we, the entire internet community of users, owns and can feel real ownership for: that standardization of what the container is, how high you can stack them, and how much load you can put inside them.”
CoreOS and Docker are sponsors of The New Stack.
Feature image: “Rocketscape (Version: 2011 Dark)” by Tylana is licensed under CC BY-SA 2.0.