Project Atomic was started in response to the widespread adoption of Docker technology. It fills the gap between the infrastructure available today and the infrastructure required to run Linux containers.
I am sure some of you may be thinking, “Why do we need a different kind of infrastructure for running containers?” To answer this question, you need to know a little bit about container technologies and how containers are changing the way applications are deployed.
Thanks to the rapid rise of smartphones, tablets, and internet-connected devices, applications are now expected to scale elastically with demand while remaining continuously available, with no downtime. Additionally, there is pressure to lower costs, to develop and innovate with greater agility, and to move applications from development to production faster. If you have tried to achieve these goals with your own applications, there is a high probability that you have come across microservice architectures and service-oriented architecture (SOA).
SOA and microservices are not new concepts. However, Linux containers and Docker technology have made them more realistic to implement as compared to the past. A lot of the credit goes to Docker for making Linux containers easy to run. The other important piece has been the rise in availability and reliability of cloud infrastructure, both private and public. Cloud infrastructure is designed to have 100 percent up time. This does not mean that the components used in the infrastructure do not fail. Instead, the whole system is designed to handle failures without downtime for the application. This design also provides the flexibility to update or upgrade the infrastructure without any downtime.
Linux containers are a great way to implement microservices. Linux containers run in isolation, so when an application is packaged as a container, it is an independent entity: the container does not depend on the base operating system for runtime dependencies. Compared to virtual machines, containers generally take fewer resources to run, have faster startup times, and are more portable. The same container image can be built and packaged by the development team, then passed unchanged through testing and into production. These attributes make containers a good fit for cloud platforms and microservices.
However, Linux containers also bring a new set of challenges, specifically:
- The infrastructure now must be able to scale horizontally and should be specially optimized for running containers.
- Tools are needed for orchestration and life cycle management of containers.
- It should be easy to deploy multi-container applications and manage them — without losing sleep over it.
- Security is a critical issue with all applications and infrastructures. Docker leverages namespaces and capabilities provided by the Linux kernel. While these features provide some level of isolation between containers on the same host and between the host and the containers, it isn’t perfect. A proper container infrastructure needs to bring additional features, like security-enhanced Linux (SELinux), that can provide an environment with virtual machine-like isolation between containers and hosts.
- When we run applications inside of containers, the runtime dependencies move from the host to the container. The host OS is only required to provide the runtime for the Linux container technology. This changes the role of the host OS dramatically. This gives us an opportunity to take another look at the base OS.
To meet the challenges of running containers, we need a new infrastructure. The fundamental building block of this new infrastructure is the operating system. Project Atomic was born to solve this problem.
Expectations of an OS Designed for Container Environments
The operating system should be lightweight and optimized for both cloud and non-cloud environments. The OS needs to be consumable as a service, which is a critical design consideration. To deploy an OS as a service, the OS must be consistent in all environments (i.e., every time a new instance of the OS is spawned). A typical web-scale environment needs thousands of instances of the same OS. Maintaining a consistent environment across hundreds or thousands of installations is not easy with traditional operating systems. A key challenge is making sure that all instances run the same version of the OS, and that applications are not broken by upgrades or updates (for example, security updates). It is also critical that all systems have the same updates applied. To solve this, we need an OS where updates are atomic (either fully applied or not applied at all) and where the file system is read-only as much as possible, to avoid accidental bespoke configuration.
With consistency accounted for, the OS can also be optimized for Linux containers. The OS can be trimmed down to the core, eliminating all unused packages. Having a minimal operating system with a small number of packages also improves the maintainability of the base OS, as the system administrator has fewer packages to keep track of. The reduced number of packages also reduces the surface for security problems, as no unnecessary code is in the OS.
These problems gave rise to Project Atomic.
To begin with, it is important to realize that Project Atomic is not a new GNU/Linux distribution. Instead, it is a framework to create a lightweight and minimal OS from Red Hat Enterprise Linux, CentOS or Fedora. The idea is to derive a new OS from existing, proven and trusted packages without creating a new distribution that would require a whole new set of testing to be able to be trusted. The new OS, when created, is called Red Hat Enterprise Linux/CentOS/Fedora Atomic host. This new OS can inherit critical features of the underlying distribution, such as the support of SELinux and systemd that is baked into Red Hat Enterprise Linux/CentOS/Fedora.
There are a couple of things Atomic Host does not provide:
- yum commands (e.g., yum install) will not work inside Atomic Host.
- Your favorite tool or RPM package might be missing from the official Red Hat Enterprise Linux, CentOS, and Fedora Atomic Host images, as they ship with a minimal package set.
Atomic Host consists of these components:
RPM-OSTree
RPM-OSTree is derived from the GNOME OSTree project. It provides a mechanism for fully atomic OS upgrades and rollbacks, and it works much the way you would expect Git to work at the OS level: with one command the whole OS is upgraded, and with another command it is completely reverted to the previous version.
RPM-OSTree makes the file system immutable (that is, read-only), except for /var and /etc. /var is writable because Docker images are stored there. /etc holds configuration files and must be modifiable for orchestration tools to work; however, any changes made to /etc are carried forward during upgrades through a three-way merge. The read-only file system ensures that all Atomic Hosts have the same package set, which in turn provides much-needed consistency across multiple installations of Atomic Host. All data (e.g., containers in /var) remains unchanged across upgrades.
RPM-OSTree makes us more confident in updating the OS, as we know that a reliable rollback mechanism is always available. Also, there won't be any half-upgraded systems, as upgrades are atomic in nature.
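A typical upgrade-and-rollback session looks something like the sketch below. This is a minimal illustration: on recent Atomic Host images the atomic host subcommands wrap rpm-ostree, and exact subcommand names may vary by release.

```shell
# Show the currently booted OS deployment and any pending one
sudo atomic host status

# Fetch and stage the latest OS tree; the change only takes effect,
# atomically, when the host boots into the new deployment
sudo atomic host upgrade
sudo systemctl reboot

# If the new tree misbehaves, flip back to the previous deployment
sudo atomic host rollback
sudo systemctl reboot
```

Because each upgrade is staged as a complete, separate tree, the running system is never left half-updated.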
Docker
Atomic Host has a built-in Docker runtime, so you can run a Docker container as soon as you log in to Atomic Host.
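If this is your first time on an Atomic Host, a quick sanity check is to run a throwaway container right after logging in (a minimal sketch; the busybox image is just an example):

```shell
# Pull a small image and run a one-off command in it
sudo docker pull busybox
sudo docker run --rm busybox echo "hello from a container"
```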
Kubernetes
Kubernetes is an open source project for managing containerized applications across multiple hosts. It provides basic mechanisms for the deployment, maintenance, and scaling of containerized applications.
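As a minimal illustration, a single-container application can be described in a Kubernetes pod manifest and submitted to the cluster. The names below are hypothetical, and the manifest uses the v1 pod schema:

```yaml
# web-pod.yaml: a minimal pod running one nginx container
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```

It would be created with `kubectl create -f web-pod.yaml`.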
Atomic Command Line (/usr/bin/atomic)
The goal of the Atomic command is to provide a high-level, coherent entry point to the system, and fill in gaps in Linux container implementations. It uses the metadata associated with Docker containers.
Here are a couple of examples of atomic commands:
- atomic run http reads the RUN label of the http image, with all of its command-line details, and executes it.
- atomic install foo can install a container along with the systemd unit file required to run it as a service.
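The labels the atomic command reads are ordinary Docker image metadata, set in the image's Dockerfile. Below is a sketch of what such a Dockerfile might look like for a hypothetical foo image; the install script and label contents are illustrative, and atomic substitutes variables such as ${NAME} and ${IMAGE} at run time:

```dockerfile
FROM fedora
# "atomic install foo" executes the INSTALL label, e.g., to drop a
# systemd unit file onto the host (install.sh is hypothetical)
LABEL INSTALL="docker run --rm --privileged -v /:/host ${IMAGE} /usr/bin/install.sh"
# "atomic run foo" executes the RUN label
LABEL RUN="docker run -d --name ${NAME} ${IMAGE}"
```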
The atomic command is also available on non-Atomic platforms; you can run it on standard Red Hat Enterprise Linux, Fedora, and CentOS systems. For details, refer to "Introducing the Atomic Command."
Cockpit
Cockpit is a server manager for administering Linux servers via a web browser. It is designed to manage multiple servers. It now supports Docker and Kubernetes, and in the future it will support Nulecule and Atomic App.
Super Privileged Containers (SPC)
As mentioned earlier, the idea of Atomic Host is to ship a minimal OS and deliver everything else as Linux containers. From an abstraction point of view, this makes sense. However, sometimes we need specialized tools to debug or monitor issues, and we need to ship these as containers, too. These containers need more privileges, with security turned down, so that they can work as expected.
An SPC is a container that runs with security turned off (privileged) and with one or more of the host's namespaces shared with the container, which means it is exposed to more of the host OS. More details are provided in "Introducing a Super Privileged Container Concept."
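In practice, an SPC can be launched by disabling the usual separation and explicitly sharing host namespaces. The sketch below uses plain docker flags; the atomic command can generate a similar invocation from an image's labels, and the fedora image here is just an example:

```shell
# --privileged turns off the security separation; sharing the host's
# pid, network, and IPC namespaces, and mounting / at /host, exposes
# most of the host OS to the tooling inside the container
sudo docker run -it --privileged --pid=host --net=host --ipc=host \
    -v /:/host fedora /bin/sh
```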
Nulecule and Atomicapp
Additionally, an Atomic Host supports specifications and system tools for multi-container applications:
Nulecule is a specification which defines a pattern and model for packaging complex multi-container applications. It references all their dependencies, including orchestration metadata, in a container image for building, deploying, monitoring, and active management.
Atomicapp is a reference implementation of the Nulecule specification, according to its GitHub project. It can be used to bootstrap container applications and to install and run them. Atomicapp is designed to be run in a container context.
Lalatendu Mohanty is a senior software engineer at Red Hat, a sponsor of The New Stack. Docker is also a sponsor of TNS.