Carina by Rackspace: OpenStack-powered Software for Running Containers-as-a-Service
Launched at the OpenStack Summit Tokyo in October 2015, Carina by Rackspace, now in preview mode, is one of the early entrants into the emerging Containers-as-a-Service (CaaS) market. Powered by OpenStack, Carina was created to offer the best of both containers and bare-metal infrastructure.
Why Did Rackspace Build Carina?
Rackspace has multiple reasons for investing in containers. Rackspace distinguished architect Adrian Otto founded Project Magnum, an initiative to make Container Orchestration Engines (COEs) like Docker Swarm, Apache Mesos, and Kubernetes first-class citizens in the OpenStack ecosystem. Otto's work at Rackspace on building a CaaS made him the natural choice to lead Project Magnum. He and his team are building Carina at Rackspace to get first-hand experience of what it takes to deliver a COE on OpenStack.
Rackspace has been betting on bare-metal servers as one of the key differentiators of its cloud. Dubbed OnMetal, the service uses OpenStack's Project Ironic to deliver a different class of IaaS, and it is one of the first commercial bare-metal offerings based on OpenStack. Carina is layered on the bare-metal platform to deliver the best possible performance. CoreOS, the lean Linux OS built for containers, powers the bare-metal infrastructure, making it a highly optimized platform for running containerized workloads. The combination of bare metal and CaaS makes Carina a unique platform in the market.
The managed private cloud business is important for Rackspace. Many of its customers run isolated infrastructure that is maintained and managed by Rackspace engineers. Carina is designed to run on both the public cloud and private clouds. With Rackspace venturing into the managed services business through its partnerships with AWS and Microsoft, it may even offer Carina as a managed CaaS deployed in environments run by Rackspace teams. Like Joyent Triton, Carina is a CaaS that runs in the public cloud but can also be deployed in a private cloud. This makes Carina a key contender in the CaaS market.
With strong roots in OpenStack, Rackspace wants Carina to be a reference implementation for running containers in the cloud. The company does not want to restrict the choice of COE to a specific implementation. Carina is designed to run Docker Swarm, Kubernetes, and possibly other COEs in the future. Project Magnum shares the same philosophy of running multiple orchestration engines. The current version of Carina is optimized only for Docker Swarm.
Rackspace was well aware of the risks of running a CaaS in multi-tenant environments on bare metal. While its competitors chose to deliver containers running within VMs, Rackspace went out of its way to design its CaaS for bare metal. Delivering multi-tenancy without virtualization is a tightrope walk. Rackspace's engineering team found a middle ground without compromising on performance or security: it exploited the capabilities of the Linux kernel to erect strong isolation boundaries between tenants. In future releases of Carina, customers will be able to choose between VMs and bare metal.
One of Carina's other design goals is to retain the native API and tooling experience. After provisioning the cluster, Carina gets out of the way, letting developers and ops teams work with the orchestration engine through its native tools. In the case of Docker Swarm, Carina users can rely on familiar tools such as docker-compose to manage their workloads. The result is a smooth, seamless experience for customers. The same philosophy extends to Project Magnum, which also supports native tools and APIs.
Docker Swarm on Carina
The first COE that Carina supports is Docker Swarm, which makes it one of the few CaaS offerings in the public cloud running Docker’s native clustering and orchestration engine.
Developers can use the portal or the CLI to get started with Carina. The first step in deploying containerized applications is creating a cluster. Each cluster runs one or more Carina nodes. Nodes, in the context of Carina, are primarily LXC containers provisioned by libvirt on a physical machine. They shouldn't be confused with Docker Swarm nodes, which typically refer to machines running the Swarm agent.
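As a rough sketch, creating a cluster from the command line looks like the following. The cluster name and flag values are illustrative, and exact flags may vary between versions of the carina CLI:

```shell
# Authenticate with Rackspace account credentials (names are illustrative);
# the carina CLI reads these environment variables
export CARINA_USERNAME=myrackuser
export CARINA_APIKEY=0123456789abcdef

# Create a Swarm cluster with a single node and wait until it is active
carina create --wait --nodes=1 mycluster

# List clusters to confirm the new one is running
carina ls
```

From here, additional nodes can be added to the cluster as capacity needs grow.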
Rackspace uses LXC as the fundamental unit of isolation rather than slower, more expensive VMs. Docker containers are scheduled inside each Carina node, which has a few implications. Unlike in traditional environments, it's not possible to SSH into a node, and there are restrictions on mounting the host file system as Docker volumes. Each Carina node comes with 20GB of disk space, 4GB of RAM, two vCPUs, and an IPv4 address.
Once a Carina cluster is created with one or more nodes, the credentials bundle can be downloaded to the local machine. This comes with the TLS certificates and environment variables to configure the local Docker client to talk to the Swarm endpoint of the cluster. After running the included shell script, developers can use tools like docker-compose to deploy and manage the application.
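A minimal sketch of that workflow follows; the cluster name and the path to the downloaded environment script are illustrative and may differ by CLI version:

```shell
# Download the TLS credentials bundle for the cluster (name is illustrative)
carina credentials mycluster

# Load the environment variables that point the local Docker client
# at the cluster's Swarm endpoint (path is illustrative)
source ~/.carina/clusters/myrackuser/mycluster/docker.env

# Verify the connection, then deploy the application with docker-compose
docker info
docker-compose up -d
```

Once the environment variables are set, any Docker-native tool on the local machine talks directly to the Swarm endpoint.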
The experience will be much the same when Carina supports additional COEs such as Kubernetes in the future.
Why should developers and ops teams use Carina? Here are a few capabilities that make it a unique CaaS:
Carina's underlying infrastructure is constantly monitored and managed by the runtime. Customers can provide hints for reserving memory and compute power for their containers. When 80 percent of the reserved memory or CPU is consumed, Carina triggers an automated scaling action to add nodes. A cluster, however, is never automatically scaled down or deleted, to avoid possible data loss.
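A sketch of how such reservation hints can be expressed with standard Docker flags (the image, container name, and sizes are illustrative; Carina's exact hint mechanism may differ):

```shell
# Reserve 512 MB of memory for a container; an autoscaler of this kind
# watches reservations and adds a node once ~80% of capacity is reserved
docker run --detach -m 512M --name web nginx

# Equivalent hint in a docker-compose (v1) service definition:
#   web:
#     image: nginx
#     mem_limit: 512m
```

Expressing reservations through the standard `-m` flag keeps the hinting mechanism within Docker's native tooling, consistent with Carina's design goal of not introducing custom APIs.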
Rackspace claims that Carina is optimized to deliver the best performance. This is due to the integration with bare metal servers and the LXC architecture for isolation.
Carina has a well-designed experience for managing the lifecycle of containers. The portal has an intuitive interface for provisioning, scaling, and rebuilding the clusters. The CLI offers a simple yet powerful mechanism to deal with the CaaS infrastructure. For automation needs, Carina has a dedicated API, which is also used by the CLI.
Data Volume Containers and Overlay Networks
Carina supports the latest capabilities of Docker, such as data volume containers and overlay networks. It's possible to create front-end and back-end networks to separate public-facing containers from containers running sensitive applications. Docker data volume containers move persistent data into dedicated containers that travel along with the application containers. As Rackspace keeps Carina up to date, new Docker features can be adopted quickly.
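The pattern described above can be sketched with standard Docker commands; all names, images, and the password below are illustrative:

```shell
# Create separate front-end and back-end overlay networks
docker network create --driver overlay frontend
docker network create --driver overlay backend

# A data volume container holding persistent state for a database
docker create -v /var/lib/mysql --name db-data mysql:5.6 /bin/true

# The database joins only the back-end network and mounts the
# volumes from the data container
docker run -d --net=backend --volumes-from db-data \
  -e MYSQL_ROOT_PASSWORD=secret --name db mysql:5.6

# A public-facing container attaches only to the front-end network
docker run -d --net=frontend -p 80:80 --name web nginx
```

Because the database container is absent from the front-end network, public-facing containers cannot reach it directly, while the data volume container keeps the database's state independent of the application container's lifecycle.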
With competition heating up in the CaaS space, Rackspace has to move fast with Carina. Its leadership in OpenStack is helping the company surface innovations like Project Magnum and Project Ironic in its public cloud. Carina's success depends on how well it runs in hybrid scenarios, where customers can seamlessly move containers between the private cloud and the public cloud. It will also be interesting to see whether Carina will support Windows Containers and Hyper-V Containers.
CoreOS, Docker and Joyent are sponsors of The New Stack.
Feature Image by Samuel Scrimshaw licensed under cc0.
The article was updated on April 22nd to add an updated graphic along with updated terminology.