Rackspace’s Carina Offers Turnkey Container Management

2 Nov 2015 11:08am

What is currently driving the containerization movement among developers may be, at least in part, the sophisticated and efficient mechanisms for deploying and orchestrating workloads: Swarm, Kubernetes, Mesosphere, Tectonic, Tutum.

But these dazzling symphonic displays of logistics have not done much to soften the barriers many organizations face in deploying containers beyond development and testing, into the production environments where the payoffs would be best appreciated.

Rackspace, whose key business has shifted this year from infrastructure to “fanatical” management services, is betting that business executives and decision-makers are actually turned off by the whirlwind nature of containerization. The company also surmises that these people don’t want their organizations tackling the dilemma of co-existence: making their existing, VM-based workloads share infrastructure with what, to them, may as well be invaders from Mars.

At the OpenStack Summit in Tokyo last week, attendees received the world’s first peek at Rackspace’s beta of Carina, a container deployment service where the entirety of the orchestration takes place on Rackspace’s end of the bargain. Once the service becomes fully operational, customers will use Carina’s web-based console to instantiate and deploy scalable container images. From there, Rackspace’s admins take over the roles of scheduling and orchestration, scaling up workloads as necessary and assuming responsibility for all maintenance.

“Of course, containers have the possibility to transform the economics of computing through superior consolidation ratios,” said Scott Crenshaw, Rackspace’s senior vice president for strategy and product, during the Summit keynotes. “But there’s another aspect to them that might be even more transformational, and that’s the promise of instant compute. VMs take minutes to spin up, but when you have instantly available compute, you have the possibility to change the way your applications interact and engage with your users.”

Magnum Bays for Maximum Use

Carina is not a Docker development environment. It allows developers to continue using whatever they use today, especially the native Docker tools. Because the container format has effectively become standardized by the Open Container Initiative, Carina absorbs containers and follows customers’ instructions about how they are to be deployed, including their choice of bare metal or VM-based infrastructure.
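As an illustration of that point (not Carina-specific documentation), the stock Docker client can be pointed at any remote, TLS-secured endpoint through environment variables; the host name and certificate path below are placeholders:

```
# Point the standard Docker CLI at a remote, TLS-secured Docker
# endpoint. Host name and certificate directory are placeholders.
export DOCKER_HOST=tcp://mycluster.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.carina/mycluster   # holds ca.pem, cert.pem, key.pem

docker version   # the same client and commands developers already use
docker ps
```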

In a company blog post, Adrian Otto, a Rackspace distinguished architect and co-engineer of Carina, explained what’s going on under the covers. As a co-creator of Magnum, the new OpenStack container management service released last May, Otto is intimately familiar with how the Nova scheduler recognizes host aggregates, or clusters of physical servers designed to be separated from other availability zones. Servers within these host aggregates may be a particular “flavor,” which is what (theoretically) will enable Windows hosting on OpenStack. It also allows for container hosting.
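For readers who haven’t worked with host aggregates, here is a rough sketch using the nova CLI of that era; the aggregate, host, flavor, and metadata names are all illustrative, and exact arguments varied by release:

```
# Create a host aggregate tied to its own availability zone,
# add a compute node to it, and tag it with metadata.
nova aggregate-create container-hosts container-az
nova aggregate-add-host container-hosts compute-node-01
nova aggregate-set-metadata container-hosts purpose=containers

# A flavor whose extra specs match the aggregate metadata will
# be scheduled onto hosts in that aggregate.
nova flavor-key m1.container set purpose=containers
```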

Within these host aggregate clusters, Magnum pools together Nova instances into what it calls bays. These individual bays can conceivably be orchestrated, Otto wrote, using Docker Swarm, Kubernetes, or Mesos. The beta supports Swarm today and is being geared for Kubernetes (and should eventually expose Kubernetes’ native web dashboard, Otto told us). As Carina is built out, he added, users will be allowed their preference of orchestration engine.
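As a sketch of the workflow Otto describes, the Magnum CLI of that era defined a bay model (a template naming the orchestration engine) and then built bays from it; the image, keypair, and network names below are placeholders, and these commands were later renamed to cluster templates and clusters:

```
# Define a bay model that selects Docker Swarm as the
# orchestration engine; all names and IDs are placeholders.
magnum baymodel-create --name swarm-model \
    --image-id fedora-21-atomic \
    --keypair-id my-keypair \
    --external-network-id public \
    --coe swarm

# Create a two-node bay from that model.
magnum bay-create --name my-bay --baymodel swarm-model --node-count 2
```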

“Carina is based on OpenStack technology that we will expose progressively more over time,” Otto told The New Stack in an e-mail. “We expect to begin offering access to Carina’s OpenStack Magnum API in early 2016. At that time, it will be possible for us to offer Kubernetes as an alternative Container Orchestration Engine (COE) for users who prefer it over Docker Swarm.”

Running containers on bare metal offers the highest speed overall, but can’t be scaled easily, Otto noted, because a single server is a large increment of compute. Typical public cloud services virtualize their container platforms to offer smaller increments, but at the expense of performance.

“Carina is different. Its innovative approach offers applications access to bare metal hardware performance in increments that cost less than using virtual machines, giving you a choice of what flavor types your bays are composed of,” the architect stated in his blog post. “You can choose what Carina offers in our beta today — bare metal containers isolated by additional security techniques in the server operating system to help keep them safe from each other. In future releases, you can run your container clusters on other flavor types such as virtual machines or even full bare metal hosts for the cases where those choices make the most sense.”

Hope for the Future

While Otto’s post speaks of a future public container platform for all levels of developer skill, Rackspace’s message at the OpenStack Summit, at least, was not about giving DevOps professionals a wealth of choice. Instead, Rackspace is tilting Carina toward deployers who would prefer that someone else handle management.

Rackspace’s very distinguished architect shows his father how Carina works.

To demonstrate the extent to which the Carina beta would lift the burden off of organizations’ shoulders, Otto showed his 10-year-old son Jackson using the Carina portal on his MacBook to create a managed container cluster, download the necessary TLS credentials, peruse the Linux file system for that cluster, source the docker.env file, and use docker run to run a container on the cluster. Okay, so Jackson had a bit of coaching from Dad. (I’d need it too.)
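For reference, the sequence Jackson walked through maps roughly onto the following commands; the credential bundle’s file name is a placeholder, and the download step through the portal is omitted:

```
# Unpack the TLS credential bundle downloaded from the Carina
# portal (file name is a placeholder).
unzip mycluster.zip -d mycluster && cd mycluster

# docker.env exports DOCKER_HOST, DOCKER_TLS_VERIFY, and
# DOCKER_CERT_PATH so the local Docker client talks to the cluster.
source docker.env

docker info                     # confirm the client reaches the cluster
docker run -d -p 80:80 nginx    # run a container on the cluster
```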

It’s apparent that Rackspace is ready to scale Carina container workloads up or down when necessary. But who defines “necessary”? Will customers have some sort of policy- or rule-based mechanism?

We asked Adrian Otto. (His son was apparently busy swapping out the SSDs on a server cluster.)

“We have an auto-scaling component that checks system utilization, and orders more cluster capacity automatically (additional segments) in response to that,” he wrote back. “We have the ability to look at memory utilization and CPU utilization. More sophisticated auto-scaling is possible by integrating the Rackspace Autoscale service with Carina. We will need to publish documentation that explains how to do this before we invite customers to try that.”
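Rackspace has not published that interface, so the following shell sketch is purely illustrative: the threshold, the load-average proxy for utilization, and the grow command are all hypothetical stand-ins for whatever the auto-scaling component actually does:

```
#!/bin/sh
# Purely illustrative: the threshold, the load-average proxy, and
# the grow command below are hypothetical stand-ins.
THRESHOLD=80   # hypothetical trigger point

while true; do
  # Scaled 1-minute load average as a crude utilization proxy.
  LOAD=$(awk '{ printf "%d", $1 * 100 }' /proc/loadavg)

  if [ "$LOAD" -gt "$THRESHOLD" ]; then
    carina grow --nodes 1 mycluster   # hypothetical: order another segment
  fi
  sleep 60
done
```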

Pricing models for the final commercial Carina service have yet to be determined. Conceivably, the portion exposed to the public today may remain free, Otto told us, with add-on capabilities orderable a la carte for individual fees. Customers may have a say in what pricing models Rackspace eventually adopts.

“By default, Carina clusters use the public Docker Hub, so the user experience is exactly the same as using Docker on your local computer,” Otto wrote us. “Magnum offers the ability to use a private registry per bay using Docker-Distribution (Registry version 2), so if you don’t want to use the public Docker Hub, you don’t have to.”
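For readers who haven’t run Docker Distribution themselves, here is a minimal, Carina-agnostic sketch of standing up a Registry v2 instance and pushing an image to it instead of Docker Hub; the port and image names are illustrative:

```
# Run a private Registry v2 (Docker Distribution) locally, then
# tag and push an image to it instead of Docker Hub.
docker run -d -p 5000:5000 --name registry registry:2

docker pull busybox
docker tag busybox localhost:5000/busybox
docker push localhost:5000/busybox
```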

He also stated that each Carina cluster will enable microservices to be directly integrated, via a Docker API. For early 2016, he said, Rackspace plans to release Magnum’s own API, and then offer Carina customers the choice of integrating through Docker or Magnum.
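To give a sense of what integrating through the Docker API means in practice, here is a sketch of querying the Docker Remote API directly over TLS, assuming the certificate files from a cluster’s credential bundle; the host name and port are placeholders:

```
# List a cluster's containers by calling the Docker Remote API
# directly, authenticating with the cluster's TLS credentials.
curl --cert cert.pem --key key.pem --cacert ca.pem \
     https://mycluster.example.com:2376/containers/json
```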

Rackspace will also offer the final, commercial Carina through its Private Cloud service, enabling the company to manage customers’ on-premises resources. This, too, appears to be planned for early 2016. By that time, one might imagine, Jackson Otto will be hosting a gaming service from the Mesos cluster in his bedroom closet.

Docker is a sponsor of The New Stack.