
Deploy OpenStack Cinder as a Stand-Alone Storage Service

11 Jul 2017 9:00am, by John Griffith

I’ve written a few posts about using the OpenStack block storage project, Cinder, for projects involving stand-alone Docker or Docker Swarm. Cinder provides block storage-as-a-service, as part of OpenStack. The main focus of these posts was to demonstrate using Cinder as a block storage service with things outside of OpenStack.

John Griffith
John is a principal software engineer at NetApp, and holds the same role at SolidFire, Inc. At SolidFire, John helped create the Cinder project in OpenStack, and serves as a technical contributor to OpenStack and open source technologies. He served as Technical Lead for the Block Storage project from its inception through the Juno release, and has held an elected seat on the OpenStack Technical Committee on and off over the past four years.

Since I published those posts, container adoption has continued to grow rapidly, and persistent data in the container space continues to see a ton of activity. There are lots of shiny, new wheels being invented in the storage-as-a-service space these days. While NetApp’s own idea of the reinvented wheel, Cinder, isn’t shiny or new — in fact, it has a few dents and dings here and there — it’s a trustworthy wheel that spins true. The best part is that those blemishes were earned through heavy and widespread adoption in production environments at scale.

Time and time again, the one piece of feedback folks have given me about using Cinder in, for example, a Kubernetes environment, as opposed to building their own storage plugin solution, is that OpenStack is just too hard to install and manage.

Rather than argue about whether that’s true or not, I want to walk you through deploying Cinder as a stand-alone storage service from source — a vendor-agnostic, platform-agnostic, and really consumer-agnostic approach. Take a look for yourself, and you can decide if it’s too difficult to deal with. This old wheel just may have something to offer. Kendall Nelson and I demonstrated this tutorial at the OpenStack Summit Boston in May, and you can watch the full demo below.

To help make things as easy and painless as possible, Cinder now includes a contrib directory with a project named block-box. The idea behind block-box is to give you everything you need to do a super-simple, fast deployment of stand-alone Cinder.

With block-box, we’re going to deploy Cinder in containers using docker-compose, enabling Cinder’s noauth option and thus eliminating the need for OpenStack Keystone. You could also easily add Keystone to the compose file, along with an init script to set up endpoints. We’ll use a lot of default settings, and shortcuts like noauth, for simplicity, so you won’t want to deploy this configuration in production. Still, it’s a good foundation on which to build your own deployment.

Preparing the Host Machine

We’ll configure Cinder to use LVM as its backend driver. There are currently over 80 backend storage devices supported in Cinder. Most integrations work when the storage device is external; Cinder just manages it and interacts with it via APIs.

The LVM driver is the reference implementation for Cinder, and it’s a little different. We bake in all the LVM and iSCSI target functionality needed to use your LVM disks as a SAN device in OpenStack. This means there are a few extra steps needed for LVM; if you’re not using LVM, or you have an external backend that’s supported in Cinder, this gets even easier. We’ll go through the “hard” example, though. It’s really still pretty easy, and once you understand it, adding other backends is a piece of cake.

Since containers are ephemeral, using them as storage devices might seem a little counter-intuitive. In this case, though, we’ll run LVM and create a volume group (VG) for persistent storage on the Docker host node itself. We’ll then let the Cinder-LVM container access the host’s VG to create logical volumes (LVs), attach targets, and share them out. This way, even as the container dies and gets restarted over and over, the data is still safe. You just spawn a new container and pick up right where you left off.

Install LVM2 and Required Extras

To start, we’ll install lvm2 on our system, and then create a Volume Group for it to use. If you don’t have a disk that you can use here, loopback devices make great “disks” for testing LVM-type things. Here’s a snippet with which you can easily create a cinder-volumes VG on your node:
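A minimal sketch of that setup might look like the following; this assumes a Debian/Ubuntu host, and the backing-file path and 10G size are arbitrary placeholders you can change (note that a loopback device won’t survive a reboot unless you re-attach it):

```shell
# Install LVM2 (apt on Debian/Ubuntu; use yum/dnf on CentOS)
sudo apt-get install -y lvm2

# No spare disk? Back a loopback device with a sparse file.
sudo truncate -s 10G /var/lib/cinder-volumes.img
LOOP_DEV=$(sudo losetup --show -f /var/lib/cinder-volumes.img)

# Create a physical volume and the cinder-volumes VG on it
sudo pvcreate "$LOOP_DEV"
sudo vgcreate cinder-volumes "$LOOP_DEV"

# Sanity check: the VG should now be listed
sudo vgs cinder-volumes
```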

If you want to interact with Cinder on this machine using something other than curl, you’ll want to install cinderclient and the brick extension that enables local attaches. (NOTE: There’s currently a known issue with the released version of cinderclient on PyPI that prevents noauth from working, so install from source.)
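Installing both from source with pip might look like this (the URLs below are the upstream OpenStack mirrors on GitHub):

```shell
# python-cinderclient from source (the PyPI release has the noauth issue)
sudo pip install git+https://github.com/openstack/python-cinderclient.git

# The brick extension adds the local-attach / local-detach commands
sudo pip install git+https://github.com/openstack/python-brick-cinderclient-ext.git
```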

Next, of course, we need Docker and docker-compose. Just in case you don’t have them already:
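One quick way to get both, assuming you’re comfortable with Docker’s convenience install script (review it first if you’re cautious):

```shell
# Install Docker via the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Install docker-compose (the standalone Python tool this walkthrough assumes)
sudo pip install docker-compose

# Let your user talk to the Docker daemon without sudo
sudo usermod -aG docker "$USER"
```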

You may need to add your user to the docker group and then log out and back in (or reboot) before you can run Docker without sudo.

The last step in our prep is to clone the Cinder repo, so we have the compose file and all the various container things we need.
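That’s just a clone of the upstream Cinder repo; block-box lives in its contrib directory:

```shell
git clone https://github.com/openstack/cinder.git
cd cinder/contrib/block-box
```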

Building the Container Images

We’ll start by building the required images. The Cinder repo includes a Makefile to enable building openstack/loci images of Cinder. The Makefile includes variables to select the platform (Debian, Ubuntu, or CentOS) and also lets you specify, for each project, the branch from which you want to build the image. We’re building from source, so there’s a good deal of flexibility in what you can do.

Your choices include building from master, stable/xyz, or patch versions. Until the build-args option for specifying base images in the build is readily available, you won’t want to rename the images unless you’re ready to tinker and customize things a bit. (For more information about the image build and its options, check out the openstack/loci page on GitHub.)

For now, we’re just going to build the images we need to run block-box (cinder and cinder-lvm). We specify block-box as an argument to the make targets that build and pull the images we need. This takes four or five minutes, depending on the speed of your network connection.
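From the block-box directory, that amounts to running make; the target name below is an assumption based on the block-box Makefile at the time of writing, so check the Makefile in your checkout if it has moved:

```shell
cd cinder/contrib/block-box
make blockbox    # builds the cinder and cinder-lvm images via loci
```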

This will run docker build on the Dockerfiles in the loci-cinder repo. We’re just pulling the latest Debian base image, installing Cinder from source, then layering everything needed to run LVM and the LVM c-vol service into a cinder-lvm image on top of that. After a few minutes, you should see a couple of openstack/loci images:
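A quick way to check is to filter the local image list (exact image names and tags depend on the Makefile defaults in your checkout):

```shell
docker images | grep cinder
```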

Deployment

Once we have our images built, we’re ready to go. The block-box directory should have everything you need, and the default settings should work so you don’t have to mess with anything. Of course, you can go back later and modify things to suit your needs, but for now just launch it:
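Launching is a single docker-compose invocation from the block-box directory:

```shell
cd cinder/contrib/block-box
docker-compose up -d

# Confirm the api, scheduler, and volume containers are running
docker-compose ps
```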

That’s it! You’ve just deployed stand-alone Cinder using the LVM reference implementation. We’ve even included a cinder.rc file that you can source, and with which you can run some commands. If you want to do things like local attach, you’ll need to install cinderclient and the os-brick extension that goes with it. If you’re interested, the README file in the block-box directory has more info.
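For example, sourcing the rc file and creating a first volume might look like this (the volume name is just an illustration; the exact variables set by cinder.rc are whatever the file in your checkout defines):

```shell
cd cinder/contrib/block-box
source cinder.rc                  # exports the noauth endpoint/env settings

cinder create --name demo-vol 1   # create a 1 GiB volume
cinder list                       # it should show up as "available"
```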

As we mentioned, you can modify this easily to use your own external driver like Ceph or SolidFire, or any one of the 80+ backend devices supported by Cinder. Just remember to adjust the etc-cinder/cinder.conf file appropriately. If you need any extra packages, you’ll need to install them into the image yourself.
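As a sketch, a SolidFire stanza in etc-cinder/cinder.conf might look something like the following; the volume_driver path is the real driver class, but the address, credentials, and backend name are placeholders you’d replace with your own:

```ini
[DEFAULT]
enabled_backends = solidfire

[solidfire]
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 192.168.1.10
san_login = admin
san_password = secret
volume_backend_name = solidfire
```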

Everyday Use

At this point, you may want to plug this into Docker (if you want to plug it into Kubernetes, work on that is still in progress). You now have a Cinder deployment with noauth, so you can create, delete, and attach volumes.
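With cinder.rc sourced, a create-and-attach sequence might look like this sketch; local-attach comes from the brick extension installed earlier, and the volume ID placeholder is whatever `cinder list` reports:

```shell
source cinder.rc
cinder create --name my-vol 1

# local-attach needs root and a working iSCSI initiator;
# -E preserves the sourced noauth environment variables under sudo
sudo -E cinder local-attach <volume-id>
```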

You can run this on the node running the container, or another node in your deployment, just like you do with Cinder normally. It uses iSCSI and does an attach to the machine on which you ran this command (remember, you’ll need the iSCSI initiator installed and configured on that node). After that, blow it away:
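Tearing it back down is the reverse of the attach, again using the placeholder volume ID from `cinder list`:

```shell
sudo -E cinder local-detach <volume-id>
cinder delete <volume-id>
```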

Adding Another Backend (c-vol)

We don’t do multi-backend in this type of environment; instead, we just add another container running the backend we want. We can easily add to the base service we’ve created using additional compose files.

The file docker-compose-add-vol-service.yml provides an example additional compose file that will create another cinder-volume service configured to run the SolidFire backend. After launching the main compose file, run the following command:
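Compose merges the two files when you pass both with -f, so the base services and the new volume service come up together:

```shell
docker-compose -f docker-compose.yml \
               -f docker-compose-add-vol-service.yml up -d
```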

Once the services are initialized and the database is synchronized, you can add another backend by running this command:
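One common way to route requests to the new backend is a volume type keyed to its backend name; this is a sketch, and the `solidfire` names assume the volume_backend_name set in the add-on service’s cinder.conf:

```shell
# Create a volume type tied to the new backend
cinder type-create solidfire
cinder type-key solidfire set volume_backend_name=solidfire

# Volumes of this type now land on the SolidFire c-vol service
cinder create --name sf-vol --volume-type solidfire 1
```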

Note that things like network settings and ports are important here!

Don’t forget, you can watch the three-minute video of our demo at the top of this article, and reach out to us through the #openstack-cinder channel on IRC if you have any questions on getting started.

The OpenStack Foundation is a sponsor of The New Stack.

Feature image: A climate-controlled self-storage unit in Utrecht, Netherlands, by Hankwang, licensed under Creative Commons 3.0.
