Containers / Open Source

Latest Ubuntu Adds LXD 0.7 Hypervisor, Rendering Desktops an Endangered Species

23 Apr 2015 11:56am, by

The ability to run Linux containers in a server context on a Linux client system, in itself, doesn’t sound like anything to get excited about, unless you’re a retired English teacher longing for sentences to proofread. So let’s try this: Assume a hypervisor layer could allow desktop operating systems to host containers the way a server does. That includes live migration, the ability to pass active workloads between host machines.

A client system could grab an active, perhaps even running, workload directly from a network hub over an HTTPS connection, and begin running it without the need for installation. If Canonical would concentrate on promoting this ideal, using any common language spoken amongst human beings on the planet Earth besides a variant of Linux, it might just disrupt something.

Canonical’s Ubuntu project for live workload distribution, announced last November and running at full steam since then, is called LXD. It becomes a formal part of Ubuntu Linux with version 15.04, which goes live today.

Lucy in the X with Diamonds

“LXD” is the most unfortunate name ever attached to a piece of software, which is why Ubuntu does not pronounce it the way it appears. Although its predecessor and compatriot, LXC, is clearly pronounced “el-eks-see,” LXD is pronounced “lex-dee.”

All reminiscences of Timothy Leary aside, LXD is a client-side hypervisor intended to run Linux containers, including those intended for servers, with the types of security restrictions and kernel isolation normally associated with a typical VM hypervisor. Its goal is to make container-based workloads portable across servers and clients, while adding some of the flexibility that VM users have come to expect: for example, snapshotting a container while it is running.

“The concept is relatively simple,” announced leading Ubuntu architect and LXC/LXD co-creator Stéphane Graber, prior to introducing a concept that is ultimately complex. “It’s a daemon exporting an authenticated REST API both locally over a Unix socket and over the network using HTTPS. There are then two clients for this daemon: One is an OpenStack plugin, the other a standalone command line tool.”
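Graber’s description can be made concrete with a quick sketch of talking to that daemon. This is illustrative only; the Unix socket path, hostname, and certificate file names are assumptions, not output captured from a real system, and the exact paths varied across early LXD releases.

```
# Query the API root over the local Unix socket
# (socket path is an assumption)
$ curl --unix-socket /var/lib/lxd/unix.socket lxd/1.0

# The same REST API served over the network, authenticated
# with a TLS client certificate (hostname and files hypothetical)
$ curl -k --cert client.crt --key client.key https://lxd-host:8443/1.0
```

The point of the dual transport is exactly what Graber says: the same authenticated API answers a local client over the socket and a remote one over HTTPS.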

As it turns out, LXC will be the command line tool, in what will be a significant upgrade for what many Linux enthusiasts declare to have been the first Linux container virtualization environment. (Indeed, the first Docker versions utilized LXC as their execution environment, before Docker dropped its dependency on LXC in version 0.9.) But while earlier versions of LXC had users create templates that generated each container’s root file system, LXD relies on the fact that modern containers are built from images, which render those templates obsolete.

“LXD is our opportunity to start fresh,” wrote Graber in a post to his personal blog Tuesday. “We’re keeping LXC as the great low-level container manager that it is. And build [-ing] LXD on top of it, using LXC’s API to do all the low-level work. That achieves the best of both worlds: We keep our low-level container manager with its API and bindings, but skip using its tools and templates, instead replacing [them with] the new experience that LXD provides.”

Translation Matrix

The distribution mechanism for LXD containers is the jewel in the crown of this story. I’ll explain it first for you in Linux-speak, then translate it into English. (If I succeed, a bright future may await me at the BBC World Service.)

You launch the LXD daemon, then drive it with the LXC command line client. From there, you use the remote command to register the locations of various sources of Linux containers, which can be listed as a sort of rudimentary catalog. With that catalog, you use the launch command to select an image and run it locally. To spin up a local shell to execute commands inside the container, just as though you were issuing a remote command on a network, you use the familiar exec command.
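In the 2015-era CLI, that sequence might look something like the following sketch. The remote name, its URL, the image alias, and the container name “demo” are all hypothetical placeholders, not commands verified against a shipping release.

```
# Start the LXD daemon (on Ubuntu 15.04 this is typically
# handled by the init system rather than run by hand)
$ lxd &

# Register a remote image source (name and URL are hypothetical)
$ lxc remote add images https://images.example.com

# Browse the rudimentary catalog of images the remote offers
$ lxc image list images:

# Pull an image and run it locally as a container named "demo"
$ lxc launch images:ubuntu/trusty demo

# Open a shell inside the running container, as if over a network
$ lxc exec demo -- /bin/bash
```

Note that `lxd` is the daemon and `lxc` is the client; every command after the first is just an authenticated REST call under the hood.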

And now, the main points of the news again, in English.

You remember desktops? The whole point of desktops was to give you access to the things you’ve installed locally — your programs, and your documents. Well, assuming everything you need to run can be run from a container, and everything you need to store can be stored in a cloud, who needs desktops?

And there is the real headline: LXD could completely transform a desktop client operating system into something more like a custom-controlled crane for containers, lifting them from their storage hub, dropping them into place, and running them. “Installation” becomes irrelevant. What’s more, it actually becomes more convenient for personal storage to be addressable using network protocols than local addresses.

Perhaps the “D” stands for “disruption.”

What the “D” does not stand for, at least not yet, is “Docker,” despite Ubuntu flying the bright blue whale banner. As Ubuntu’s marketing page for LXD reads, “For the most efficient way to deliver your binaries to a platform for execution, Docker is the dance for us.”

Apparently that dance was a belly-flop, akin perhaps to something attempted by Steve Wozniak on “Dancing with the Stars.”

Stéphane Graber makes it clear that while Docker may have originally been built for LXC, LXD is somewhat distinct from Docker. “The focus of LXD is on system containers,” he writes. “That is, a container which runs a clean copy of a Linux distribution or a full appliance. From a design perspective, LXD doesn’t care about what’s running in the container. That’s very different from Docker or Rocket, which are application container managers (as opposed to system container managers) and so focus on distributing apps as containers, and so very much care about what runs inside the container.”

If Canonical can see past this minor discrepancy and learn to speak everyone else’s language, it has the potential to instill general-purpose, common-sense relevance for Linux client operating systems for perhaps the first time this decade.

Feature image via Flickr Creative Commons.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.