
Mesosphere Now Includes Kubernetes for Managing Clustered Containers

Apr 22nd, 2015 6:01am
Feature image via Flickr Creative Commons.

The most demonstrably effective and efficient data center scheduling and apportionment platform to date will not only support, but will actually include, Google’s system for managing clustered Linux containers. This comes as a result of Mesosphere, the commercial backer of the Apache Mesos project, acting on the agreement it reached with Google last August.

Now, the latest preview of Mesosphere’s Data Center Operating System (DCOS) — available to early access registrants — gives developers the means for creating Mesosphere pods around applications, and then launching those applications in a large-scale pooled compute and storage environment.

Where DCOS is Going

The New Stack’s Alex Williams profiled DCOS last December, after Mesosphere raised some $36 million in venture funding. Even since that time, the marketing message of DCOS has matured, and its rougher edges have been smoothed over.

Benjamin Hindman

“DCOS is a layer of software that lets us combine all the machines you have in your data center — whether they’re virtual machines or physical machines — and pool them all together, like one big computer,” explained Benjamin Hindman, Mesosphere’s co-founder and DCOS’ chief architect, during a strategy briefing with VMware on Monday. “So it’s easier to run your applications, and easier to manage your applications, and really treat the developer as a first-class entity in the data center.”

In an interview with The New Stack, Mesosphere Senior Vice President Matthew Trifiro provided further detail. At the core of DCOS is Apache Mesos, a kind of “reverse virtualization” system that pools physical resources across the data center, and separates that pool from the application with an abstraction layer. This way applications can request resources from wherever they may reside in the data center, not just locally or within physically networked clusters.

DCOS services are actually schedulers written to the Mesos API, with added extensions to DCOS, Trifiro explained. A scheduler petitions Mesos for resources, and Mesos responds by apportioning what resources are available. The scheduler then decides how it wants to utilize those resources, using what he described as a fairly sophisticated model.
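This two-level division of labor — Mesos decides which framework gets an offer, and the framework's scheduler decides what to launch with it — can be sketched as a toy simulation. The `Offer` and `WebScheduler` names and the resource sizes below are invented for illustration; this is not the real Mesos API.

```python
# Toy model of Mesos's two-level scheduling: the master offers
# resources, and a framework scheduler decides what to launch.
class Offer:
    def __init__(self, agent, cpus, mem):
        self.agent, self.cpus, self.mem = agent, cpus, mem

class WebScheduler:
    """Accepts offers until it has placed the tasks it needs."""
    def __init__(self, tasks_needed, cpus_per_task=0.5, mem_per_task=128):
        self.tasks_needed = tasks_needed
        self.cpus_per_task = cpus_per_task
        self.mem_per_task = mem_per_task
        self.placements = []

    def resource_offer(self, offer):
        # Level two: the framework, not Mesos, decides how the
        # offered resources get carved up into tasks.
        while (self.tasks_needed
               and offer.cpus >= self.cpus_per_task
               and offer.mem >= self.mem_per_task):
            offer.cpus -= self.cpus_per_task
            offer.mem -= self.mem_per_task
            self.placements.append(offer.agent)
            self.tasks_needed -= 1

scheduler = WebScheduler(tasks_needed=3)
for offer in [Offer("agent-1", 1.0, 512), Offer("agent-2", 4.0, 2048)]:
    scheduler.resource_offer(offer)

print(scheduler.placements)  # agent-1 fits two tasks, agent-2 the third
```

The point of the model is that Mesos never needs to understand the framework's placement logic — it only makes offers and honors the scheduler's decisions.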

“What people build on top of the DCOS in order to run an end-user application, are these services,” he said. Marathon is the name of one of these services, designed by Mesosphere to run long-running programs, such as Web servers. Marathon coordinates with Mesos for the scheduling of workloads, determining how it pieces together resources and how it advises Mesos about how best to deploy them.

“Kubernetes is analogous to Marathon,” he continued. “It is actually something that someone might want to use in addition to Marathon, or in lieu of Marathon. Kubernetes on DCOS is the entire Kubernetes scheduler talking to this low-level API, that allows it to schedule pods — these co-located containers. Part of the magic of Mesos is that it makes it possible to write all these alternative scheduling frameworks, or services.”

“Use Both”

In a blog post Wednesday, Google Product Manager Craig McLuckie described the simultaneous availability of Kubernetes and Marathon on DCOS as having one’s cake and eating it too. “Use both,” wrote McLuckie. Previously, Docker has described the ethic around interchangeable parts as “batteries included but removable.”

Of course, in the data center, systems analysts and architects choose the tools that perform best. And when the performance divide is made clear, they tend to choose the same tools. So why would Mesosphere give DCOS users a choice if there’s a clear performance difference?

I put the question to Mesosphere’s Trifiro, who responded by stating the question implies a certain misconception.

Matthew Trifiro

“When you run Kubernetes on Google’s cloud, you never say, ‘Well, am I going to run Kubernetes or Google Container Engine?’ Because Kubernetes runs on Google Container Engine,” said Trifiro, illuminating an important relationship in this new DCOS scheme.

“When you run Kubernetes on-premises, or in another cloud, or even Google Compute Engine with DCOS, you don’t ask that question,” he continued. “Kubernetes has in it some scheduling functionality at a low level, but that functionality isn’t very sophisticated, and it doesn’t do what we do. When Kubernetes wants to schedule something, it can actually be mapped to Mesos, and Mesos can do the actual scheduling of the pod. For the Kubernetes app, it doesn’t know the difference.”

Trifiro went on to explain a development environment where Kubernetes is utilized to model an application, written perhaps in Go or Python, around a manageable pod. When it comes time for that model to be expanded to data center scale, developers do not want to have to remodel it. DCOS provides a means for that pod to be integrated into a Mesos scheduling environment where Mesos provides the resources that the pod would otherwise have fetched for itself. But the pod doesn’t actually have to know the difference.

On the Other Hand

And yet, Trifiro later admitted, there is one very critical difference which pods may well exploit: the availability of big data resources, which DCOS also manages at scale. And yes, you might want to consider remodeling your Kubernetes app for this.

“A modern application is doing lots of things,” said Trifiro. “For instance, if I’m running a site that’s processing real-time coupons for restaurant reservations, [I want to] serve up results to my customers based on where they’re standing, what offers are available, what they’ve purchased in the past, what preferences I’ve figured out. I don’t want to write my own big data analytics in Go. I want to use something like Spark, perhaps even Spark Streaming. Having Spark running on the same cluster, and on the same network and same machines, shared elastically with Kubernetes, means that my Kubernetes app can spawn Spark workloads to get these real-time analyses.”

Trifiro said all the code that Mesosphere wrote to integrate Kubernetes into the DCOS environment is being contributed to the core of the Kubernetes open source project. “This really represents a significant step,” he told us, “in both of us [Mesosphere and Google] recognizing how critical our two technologies are to the new data center stack.”

TNS owner Insight Partners is an investor in: The New Stack, Docker.