A community edition of Mesosphere DCOS, the first cluster-based, container-oriented operating system, is being made available to customers for deployment on Amazon’s AWS platform, Mesosphere executives confirmed to The New Stack. The announcement comes as a commercial edition of Mesosphere for on-premises and hybrid cloud deployment exits public beta and enters general availability.
“There are no limits to it. It’s a very robust package,” said Mesosphere Senior Vice President Matthew Trifiro, “and it’s available for free. We’re also accepting early access applications to Google Cloud Platform and Microsoft Azure.”
Mesosphere makes use of the Apache Mesos scheduler to orchestrate the distribution of related and unrelated processes over clusters of disparate servers. With this first general release of the community edition, that distribution will take place over Amazon’s servers, although technically the system was designed to extend container-based and microservices deployments across clouds. Once the Google and Azure editions become generally available, conceivably all three public clouds could be leveraged by a single cluster scheduler simultaneously.
The enterprise edition will support all three cloud providers out of the box, said Trifiro, as well as on-premises environments using any major Linux distribution. “It’ll run in a virtualized environment on VMware or OpenStack, and it’ll also run on bare metal,” he added.
Hiking Up the Utilization Curve
Kerberos security will be offered for the first time in the enterprise edition, said Mesosphere CEO Florian Leibert. It will add authentication and identity management, enabling DCOS to enforce policies that limit access to applications and resources to particular users.
Leibert explained that in extremely complex deployments, the final GA editions of DCOS will prove their worth by driving up utilization rates through the use of advanced scheduling algorithms. Those algorithms, Leibert said, were picked up through Mesosphere’s acquisition last September of the services of Stanford University Professor Christos Kozyrakis. While at Stanford, Prof. Kozyrakis had led its Multiscale Architecture and Systems Team, which Trifiro said paved the route ahead for Mesosphere, particularly with respect to its ability to consolidate resource consumption at scale onto a minimum number of nodes.
“In a typical data center today, all its different clusters will get maybe 8 to 15 percent utilization, average,” remarked Trifiro. “Woefully low. Maybe 80, 85 percent of your servers are going completely wasted, because you’ve over-provisioned them for peak load. And any excess capacity isn’t utilized, because everything’s sitting in a static partition.”
That problem is solved in DCOS, he said, through the use of scheduling and allocation algorithms derived in part from Kozyrakis’ work, making optimum use of available resources. When a workload has a required SLA objective, the availability of resources when it needs them must be assured, he said, while other workloads in a cluster may be addressed on a lower-priority, best-effort basis — for instance, an analytics job expected to consume 24 hours anyway.
With DCOS, a lower-priority process can be assigned to run on slack capacity when it becomes available throughout the day, and also paused whenever a higher-priority workload needs to meet SLA commitments, Trifiro explained. The concept is called “over-subscription” (which, inevitably, some system builder somewhere will compare to overclocking). “It allows you to run more workload on your cluster than, theoretically, your cluster can handle given its guarantees.
“Because our scheduler and allocator know what’s actually being used in real-time,” he continued, “any slack capacity can be used for these best-effort workloads.” He said he has seen utilization for DCOS-scheduled clusters triple over conventional workload allocation, and that adding over-subscription capability could drive typical utilization from below 15 percent to over 90 percent.
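As a rough illustration of the over-subscription idea Trifiro describes, the Python sketch below models a single node that lets a best-effort task run on slack capacity and evicts it the moment a guaranteed (SLA-bound) workload needs the room. The `Task` and `Node` classes, field names, and CPU figures are all hypothetical stand-ins, not Mesos or DCOS code:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpus: float
    guaranteed: bool  # True = has an SLA; False = best-effort

class Node:
    """Toy node that over-subscribes: best-effort tasks run on slack
    capacity and are evicted when guaranteed tasks need the CPUs."""
    def __init__(self, cpus: float):
        self.cpus = cpus
        self.tasks: list[Task] = []

    def used(self) -> float:
        return sum(t.cpus for t in self.tasks)

    def launch(self, task: Task) -> list[Task]:
        """Try to place `task`; returns any best-effort tasks evicted."""
        evicted = []
        if task.guaranteed:
            # Reclaim slack: pause best-effort work until the SLA task fits.
            for t in sorted(self.tasks, key=lambda t: t.guaranteed):
                if self.used() + task.cpus <= self.cpus:
                    break
                if not t.guaranteed:
                    self.tasks.remove(t)
                    evicted.append(t)
        if self.used() + task.cpus <= self.cpus:
            self.tasks.append(task)
        return evicted

node = Node(cpus=8)
node.launch(Task("web-tier", 6, guaranteed=True))
node.launch(Task("analytics", 2, guaranteed=False))   # runs on slack
evicted = node.launch(Task("web-burst", 2, guaranteed=True))
print([t.name for t in evicted])  # → ['analytics']
```

The analytics job borrows the two idle CPUs all day, but the moment the web tier bursts, it is the one paused, so the node’s guarantees still hold even though more work was admitted than the cluster could handle at once.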
“We have one customer who runs regularly at 95 percent utilization,” he noted.
Over the past year, Prof. Kozyrakis had developed a machine learning algorithm that uses historical performance data on running processes and workloads to predict their future behavior and usage patterns, Leibert said. This is in addition to the “weighted dominant resource fairness” algorithms that the underlying Mesos layer uses.
“A batch analytics workload is probably never as high a priority,” he said, “as something that is user-facing, such as your Web tier. We would assign different weights to both of them, which would mean that, if push came to shove and you had to eject one workload because you’re just out of capacity and you’re actually under-provisioned … you want to have some sort of action plan. In our case, that action plan boils down to a simple assignment of weights to different classes of applications, so you can shut off your analytics cluster in order to start up more of your Web tier. You can always run the analytics later on.”
Filling the Security Gaps
Leibert offered a use case for the added Kerberos security layer:
Suppose the customer is a hedge fund whose offices run proprietary trading algorithms. A public cloud or hybrid deployment could put the sanctity of those algorithms at risk.
With a clustered operating system, the theoretical capability exists for a server outside the cluster to appear to have joined the cluster and expose the data structures behind the algorithms. From there, it may be trivial for a smart developer to work the formula backwards to extract the algorithms themselves.
“So what we allow you to do, when you need to increase the level of security, is plug in with Kerberos, where you have basically a register of all of the available machines,” Leibert explained to us. “All of them get equipped with a secret and they’re allowed to authorize themselves. Now, DCOS will only allow your individual computer to join the ensemble if it’s authorized to.”
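Kerberos itself involves tickets and a key distribution center, but the gating logic Leibert describes (a register of machines, each equipped with a secret, checked before a node may join the ensemble) can be approximated in a short sketch. Everything here, from the register to the HMAC challenge-response, is an illustrative stand-in rather than DCOS’s implementation:

```python
import hashlib
import hmac
import os

# Hypothetical register of machines allowed into the cluster, each
# provisioned with a secret (a simplified stand-in for Kerberos keytabs).
REGISTER = {"node-01": b"s3cret-a", "node-02": b"s3cret-b"}

def challenge() -> bytes:
    """Fresh random nonce sent to a machine asking to join."""
    return os.urandom(16)

def respond(secret: bytes, nonce: bytes) -> str:
    """The joining machine proves it holds the secret, without sending it."""
    return hmac.new(secret, nonce, hashlib.sha256).hexdigest()

def may_join(node_id: str, nonce: bytes, response: str) -> bool:
    """Admit a node only if it is registered and its response checks out."""
    secret = REGISTER.get(node_id)
    if secret is None:
        return False  # unknown machine: never joins the ensemble
    return hmac.compare_digest(respond(secret, nonce), response)

nonce = challenge()
print(may_join("node-01", nonce, respond(b"s3cret-a", nonce)))  # True
print(may_join("rogue-box", nonce, "whatever"))                 # False
```

The point of the hedge-fund example is exactly this check: a server outside the register can make all the cluster-joining noises it likes, but without the provisioned secret it never sees the data structures behind the algorithms.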
The CEO then shared another situation he’s seen personally: by default, Hadoop clusters simply allow the addition of nodes without the least bit of skepticism about who or what is doing the adding. Hadoop is not, in other words, secure by default, for whatever crazy reason. So Mesosphere effectively re-inserts the capability for restricted access into the Hadoop process.
“You can now authorize who can run which application, or who can even be part of the DCOS cluster,” Leibert said.
In order to properly automate scheduling decisions, explained Matt Trifiro, an orchestrator must weigh several attributes each time, on a case-by-case basis: for example, power consumption limits, existing relative utilization levels of hardware, preferred localities for workloads and their data, preferences for hardware that uses accelerator chips, preferred proximity to local data, and ascertained security levels for data centers participating in clusters. These decisions, he remarked, are best left to someone other than humans.
“Humans are actually very poor,” Trifiro said, “at reasoning at scale and at real-time.”
Feature image via Flickr Creative Commons.