Mesosphere Marathon 1.0 Debuts, Supports Stateful Containers
In a move to cement a position of prominence in the data center workload orchestration market, Mesosphere officially unveiled version 1.0 of its Marathon orchestration platform.
The company also announced that it had closed a $73.5 million Series C funding round, led by none other than HPE and Microsoft. Yes, the two names once most synonymous with the old “Wintel” server architecture are backing the maker of the Data Center Operating System (DCOS), which orchestrates containers for both Linux and Windows.
But that’s not the biggest twist in this story: With the 1.0 release, Marathon will now support stateful services, enabling the persistent volumes feature introduced in Apache Mesos last July, and putting the system in head-to-head contention against up-and-coming Rancher.
“Not only can you deploy 12-factor stateless apps, but you can also deploy apps that carry state, like a MySQL or Postgres database,” said Mesosphere Senior Vice President Matt Trifiro, in an interview with The New Stack.
At Last, Persistence Pays Off
The Apache Mesos scheduler was originally conceived to facilitate stateless services, which scale easily and are best suited to microservices architectures. At first, statelessness proponents praised this limitation as an intentional design choice. But in data centers where huge data sets must be persistently maintained, it has made little sense for developers to keep relying on workarounds such as issuing so-called “dynamic reservations” (which guarantee a framework exclusive access to a resource it requests), or having containers write their state to publicly accessible volumes in a distributed file system before exiting.
Now, the company says, with Marathon 1.0, persistent volumes will be directly supported and accounted for by DCOS and the scheduler. Managing services this way turns out to have a side benefit, as Trifiro explained: “Marathon is multi-tenant, which means that different teams can deploy different apps based on their authentication and authorization, and all those apps can run multi-tenant on the same cluster.”
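Trifiro’s stateful-database scenario can be illustrated with the kind of app definition Marathon accepts over its REST API. The field names below follow Marathon’s documented local persistent-volume schema from this era; the app id, image, and sizes are purely illustrative, and in practice the JSON would be POSTed to Marathon’s /v2/apps endpoint.

```python
# A sketch of a Marathon app definition requesting a local persistent
# volume. The id, image, and sizes are illustrative examples only.
import json

postgres_app = {
    "id": "/team-a/postgres",
    "cpus": 1,
    "mem": 512,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "postgres:9.5"},
        "volumes": [
            {
                # Data written here survives task restarts on the agent,
                # instead of vanishing with the container's sandbox
                "containerPath": "pgdata",
                "mode": "RW",
                "persistent": {"size": 1024},  # MiB reserved on the agent
            }
        ],
    },
    # Keep the task pinned to its reserved resources so a restarted task
    # can find its volume again, rather than launching anywhere
    "residency": {"taskLostBehavior": "WAIT_FOREVER"},
}

print(json.dumps(postgres_app, indent=2))
```

Under the hood, this is the persistent-volumes feature Mesos introduced: the scheduler dynamically reserves the disk on one agent and re-offers it to the same framework, so the database task lands back on its data.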
We do have to be careful with our terminology here because Mesos can run scheduling on multi-tenant cloud platforms. What Trifiro is saying is that Marathon can now present different pictures of the workload environment, tailored to the needs and authorizations of different tenants. This eliminates the need for data centers to stage multiple Marathon instances for separate groups, or implement artificial network isolation.
Mesosphere to Embrace Jenkins and CI/CD
Mesosphere, we also learned Thursday, plans to unveil a new and complementary component to Marathon and DCOS, called Velocity. Velocity will bring Mesosphere into the continuous integration/continuous deployment (CI/CD) space, pitting it in competition with automated deployment platforms such as DCHQ, Jenkins-aligned platforms such as CloudBees’ own, and Jenkins-independent platforms such as Shippable.
You’ve heard the phrase “Inspired by a true story.” In Velocity’s case, it was inspired by the reality of Mesos users employing Jenkins instances as “build slaves.”
As Mesosphere chief architect (and Mesos co-creator) Benjamin Hindman explained to The New Stack, Velocity will serve as a kind of “Jenkins-as-a-Service,” or quite literally a sort of “ephemeral Jenkins,” where staging environments are spun up and wound down as necessary for particular projects (or “build slaves,” if you prefer the less politically correct metaphor).
“The way a lot of organizations end up setting up their CI/CDs, they basically deploy a Jenkins per team,” said Hindman. “In fact, in some organizations, they deploy a Jenkins per developer. And when they do that, they have all these tiny little clusters that are running in their data centers, in an extremely statically partitioned, pain-in-the-butt way from a management perspective.”
“What that means is if you’ve got three nodes that were provisioned for one organization, and another three nodes for a different organization, and so on, at any point in time, when one organization has a ton of jobs that are queued up to be run, and another part has no jobs, unfortunately, you can’t leverage those other resources that were previously provisioned,” Hindman elaborated.
This deployment scheme did have the advantage, however, of supplying the necessary network isolation between organizations and their users. As Hindman continued, Velocity, running on DCOS, will proactively provision the resources necessary for each Jenkins instance in all organizations sharing the data center, at any point in time. And DCOS will maintain service partitioning between users, giving them the continued appearance of resource isolation, even though their instances may be sharing the same multi-tenant cluster.
Hindman also said that his team has built a deploy module for Jenkins that communicates with Marathon, enabling a direct interface between testing and staging environments and production environments.
With Velocity’s forthcoming integration with git, he explained, there’s a new Jenkins plug-in that will recognize each push request. That request triggers a build, which could conceivably pull dependent components from JFrog’s Artifactory or push built components into Artifactory. The built component is then pushed into Marathon, which can then be instructed to deploy containers based on that component using a blue/green strategy, where old and new versions co-exist for a time.
“It won’t require a user, at any point of the procedure, to need to go kick anything off,” said Hindman. “It can be completely kicked off by particular branches or repositories in git.”
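The blue/green hand-off in that pipeline can be sketched in a few lines. The helper below only derives the “green” app definition from the running “blue” one; the surrounding steps (POST the green app to Marathon’s /v2/apps, poll it until its health checks pass, repoint traffic, then DELETE the blue app) use real Marathon REST endpoints, but the function name, ids, and images here are hypothetical.

```python
# Sketch of the blue/green step: derive a "green" copy of a running
# ("blue") Marathon app with the newly built image, so old and new
# versions co-exist on the cluster until traffic is switched over.
import copy

def green_app(blue_app: dict, new_image: str) -> dict:
    """Return a green variant of blue_app running new_image.

    Deployment flow (via Marathon's REST API, not shown here):
      1. POST the returned definition to /v2/apps
      2. Poll /v2/apps/{id} until all tasks report healthy
      3. Repoint the load balancer, then DELETE the blue app
    """
    app = copy.deepcopy(blue_app)          # leave the blue app untouched
    app["id"] = blue_app["id"] + "-green"  # distinct id so both can run
    app["container"]["docker"]["image"] = new_image
    return app

blue = {
    "id": "/shop/api",
    "instances": 3,
    "container": {"type": "DOCKER", "docker": {"image": "shop/api:1.0"}},
}
print(green_app(blue, "shop/api:1.1")["id"])  # → /shop/api-green
```

Because the green app is just another Marathon app definition, the same multi-tenant scheduling applies: both versions draw from the shared cluster rather than from statically partitioned build nodes.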
We are no longer looking at, or talking about, an alternative or experimental stack when we refer to container orchestration and continuous integration at scale. Mesos-style and Kubernetes-style scheduling are now mainstays of the data center, and this latest round of investment from Microsoft and HPE stands as testament.