The recent announcement of Mesos on Windows means developers and organizations that work across Linux and Windows platforms can keep using their own tools without heavy resource-management overhead. Those working with Google Compute Engine may prefer Kubernetes, while people accustomed to Microsoft Azure may enjoy the Mesosphere workflow pipeline. Each has its own strengths and shortcomings, though the gap between stack management services narrows as more technology is brought to other platforms.
With more stack management and container workflow tools needing to bridge the gap between those developing on Linux or OS X and those developing on Windows, there is a marked need for tools that speak to the needs of cross-platform teams.
How Mesos Stacks Up
Mesos has a few pitfalls, the first of which is how it interacts with Google’s Kubernetes. Kubernetes is also a container management system, offering developers a lightweight cluster management tool for working with similarly packaged projects. Mesos is geared more toward larger data workloads, such as Spark clusters. Kubernetes has been brought to Mesos through a partnership with Mesosphere, allowing developers to use it to manage clusters that scale to fit the needs of teams large and small.
As development teams demand more from their services, both Mesos and Kubernetes are built for flexibility without sacrificing control over resource management, allowing clusters to be scaled up, scaled down or paused as needed.
Mesos uses ZooKeeper for master election and discovery. Apache Aurora is a scheduler that runs on Mesos. Schedulers such as Kubernetes can also run atop Mesos and share the cluster (e.g., running Kubernetes, Storm, Spark and Hadoop on the same Mesos cluster). Mesos itself is written in C++, while associated frameworks are often written in Java or Scala. For example, Chronos and Marathon both use Scala, while Storm uses Java.
How Mesos works with different frameworks is a matter of some conversation, as Java remains a mainstay but rose to acceptance in a different era. Today, programming languages such as Go are increasingly popular, driving interest in platforms such as Docker and Kubernetes, both of which are written in Go. In one conversation on Hacker News, a user pointed out that Mesos, like anything else, has its issues, but is useful for microservices:
“If you can package your service into a Docker container, then you can launch it into the cluster and Chronos/Marathon/Mesos will take care of making sure that it’s run/running.”
The same user on Hacker News asked: Is Mesos really wedded to anything? Its components are API-driven.
“I apt-get install it, I run it, I send jobs to it, it works and it behaves well. Better, I can poke at the APIs of any of the services to find out what is happening.”
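The “package a container, hand it to Marathon” workflow the user describes can be sketched in a few lines. The snippet below builds a minimal Marathon v2 app definition for a Dockerized service as a plain dictionary; the app id, image name and resource figures are illustrative placeholders, not values from this article.

```python
import json

def marathon_app(app_id, image, instances=2, cpus=0.25, mem=128.0):
    """Build a minimal Marathon v2 app definition for a Docker container.

    A sketch for illustration -- the id, image and resource values here
    are made up, not taken from any real deployment.
    """
    return {
        "id": app_id,
        "instances": instances,  # Marathon keeps this many copies running
        "cpus": cpus,
        "mem": mem,
        "container": {
            "type": "DOCKER",
            "docker": {"image": image, "network": "BRIDGE"},
        },
    }

app = marathon_app("/my-service", "nginx:latest")
payload = json.dumps(app, indent=2)
print(payload)  # POST this JSON to Marathon's /v2/apps endpoint to launch it
```

Once posted, Marathon (running as a framework on Mesos) takes over restart and placement, which is the “it’s run/running” guarantee the quote refers to.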
There is some debate about the resource overhead of Mesos and how it compares to Kubernetes. Mesos makes use of event-driven message passing rather than coordinating via etcd, as Kubernetes does. As one Mesos contributor explained, there are large-scale production users that require resource efficiency due to the scale of their clusters: when tens of thousands of Mesos agents run in each cluster, efficiency matters for cost.
Mesos will support etcd, Consul and other service discovery mechanisms as alternatives to ZooKeeper as integrations arise. There has also been demand for using an external discovery system altogether. The critical driver for Mesos is reliability, since the system is in operation on large clusters at Twitter, Apple and elsewhere. ZooKeeper is very reliable and “battle-hardened,” whereas etcd is a newer contender in the coordination and discovery space.
Kubernetes pods on Mesos can now run alongside data processing applications such as Apache Hadoop on a single cluster. However, there are issues to be aware of when running Kubernetes on Mesos. It is currently not possible to specify pod placement constraints for the kubernetes-mesos scheduler. Another current issue is that Mesos defines ports differently from Kubernetes, which can result in conflicts when a host port is not declared or has been assigned a value that is out of range.
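To make the port issue concrete, here is a hypothetical pre-flight check in the spirit of the validation the kubernetes-mesos scheduler must perform: it flags host ports that are undeclared, out of the valid TCP range, or colliding on the same host. The function and its rules are illustrative only, not the scheduler’s actual code.

```python
def check_host_ports(declared_ports):
    """Flag host-port declarations that would conflict on a single host.

    `declared_ports` maps container name -> requested host port (or None
    when no host port was declared). Returns a list of problem strings.
    Illustrative sketch -- not the kubernetes-mesos scheduler's real logic.
    """
    problems = []
    seen = {}
    for name, port in declared_ports.items():
        if port is None:
            problems.append(f"{name}: no host port declared")
        elif not (0 < port <= 65535):
            problems.append(f"{name}: port {port} out of range")
        elif port in seen:
            problems.append(f"{name}: port {port} already taken by {seen[port]}")
        else:
            seen[port] = name
    return problems

issues = check_host_ports({"web": 8080, "api": 8080, "worker": 70000, "cron": None})
for issue in issues:
    print(issue)
```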
Orphaned pods are also a concern listed in the current issue list of the GitHub repository for kubernetes-mesos. Orphaned pods are created when a Docker container takes longer to terminate than executor_shutdown_grace_period, which results in some containers not being terminated and continuing to run indefinitely. Those wishing to work with Kubernetes on Mesos should be aware that the project is at a 1.0 release, having recently come out of beta; as such, bugs or hiccups can occur. If you run into a bug while working with Kubernetes on Mesos that is not on the aforementioned issue list, follow the instructions provided in the repository for reporting it.
How Kubernetes Compares
Kubernetes offers cluster management that is well adapted to today’s technology stacks. Whether running CoreOS, Red Hat or any of the multitude of operating systems available, Kubernetes can spin up clusters in seconds. As an open source project created by Google, Kubernetes continues to be developed and shaped by the community that relies on it for daily workflow management on cluster-based projects. Kubernetes is written in Go, which helps keep it fast, lightweight and more responsive than tools written in languages such as Python.
Between pods, labels and services, Kubernetes offers a robust way to interact with clusters:
- Pods are small groups of Docker containers, maintained as a unit within Kubernetes. Pods are easily deployable, resulting in less downtime when testing a build or debugging in QA.
- Labels are exactly as they sound, used to organize groups of objects determined by their key:value pairs.
- Services are used for load balancing, providing a centralized name and address for a set of pods.
- Clusters on Kubernetes eliminate the need for developers to worry about physical machines, acting as lightweight VMs in their own right, each capable of handling tasks that require scalability.
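The first three concepts work together: a service finds its pods by matching labels. The sketch below builds a pod and a service as plain dictionaries and checks that the service’s label selector matches the pod, which is how Kubernetes decides where to route traffic; the names and labels are made up for illustration.

```python
# A minimal pod: a group of containers plus identifying labels.
pod = {
    "kind": "Pod",
    "metadata": {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    "spec": {"containers": [{"name": "nginx", "image": "nginx:latest",
                             "ports": [{"containerPort": 80}]}]},
}

# A service selects pods by label and load-balances across them.
service = {
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {"selector": {"app": "web"}, "ports": [{"port": 80}]},
}

def selects(service, pod):
    """True when every key:value pair in the service selector is on the pod."""
    labels = pod["metadata"]["labels"]
    return all(labels.get(k) == v for k, v in service["spec"]["selector"].items())

print(selects(service, pod))  # the service will route traffic to this pod
```

Because the selector only requires `app: web`, relabeling or replacing pods moves them in and out of the service without touching the service definition itself.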
Virtual machines can require large amounts of resources, especially when developing on older machines. Running hypervisors can stress a system if resources are not allocated correctly, or if a user has not specified how much should be reserved for compiling logs or other tasks the project demands. Kubernetes makes setting up multiple virtual clusters simpler, allowing stack management to shed unwanted layers of software that bog down systems. Using Kubernetes for cluster management allows for high-level task monitoring, resource allocation and application scaling, whilst offering the control needed to ensure applications run smoothly.
Kubernetes is often used in application development, with software development teams able to co-locate containers and run them alongside one another. This can be extremely useful during debugging, or leading up to production when multiple application builds must be tested for stability. Kubernetes allows pods to communicate with other pods regardless of whether they are on the same host network. This addresses an issue larger teams can run into when launching projects at scale, as Docker only allows containers to exchange information with one another when they are located on the same host machine.
Kubernetes allows for quick-scaling, lightweight cluster management, while Mesos requires more resources; Mesos may be the better choice for larger teams with more demanding projects already running at scale in production. With the announcement earlier this year that Kubernetes is now available on Mesos, developers have the choice of which cluster management service to use when starting a project or importing existing applications.
Overall, the methods for cluster management continue to change as containers become the new development mainstay. Working with both Mesos and Kubernetes is a reality for many software developers, with teams embracing the ability to use both platforms in their development stack.