The Case for Multiple Orchestrators
HashiCorp sponsored this post.
For fans of the Lord of the Rings trilogy, it’s a memorable scene when the hobbits first confront the prospect that life outside the Shire is very different, especially at breakfast. Much like Pippin questioning Aragorn, when we’ve recently talked with organizations about orchestrators, the conversation follows a familiar line: “Yes, we have one orchestrator, but what about a second orchestrator?” It’s an interesting evolution in the market, but not a completely unexpected one. In this post, we’ll dive into why organizations are pursuing a multi-orchestrator strategy and some of the benefits it provides.
Setting the Stage
Before diving into why organizations are incorporating multiple orchestration tools, it’s important to review what orchestration is. At its core, an orchestrator automates the deployment of applications and the management of the underlying compute resources. Containers have accelerated the need for these tools because of how quickly they enable developers to package applications. In the past, developers would hand code to system admins, who would manually deploy each application on a specific long-running physical server in the office. Every time the application was updated, the admin would need to manually update both the application code and the environment (operating system, installed packages) on that server.
Since development lifecycles and application packaging often took weeks or months, this was a viable solution for versioning and upgrading applications. The emergence of containers, public cloud and CI/CD tools removed this bottleneck, enabling developers to ship rapid incremental changes to their applications with portable, lightweight packaging. By adopting the public cloud, organizations could easily provision servers virtually and remotely, treating them as an interchangeable, ephemeral pool of resources. Orchestration tools like Mesos, Nomad and Kubernetes took things a step further by empowering enterprises to fully decouple applications from servers. Applications could now be deployed and bin-packed onto any running server remotely from a single interface, for simplicity, redundancy and resilience.
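To make the bin-packing idea concrete, here is a minimal sketch of the core scheduling concept: place each application’s resource request onto the first server with enough free capacity. The server names, apps and memory figures are hypothetical, and real schedulers weigh many more dimensions (CPU, affinity, spread), but the basic placement loop looks like this:

```python
def first_fit(servers, apps):
    """Assign each app to the first server with enough free memory.

    servers: dict of server name -> free memory in MB (mutated as apps land)
    apps: list of (app name, requested memory in MB) tuples
    Returns a dict of app name -> server name; unplaced apps map to None.
    """
    placement = {}
    for name, mem in apps:
        placement[name] = None
        for server, free in servers.items():
            if free >= mem:
                servers[server] = free - mem  # reserve the capacity
                placement[name] = server
                break
    return placement

# Hypothetical cluster and workloads, purely for illustration.
servers = {"node-1": 4096, "node-2": 2048}
apps = [("api", 3000), ("worker", 1500), ("cache", 1000)]
print(first_fit(servers, apps))
# → {'api': 'node-1', 'worker': 'node-2', 'cache': 'node-1'}
```

The point of the sketch is the decoupling: no application names a server up front; the scheduler picks one from the pool at deploy time.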
So Why Would You Need More Than One?
As outlined above, this workflow would seem to steer organizations toward a single tool. Standard wisdom like “this is an organization-wide concern,” or “learning multiple tools can be challenging,” or “a single point of contact for troubleshooting is always preferred,” suggests that enterprises would find what works for them and standardize on it. In reality, though, enterprises operate a lot more like microservices than monoliths. Businesses are composed of multiple groups of people with different tasks, infrastructure environments, technical competencies, budgets and business SLAs.
Each group will have different requirements and will ultimately leverage technologies depending on its needs and competencies. It’s here that we’ve seen a move away from standardizing on a single orchestrator. Instead, medium and large enterprises are now opting to adopt the orchestrator that makes the most sense for each business unit.
On the flip side, we continue to see small enterprises standardize on a single orchestrator given the natural staffing and organizational constraints. There are typically not enough DevOps members to maintain more than one orchestrator, not enough developers to warrant different workflows, or simply not enough workload diversity/scale to require more than one orchestrator.
What Does This Actually Look Like?
To illustrate this point, imagine two groups within the same organization. Group A is a newly formed team responsible for developing a state-of-the-art machine learning platform that ingests and analyzes large amounts of data. Group B is an existing team responsible for managing internal tools and applications that other teams regularly rely on. Both groups want to optimize their deployment lifecycle and resource utilization but have very different technology needs. Group A might strongly advocate for Kubernetes because they plan to run Kubeflow for their ML workflows and members of the team have experience with Kubernetes. Group B manages a series of Windows-based applications and is stretched too thin to undergo a massive refactoring to containerize them. They end up looking at something like HashiCorp Nomad, because it’s easier to plug into their existing environments, has Windows support and is low operational maintenance for their small team.
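For a sense of what Group B’s path might look like, here is a sketch of a Nomad job specification that runs an existing Windows executable without containerizing it, using Nomad’s `raw_exec` driver and a constraint to target Windows nodes. The job, path and datacenter names are hypothetical:

```hcl
job "legacy-reporting" {
  datacenters = ["dc1"]

  group "app" {
    # Only place this job on Windows machines in the cluster.
    constraint {
      attribute = "${attr.kernel.name}"
      value     = "windows"
    }

    task "service" {
      # raw_exec runs the binary directly on the host, no container needed.
      driver = "raw_exec"

      config {
        command = "C:\\apps\\reporting\\service.exe"
      }

      resources {
        cpu    = 500  # MHz
        memory = 512  # MB
      }
    }
  }
}
```

The appeal for a team like Group B is that the existing binary and its host environment stay as they are; only the deployment and placement are handed to the orchestrator.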
Groups A and B might both make extremely valid arguments for why their orchestrator makes the most sense and have strong reservations about the other’s. Group A might say that, based on their testing, Kubeflow is the optimal tool for their new platform and requires Kubernetes, so it’s the best path forward. Given the company’s investment in Group A, they anticipate their staffing will grow every year to the point where they can afford multiple members focused on maintaining Kubernetes full time. Group B might retort that refactoring to containerized applications would take a long time and impact critical applications in a way the organization cannot afford. Both groups are correct, but according to conventional wisdom, we can only choose one. Which one do you choose? As hinted at by the premise of this whole blog, organizations are bucking the conventional wisdom and giving both groups the tool that they need. The idea is to enable groups to operate like microservices and handle their tasks efficiently, independent of one another. Each group benefits from the increased productivity and is better off for it.
Sounds Pretty Logical, But So What?
You might be reading this and feeling like it’s pretty obvious and not so consequential. I would argue, though, that this is a pretty large shift for enterprises. Much like the applications they supported, IT organizations took a monolithic approach for a long time, and it made sense. Even in circumstances where multiple technologies coexisted (operating systems, infrastructure, workloads and so on), these organizations likely started with one technology, made a conscious decision to adopt a new one, and have since been slowly migrating everything to it.
Think about cloud adoption as a whole. Multicloud is certainly a reality, but a lot of enterprises that are moving to the cloud select one to start. They imagine they will support others, but it’s going to be a process and they want to battle test their applications in one cloud first.
This multi-orchestrator story is different. Organizations have realized that they can adopt a number of tools based on existing needs, rather than as part of a long-term strategy. Additionally, these orchestrators can coexist for an extended period of time as long as they are meeting the needs of the business.
So now that we’ve explored this concept, what is the next logical step? I would say, try it out for yourself. Many of the orchestration tools mentioned here offer open source versions, so now is the time to try them out and see which one best fits you and your organization’s needs. At HashiCorp, our tool is Nomad, and we offer a number of Learn tracks to help you get started. Hopefully, this article piqued some curiosity about orchestrators and removed the fear of trying out more than one at a time.
Feature image via Pixabay.