The Five Styles of Workload Orchestration

5 Jul 2016 7:52am

Workload orchestration is what distinguishes the modern way of providing digital services from how it was done even five years ago. Many have asked, what’s so new about this stack compared with all the others? The answer is the orchestration of workloads. We can now think of the functions our servers perform and the services we provide to customers as workloads, rather than applications with brands, or virtual machines with golden masters and overwrite protection.

Key Forces that Enabled Orchestration

The phenomenon of orchestration came about through the confluence of several evolutionary industry trends:

  • Virtualization unshackles programs from the hardware of the servers that run them.
  • Web services enable communication between program functions, without the necessity of middleware.
  • Data stores free up huge pools of data from having to be processed by using an itinerary specific to any one application, or any number of them.
  • Cloud dynamics makes it possible for the resources required by a program to be provisioned on-demand, at precise increments.
  • Containerization shrink-wraps programs, and the few dependencies they require, into tight bundles that can be hosted by their own operating environment.
  • Software-defined networking and storage make the environment in which workloads are provisioned and deployed a plastic, pliable, fault-tolerant mesh that’s adaptable to the demands of the jobs at hand.

These six forces brought about the new reality of truly distributed computing. What’s shocking is that they didn’t conspire to do so intentionally. All six of these information sciences were created to serve their unique purposes. But once they were brought together in the data center, they produced a completely new opportunity to build and manage systems exclusively and entirely around the work that can be done.

All this being said, orchestrated workloads must coexist with traditional applications, and even legacy software. Organizations seeking to implement workload orchestration within their data centers must develop a strategy for the old and new systems to work together with reasonable efficiency, no loss in productivity, and no degradation in security.

Five Styles of Workload Orchestration

Configuration management, continuous integration/continuous deployment (CI/CD) and on-demand cloud service provisioning play roles in modern enterprises to varying degrees. As a result, organizations may adopt different orchestration strategies, and may even employ multiple strategies for different roles and separate divisions.

1. The Docker-Based System

Docker’s native orchestration system, from the vantage point of usability, is not complex at all. The system is command-line based, and Linux command lines are familiar to both developers and administrators. But its underlying assumption is that it’s being used by developers in the course of staging their work.

The part of Docker that runs the containers themselves is called a daemon. The component that includes this daemon is called Docker Engine. Its purpose is to run containers within the environment currently set up for it. By default, that’s the typical memory address space of one machine (physical or virtual).

The distribution part of the equation enters in with Docker Swarm (which Docker recently incorporated into the core Docker engine). It collects multiple address spaces together into a central cluster, performing an act of “reverse virtualization” that abstracts the complexity of that cluster from Docker Engine. As a result, the engine perceives the space created by Swarm as just a very large pool, or a huge computer.

A third component, Docker Compose, introduces one or two elements from software configuration management (SCM). A text file, usually called docker-compose.yml, uses YAML to describe the services that make up the application hosted in containers. This file is spun “up” by the Docker Engine, which executes commands based on the file’s contents to produce a fully active running environment.
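A minimal Compose file along these lines might look as follows; this is a sketch, and the service names and images are hypothetical:

```yaml
# docker-compose.yml — a hypothetical two-service application
version: '2'
services:
  web:
    build: .            # build the web image from this directory's Dockerfile
    ports:
      - "5000:5000"     # publish the web service on the host
    depends_on:
      - redis           # start the redis service before this one
  redis:
    image: redis        # pull the stock Redis image from a registry
```

Running `docker-compose up` against a file like this instructs the engine to build, create, and start both services as linked containers.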

Companies in the business of producing orchestration systems might debate whether these tools constitute a true method for orchestration. However, the commands used by Compose to start, monitor, stop, and otherwise access running services within containers are regular enough to be automated. By that standard, orchestration is indeed possible. A large focus for us in this ebook has been what companies and users determine to be orchestration, and we broke down what our survey respondents said.

This chart shows what functionality end users expect from container orchestration and Containers as a Service (CaaS) offerings. For CaaS, respondents often expect orchestration of more than just containers.

2. Scheduling Multitenant, Microservices Platforms

Some of the most recognized orchestration platforms today are the ones that enable microservices. One of the first organizations to attempt such an architecture was Google, which built an orchestration system called Borg. Its principles were passed on to an open source project called Kubernetes, which Google marshals.

Like Swarm, Kubernetes maintains a pooled, distributed computing environment made up of a cluster of servers, each of which (virtual or physical) is considered a node. Within these nodes, Kubernetes recognizes groups of containers called pods, whose containers run simultaneously and may share resources.

Pods play a significant role in establishing strategies for distributing workloads. In a microservices-oriented system, individual services and functions may be utilized by more than one application simultaneously. Dividing functions with similar resource needs into pods enables a kind of management scheme where functions within a pod re-provision their resources based on the demands of the applications running at the time.
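A pod grouping two cooperating containers can be declared in a short manifest; the sketch below uses hypothetical names and images, but illustrates how the containers in a pod are scheduled together and share the pod’s network namespace:

```yaml
# A hypothetical pod grouping a web server with a logging sidecar
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx            # the primary service container
    ports:
    - containerPort: 80
  - name: log-collector
    image: fluentd          # sidecar running alongside, sharing the pod's network
```

Both containers start and stop together, and Kubernetes schedules the pod as a single unit onto a node.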

This changes the function of load balancing, which began as a matter of replicating servers and became one of replicating applications. Now, applications don’t have to scale as a whole just because their functions do.

Another open source tool widely used for pooling together container resources and scheduling the execution of workloads on that pool is Apache Mesos. Mesosphere produces a commercial orchestration environment that it describes as a “Datacenter Operating System” (DCOS), including a real-time visual representation of multiple workloads running on a cluster of servers. Marathon, one of the tools that Mesosphere produces for scheduling container workloads on its DCOS platform, was originally offered as an alternative to Kubernetes. Today, the company includes both orchestration systems with DCOS and gives customers the choice of using either or both in real-world use cases.
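Marathon describes the workloads it schedules with JSON application definitions. A sketch of such a definition might look like the following; the application ID, image, and resource figures here are hypothetical:

```json
{
  "id": "/web",
  "cpus": 0.5,
  "mem": 128,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE"
    }
  }
}
```

Posting a definition like this to Marathon asks it to keep three instances of the container running somewhere on the Mesos cluster, restarting or rescheduling them as nodes fail.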

3. Jenkins-Integrated Change Control Platforms

Unto itself, Jenkins is not a container orchestrator. It’s an open source CI/CD platform that automates the processes of development, testing and staging before deploying applications in production. More to the point, Jenkins is intended to regularize the work that people need to do in bringing new software to fruition. In Jenkins, stages of work that can be automated and scripted are called pipelines.

Originally, workloads in Jenkins and other CI/CD systems were encapsulated as virtual machines. As organizations began adopting Docker, it soon became clear that the human work processes associated with containers did not map one-to-one with the maintenance of VMs. CloudBees, which produces the commercial version of Jenkins, offers a plugin called Workflow that redefines Jenkins’ notion of continuous delivery to be more open to containerization. A companion plugin connects Workflow to Docker, enabling entry points for pipelines using scripts written in the Groovy language, scripts that are more commensurate with the types of processes container developers perform daily.
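A scripted pipeline of this kind might resemble the following Groovy sketch; the stage names, image name, and build commands are hypothetical:

```groovy
// Jenkinsfile — a hypothetical scripted pipeline using the Docker plugin
node {
    stage 'Checkout'
    checkout scm                              // fetch the source under version control

    stage 'Build image'
    def app = docker.build('myorg/myapp')     // build a container image from the Dockerfile

    stage 'Test'
    app.inside {
        sh 'make test'                        // run the test suite inside the container
    }

    stage 'Push'
    app.push()                                // push the image to the configured registry
}
```

Each stage becomes a checkpoint that Jenkins can visualize and gate, while the actual build and test work happens inside containers rather than on the Jenkins host.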

It’s important to note here that nothing about staging workloads in this way, using CloudBees Jenkins, is incompatible with the typical use cases for Kubernetes, CoreOS Tectonic (a commercial Kubernetes distribution) or Mesosphere’s Marathon. While CI/CD pipelining focuses on automating the development process using containers, these other orchestration platforms focus on managing the deployment and day-to-day operation of containers.

However, within enterprises today, adherence to the ideal of CI/CD goes hand-in-hand with subscribing to the mandates of CM. Jenkins is often integrated with CM tools such as Chef, Puppet, Salt and Ansible. While vendors of these SCM tools portray themselves as compatible with orchestration tools, in practice today, many enterprises perceive them as duplicative and don’t use them together.

One alternative orchestrator designed to be compatible at the outset with Jenkins and its pipelined workflows is Docker Cloud. It presents workflow automation in a very straightforward, Amazon-like style that IT administrators may find comfortable.

4. Software Configuration Management-Agnostic Integration Platforms

In almost every software industry, some platform sells itself on the virtues of being “end-to-end.” For decades, there have been “lifecycle management” tools, and many SCM packages have been marketed as such, for better or for worse. But the fact that such platforms are often used together, simultaneously, speaks to the underappreciated fact that “ends,” within organizations, tend to move.

Shippable produces a cloud-based continuous integration platform that integrates with Docker, but which avoids Jenkins. It also bypasses typical configuration management through the way it deploys container images. Choosing to perceive containers as persistent rather than transient, the Shippable approach is to monitor the actual resources used by containers and to adjust their configurations accordingly. Thus, rather than automating configuration by way of scripts, Shippable automates configuration by dynamic analysis.

5. Microsegmented, Hybrid VM Platforms

Because containerization platforms have no place for traditional virtual machines, it was inevitable that VMware would directly address the demands of Docker users. The company devised a kind of “embrace and extend” policy that involves wrapping Docker containers in an envelope called “jeVM,” such that its existing vSphere environment will accept it just like any other VM.

By including the company’s Photon operating system (OS), containers can be hosted by VMware hypervisors instead of a Linux kernel. Orchestration in this scheme becomes a matter of staging container instances, like VMs, in the existing environment. While this rules out microservices as the current model envisions them, this mode does have the advantage of integrating instantly with an enterprise’s existing CM and CI/CD platforms. However, the relative scalability of container-based applications under this model has yet to be proven.

CoreOS, Docker, Mesosphere, and VMware are sponsors of The New Stack.

Feature image via Pixabay.

This post is part of a larger story we're telling about the state of the container ecosystem.

Get the Full Story in the Ebook
