
The 4 Definitions of Multicloud: Part 3 — Workload Portability

Multicloud workload portability means you can push a button and move a workload from one cloud or on-premises data center to another.
Apr 28th, 2021 7:00am by Armon Dadgar
This article is part of a four-part series on the four common definitions of multicloud:
Part 1: Data Portability
Part 2: Workflow Portability
Part 3: Workload Portability
Part 4: Traffic Portability

To bring more productive discussions on this topic into focus, and to help you understand which types of multicloud capabilities are worth pursuing, this series continues with a look at multicloud through the lens of workload portability.

Workload Portability

Armon Dadgar
Armon is co-founder and CTO of HashiCorp, where he brings his passion for distributed systems to the world of DevOps tooling and cloud infrastructure.

Multicloud workload portability means you can push a button and move a workload from one cloud or on-premises data center to another. Sort of like the idea: “Write it once, and run it anywhere.” Unfortunately, it’s very difficult to write an app once for one cloud and still be able to run that app on other clouds with no code modifications. Different vendors have different APIs, semantics, capabilities, syntax and other nuances that make workload portability, in reality, one of the most challenging forms of multicloud portability.

Achieving workload portability isn't as simple as "write once, run everywhere," but it is doable. It's inherently more complicated because it is a superset of data and workflow portability: both are required for this type of portability to work. It is a viable strategy and, depending on the business requirements, it might even be mandatory. Companies may need to implement workload portability for compliance and regulatory reasons, to enable failover between multiple cloud vendors.

Others might do it for cost savings. One example is a large hedge fund that used HashiCorp’s workload orchestrator, Nomad, to schedule portable workloads onto the cheapest cloud vendor and instance type each day, leveraging things like spot instance pricing.
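The selection step behind that kind of cost arbitrage is conceptually simple, even if the scheduling machinery around it isn't. Below is a minimal, hypothetical sketch of the decision logic; the Quote structure and the prices are invented for illustration, and in practice a scheduler like Nomad performs the actual placement.

```python
# Hypothetical sketch: pick the cheapest eligible target for a portable batch
# workload. The quotes and prices below are illustrative, not real offers.

from dataclasses import dataclass

@dataclass
class Quote:
    provider: str          # e.g. "aws", "gcp", "azure"
    instance_type: str     # provider-specific instance/VM type
    vcpus: int
    price_per_hour: float  # spot/preemptible price in USD

def cheapest_target(quotes: list[Quote], min_vcpus: int) -> Quote:
    """Return the lowest-cost quote that satisfies the workload's CPU needs."""
    eligible = [q for q in quotes if q.vcpus >= min_vcpus]
    if not eligible:
        raise ValueError("no instance type satisfies the requirements")
    return min(eligible, key=lambda q: q.price_per_hour)

# Example with made-up spot prices:
quotes = [
    Quote("aws", "c5.2xlarge", 8, 0.14),
    Quote("gcp", "n2-standard-8", 8, 0.11),
    Quote("azure", "F8s_v2", 8, 0.16),
]
print(cheapest_target(quotes, min_vcpus=8))  # -> the GCP quote in this example
```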

There are three types of workload portability:

  • Full workload portability
  • Partial workload portability
  • Dataless workload portability

Some of these types may make sense for your use case if the challenges of enablement are worth the returns in cost or capability.

Enabling Full Workload Portability

Full workload portability is the most difficult type to enable. The vast majority of applications require their data and other upstream dependencies. It's not helpful to move your web server if your database doesn't come with it.

Full workload portability means complete migration of an application and all of its dependencies and data from one environment to another.

These dependencies include any upstream APIs involved in processing requests. If you have to call back to the environment that the workload has left, it often defeats the purpose of migrating it, because of bandwidth cost and latency.

Full workload portability is best built in at the design stage of applications and platforms. To achieve it, internal services must be architected with the same requirements: it's not helpful if your app can move but its upstream dependencies can't.

You also need to decide what type of data portability you’re going to use, and the trade-offs are the same as they were in the first article in this series:

  • Continuous replication: Replicate data across environments at regular intervals, which adds a continuous operational cost.
  • Break-glass portability: Transfer data across environments only when a migration is needed, which incurs one large cost at the time of the move.

Your choice between the two should match how often you intend to migrate workloads: if you plan to port workloads frequently, use continuous replication; if you only expect to migrate on rare occasions, break-glass portability may be enough.
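Mechanically, the two options reduce to a recurring sync job versus a one-shot transfer. Here is a minimal sketch, where sync_changed_records and copy_full_dataset are hypothetical stand-ins for whatever replication or bulk-copy tooling you actually use:

```python
# Illustrative only: the two data portability patterns reduced to control flow.
# The two helpers are hypothetical stand-ins for real replication tooling.

import time

def sync_changed_records() -> None:
    ...  # placeholder: ship only the data that changed since the last sync

def copy_full_dataset() -> None:
    ...  # placeholder: bulk-copy the entire dataset to the target environment

def continuous_replication(interval_seconds: int = 300) -> None:
    """Pay a small transfer cost on every interval so a migration is instant."""
    while True:
        sync_changed_records()
        time.sleep(interval_seconds)

def break_glass_migration() -> None:
    """Pay one large transfer cost, but only when a migration is triggered."""
    copy_full_dataset()
```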

The final guidance is to avoid cloud proprietary services that can lock you into your environment. Although some might be feasible with the right abstractions, the more proprietary services that are involved, the harder portability becomes.
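One common way to keep proprietary services at arm's length is to put a thin, application-owned interface in front of them and confine provider-specific code to small adapters. The sketch below is illustrative: the BlobStore interface and the in-memory backend are invented for the example, and real adapters would wrap each provider's SDK.

```python
# Sketch of isolating a proprietary service (object storage) behind an
# application-owned interface. The adapter here is an illustrative stub; real
# ones would call the provider SDKs (boto3, google-cloud-storage, etc.).

from typing import Protocol

class BlobStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore:
    """Portable default, usable in tests or anywhere no cloud SDK is available."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive_report(store: BlobStore, report_id: str, body: bytes) -> None:
    # Application code depends only on the BlobStore interface, so swapping
    # the backing service doesn't ripple through the codebase.
    store.put(f"reports/{report_id}", body)

archive_report(InMemoryBlobStore(), "2021-04", b"quarterly numbers")
```

The design choice is simply dependency inversion: application code depends on an interface you own, so the blast radius of any one proprietary service is limited to a single adapter.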

Enabling Partial Workload Portability

Workload portability becomes more plausible when an application doesn't necessarily need its data in the same environment. A good example is stateless or frontend services: you can leave the data in the original environment and the application will still work.

However, there are often cost and performance penalties when you move data over the network for every request, as you would in this scenario. These include:

  • Expensive bandwidth: Bandwidth within one location is cheap, but bandwidth out of that location to another environment is expensive.
  • High latency: The speed of light is fixed, so traffic that crosses locations will always be slower than traffic that stays within one location.

Before you consider this form of workload portability, you need to answer this question — will your potential compute cost savings be negated by higher bandwidth costs and performance degradation?
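A rough back-of-the-envelope calculation is usually enough to answer that question. The figures below are placeholders, not real price quotes; substitute your own compute savings, request volume and bandwidth pricing.

```python
# Back-of-the-envelope check for partial workload portability.
# All figures are illustrative placeholders; plug in your own quotes.

monthly_compute_savings = 4_000.00   # cheaper compute in the new location (USD)
requests_per_month      = 50_000_000
payload_gb_per_request  = 0.0002     # ~200 KB pulled from the old location
egress_price_per_gb     = 0.09       # cross-location bandwidth price (USD)

bandwidth_cost = requests_per_month * payload_gb_per_request * egress_price_per_gb
net_savings = monthly_compute_savings - bandwidth_cost

print(f"bandwidth cost: ${bandwidth_cost:,.0f}/month")  # $900/month here
print(f"net savings:    ${net_savings:,.0f}/month")     # positive in this example,
                                                        # before any latency penalty
```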

Another factor to consider is that specialized architectures with complex caching and data management expect low latency; a partial workload migration can therefore degrade the user experience.

To enable partial workload portability, your application has to be purpose-built with the knowledge that it will be making constant requests over the wire. Multilayer caches and keeping partial or "hot" subsets of the data locally can mitigate some of the challenges. The operations and applications teams will also have to coordinate deeply on architecture and process, which is another important challenge to consider.
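In code, "multilayer caches and hot subsets" usually means a read-through cache in front of the remote data: serve what you can from a local copy and only pay the cross-environment round trip on a miss. A minimal sketch, where fetch_from_origin is a hypothetical stand-in for the call back to the data's original environment:

```python
# Read-through cache sketch for a partially ported service. The local LRU layer
# keeps the "hot" subset of records close to the compute; misses pay the
# cross-environment round trip. fetch_from_origin() is an illustrative stub.

from functools import lru_cache

def fetch_from_origin(record_id: str) -> dict:
    # Stand-in for the expensive call back to the data's original environment.
    return {"id": record_id, "payload": "..."}

@lru_cache(maxsize=10_000)           # hot subset kept in the new environment
def get_record(record_id: str) -> dict:
    return fetch_from_origin(record_id)

get_record("user-42")   # first call crosses environments
get_record("user-42")   # served locally from the cache
```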

Enabling Dataless Workload Portability

What if you have no data to move with your app? An example might be a stateless application or an application that has a mostly static dataset. This scenario is probably the simplest and most cost-effective use case for workload portability.

For a stateless application, there's pretty much no cost. For a static or rarely changing dataset, you pay the price to move it once, and if it's not a huge amount of data, that can be inexpensive as well.

Here are some example use cases where dataless workload portability might make sense:

  • Financial modeling applications: These applications often use a historical dataset of various markets. If you keep copies of that dataset in many locations and those copies are updated often enough, moving the application workload and integrating it with the local copy shouldn't be difficult.
  • Compute-intensive, large-scale simulations: Scientific high-performance computing (HPC) tasks — like protein folding simulations — often rely on a relatively small set of data, making workload portability simpler as well.
  • Test and staging environments: Although these environments might have databases, since they have mock data or static copies, you don’t care if that data is out of sync. Testing and staging data are ephemeral by nature.

All of these examples are also great candidates for cost arbitrage, especially with spot pricing. The large hedge fund mentioned earlier saves money on its financial simulations this way.

This type of workload portability doesn’t require as much data portability, leaving you to focus on workflow portability, which can usually be accomplished with automation tools for deploying across multiple clouds and hybrid clouds. A scheduler like HashiCorp Nomad or Kubernetes can be a big help here.
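With a scheduler in place, workflow portability largely comes down to submitting the same job or deployment spec to whichever cluster you are targeting. As a minimal illustration, the sketch below uses the Kubernetes Python client to apply one deployment to clusters in two clouds; the kubeconfig context names and the container image are placeholders.

```python
# Sketch: apply one deployment spec to clusters in different clouds by
# switching kubeconfig contexts. Context names and the image are placeholders.

from kubernetes import client, config

DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "portable-web"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "portable-web"}},
        "template": {
            "metadata": {"labels": {"app": "portable-web"}},
            "spec": {"containers": [{"name": "web", "image": "example/web:1.0"}]},
        },
    },
}

for context in ("aws-cluster", "gcp-cluster"):   # placeholder context names
    config.load_kube_config(context=context)     # point at the target cluster
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="default", body=DEPLOYMENT)
```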

When Hedging Against Lock-in Is Harmful

When people show interest in multicloud from the workload portability standpoint, their main motivation is hedging against lock-in, but as we’ve seen from the three types explained above, it’s rarely worth it.

The main challenges are:

Both data and workflow portability are required: Applications are tied down by their data and upstream dependencies. If those can't move in sync with the application, then your workloads won't easily move.

Your apps are limited: To build for workload portability, apps need to be designed with limited use of cloud services that might lock them into one cloud. While this hedges against lock-in, you lose many of the high-level services that make that cloud provider useful in the first place, such as native logging or a particular serverless platform.

Sometimes a better strategy is to plan for workload lock-in and purpose-build your applications on the platform they’re best suited for. Building for full- or partial-workload portability can dilute the usefulness of your chosen platform to the least common denominator. It’s impractical for most use cases and organizations, and in many cases it’s hard to achieve.

Dataless workload portability is the most realistic option of the three workload portability types. It can save some money at larger scales of cloud usage when your applications just need some basic compute power without requiring any unique features from any of the cloud vendors.

The Other Definitions of Multicloud

As this series continues, read about the other three definitions of multicloud — data portability, workflow portability, and traffic portability — to understand the trade-offs and enablement patterns for each.
