Multicloud Challenges and Solutions
Mirantis sponsored this post.
The world of infrastructure has changed dramatically over the last ten years, with more organizations distributing their workloads across multiple platforms, both on-premises and in the cloud. This shift is changing the way we manage workloads, and it brings an increase in complexity and risk. The distribution of workloads across multiple platforms is referred to in many different ways, with multicloud and hybrid cloud being the most common terms.
Multicloud, at its simplest, is about deploying an application’s components across two or more cloud infrastructure platforms. The platforms could be two public cloud service providers, or two private clouds, or some combination thereof. Hybrid cloud is much the same, except it always refers to the combination of public and private cloud.
Multicloud and hybrid cloud application design patterns can take many forms, but two are paramount:
Components hosted on different clouds — The most common and simplest model involves separating the components (application layers) so that each distinct component is deployed on a single provider, with the whole application spread across multiple clouds. For example, the application’s front end might reside on a public cloud, its middleware on a private cloud, and its database on an on-premises bare-metal cluster.
To elaborate: this example might involve a heavily-trafficked frontend-centric web application, perhaps frequently updated, that makes sparing calls to backend resources. Mounting the application frontend on the public cloud enables rapid, dynamic scaling of this resource in response to traffic, and can simplify temporary (but resource-intensive) procedures like Blue/Green deployments of frequent new releases. Putting middleware on a private cloud enables similar, but more constrained flexibility, with tighter security. Running the database on bare metal provides the highest tunability and performance, while offering maximum protection for critical and/or regulated data.
Single components, distributed across multiple clouds — Less often, we take a single application component and spread it across multiple clouds. The challenge with this model is that it introduces latency and other networking issues within a single application component.
For example, as organizations scale up use of public cloud services and seek to cost-optimize, they often encounter situations where the resources they need are unavailable (e.g., region nearing capacity, no “spot instances” of the desired type are available, etc.). In such cases, technologies like Kubernetes federation can be used to enable container workloads — even, in principle, peer microservices scaling out horizontally to perform a single application function — to “jump the gap” between public clouds. Writing microservices and applications that thrive on such an architecture, however, means anticipating a range of latency and other conditions not often encountered by apps running on a single infrastructure.
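The spillover behavior described above can be sketched in a few lines. This is a hypothetical placement function, not any real federation API: the cluster names, capacities, and latencies are invented for illustration, and a real system (e.g., Kubernetes federation) would make this decision continuously against live cluster state.

```python
# Hypothetical sketch: place replicas across clusters, preferring the
# lowest-latency cluster and "jumping the gap" to another cloud when the
# preferred cluster's spare capacity runs out. All figures are illustrative.

def place_replicas(clusters, replicas_needed):
    """Greedily assign replicas to clusters in order of ascending latency,
    spilling over to the next cluster when spare capacity is exhausted."""
    placement = {}
    remaining = replicas_needed
    for cluster in sorted(clusters, key=lambda c: c["latency_ms"]):
        if remaining == 0:
            break
        take = min(cluster["spare_capacity"], remaining)
        if take > 0:
            placement[cluster["name"]] = take
            remaining -= take
    if remaining > 0:
        raise RuntimeError(f"insufficient capacity: {remaining} replicas unplaced")
    return placement

clusters = [
    {"name": "aws-us-east-1", "spare_capacity": 4, "latency_ms": 12},
    {"name": "gcp-us-east4", "spare_capacity": 10, "latency_ms": 18},
]

# Six replicas won't fit on the preferred cluster, so two spill onto the
# second cloud — exactly the condition an application must tolerate.
placement = place_replicas(clusters, 6)
```

Note that the two groups of replicas now talk to each other across a cloud boundary, which is where the latency assumptions mentioned above come in.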
Helping developers more easily consume resources and services from multiple cloud providers provides a number of advantages, including the following.
- Leverage — You want leverage over your suppliers, so you can negotiate the best possible rates and ensure the best possible service levels. If you are locked into a single supplier (or if there is a monopoly), you lose that leverage and become vulnerable to rising costs and declining service levels.
- Price/Performance Efficiency — Ability to access multiple public clouds lets you continually optimize price/performance — not just for workload hosting, but for all the other performance factors and costs associated with serving an application (for example network egress costs, interconnectivity, latency). But maximizing your freedom to cost- and performance-optimize, by moving components and workloads among providers and infrastructures, means limiting your dependence on the strongly-differentiated features and services of the platforms and providers you use. Kubernetes and containers can play important roles here (more below), forming a consistent substrate spanning multiple clouds and infrastructures.
- Risk Mitigation — Expanding on the above theme, you need to be able to move your eggs to another basket easily. Cloud provider pricing is complex, hard to observe and predict, and can change with little notice. Services can be retired. Provider policies can change, too — and providers can be capricious about enforcement, with Terms of Service agreements leaving customers little recourse in disputes. So it makes good sense to plan ahead, provide redundancy, and ensure that critical databases and other hard-to-move components aren't locked to specific providers.
- Location — One critical service that public clouds provide is the ability to put workloads and data in specific regions. The ability to exploit location enables access to lucrative markets — it’s critical to application performance (e.g., minimizing latency), cost of storage and transport, and (in some cases) to availability of specific services and scale.
- Regulatory compliance options — Control over workload and data location (both data at rest and data in motion) is also crucial to implementing a jurisdictional strategy enabling regulatory compliance, data sovereignty, and data protection. The ability to conform to jurisdictional and customer requirements aligned with GDPR, Privacy Shield and other regulations is table stakes for organizations seeking to serve global markets.
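The price/performance point above is worth making concrete: compute price alone can be misleading once egress and other serving costs are included. Here is a minimal sketch of that comparison; the provider names and all prices are made-up illustrative figures, not real rates.

```python
# Hypothetical sketch: total monthly cost per provider, including network
# egress. "cloud-a" and "cloud-b" and all rates are invented for illustration.

def total_monthly_cost(provider, compute_hours, egress_gb):
    return (provider["compute_per_hour"] * compute_hours
            + provider["egress_per_gb"] * egress_gb)

providers = {
    "cloud-a": {"compute_per_hour": 0.10, "egress_per_gb": 0.09},
    "cloud-b": {"compute_per_hour": 0.12, "egress_per_gb": 0.05},
}

# A chatty workload: modest compute (one node-month), heavy egress.
costs = {name: total_monthly_cost(p, compute_hours=720, egress_gb=2000)
         for name, p in providers.items()}
cheapest = min(costs, key=costs.get)
```

In this made-up example the provider with the cheaper compute loses once egress is counted, which is the kind of trade-off multicloud freedom lets you act on.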
Strategy is required to ensure that multicloud delivers these benefits without creating extra challenges for your developers, DevOps, and operations teams.
- Consistency is critical. By ensuring a consistent application platform across private and public clouds, you can help ensure that applications run anywhere without changes, and that configurations, operational automation, CI/CD, and other ancillary codebases can be maintained in a single channel. Kubernetes is currently the best available platform for terraforming public and private cloud infrastructures, as well as bare metal — providing a host of abstraction mechanisms for insulating workloads from the underlying infrastructure, keeping them alive despite infrastructure issues, and permitting rapid, efficient, low-impact application updates, scaling, and lifecycle management.
- But Kubernetes alone isn't enough. Organizations need consistent Kubernetes on any infrastructure: easily customized, easily scaled, fully observable, batteries-included (but not overly opinionated), secure, universally compatible, operator-friendly application environments, provided from a central source. A single cluster model speeds operations and enables container, config, and automation portability (see above), while also improving security (eliminating unknowns and variations, and thus reducing attack surface), facilitating policy management, and simplifying regulatory compliance.
- Using a centrally administered system to deliver, update, and manage clusters across your multicloud also opens the door to big productivity gains: a single pane of glass for observability and manual operations, fully automatic and non-disruptive updates, and a single set of APIs for building self-service applications and delivering clusters on demand, wherever you need them. By manipulating diverse public- and private-cloud infrastructures via "provider" middleware, the central command-and-control facility helps ensure that you gain the benefit of platform- and public-cloud-specific services, while also enforcing consistent configuration and behavior of the Kubernetes clusters on which your applications run.
- Freedom of choice is consistent with this model. A centrally-administered multicloud infrastructure should leave your operators and developers free to choose among public and private cloud alternatives, while also supporting use of a range of operating systems, and a host of automation, CI/CD, security and other tools.
- Centralized monitoring and capacity management are also important. They give you a clear understanding of how your systems are performing and what resources they are consuming, so you can make good decisions about where to run your applications.
- Ease of use should also be high on the list of core requirements. If the systems are overly complex, or force developers to learn new and unfamiliar tooling, adoption will suffer badly.
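The "single cluster model" idea in the list above can be illustrated with a small sketch: a self-service request yields the same baseline cluster specification on any provider, with only infrastructure placement varying. The field names and values below are invented for illustration; they do not correspond to any particular vendor's API.

```python
# Hypothetical sketch of a consistent cluster model: every cluster a central
# system delivers shares the same baseline, regardless of provider. All field
# names are illustrative, not a real product's schema.

BASELINE = {
    "kubernetes_version": "1.27",   # illustrative version pin
    "monitoring": True,             # observability on by default
    "policy_profile": "restricted", # uniform security policy everywhere
}

def cluster_spec(provider, region, node_count):
    """Return a cluster spec: shared baseline plus provider-specific placement."""
    spec = dict(BASELINE)
    spec.update({"provider": provider, "region": region, "node_count": node_count})
    return spec

aws_cluster = cluster_spec("aws", "us-east-1", 5)
vsphere_cluster = cluster_spec("vsphere", "dc-west", 5)

# Everything except placement is identical across clouds, so configs,
# automation, and policy tooling can target one model, not N.
shared = {k for k in aws_cluster
          if k not in ("provider", "region")
          and aws_cluster[k] == vsphere_cluster[k]}
```

The point is that attack surface, compliance checks, and automation all operate against one known shape, as the list above argues.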
What You May Give up by Choosing Multicloud
Of course, there are some downsides to choosing a multicloud strategy and ensuring the use of a common platform to deploy and manage consistent Kubernetes (and potentially, applications running on top of Kubernetes) across multiple platforms. Chief among these is that you may not be able to (directly) utilize all those cool add-on services offered by public (and private) cloud providers, including their versions of “one-click Kubernetes.”
It’s hard to argue against convenience and no-cost/low-cost/low-friction start-up. But not impossible. Consider the following.
- The effort required for an individual to trial a public-cloud-hosted Kubernetes solution isn't representative of the effort required from an organization intending to deliver multiple dev, test and production clusters of the same type, at scale. The latter lift is much, much bigger: dealing with the platform's take on identity and access management, populating new clusters with appropriate groups and users, and managing (and cost-optimizing) a fast-growing flock of Kubernetes clusters (plus their associated networks and ancillary service configurations), potentially spread across multiple provider regions. And consider that all of this is potentially flavored differently from conceptually similar but code-incompatible setups on other providers and platforms. Take-away: in the real world, at real scales, this model isn't multicloud-friendly. It only works elegantly if you buy in deeply to a single public or private cloud platform.
- Depending on the centralized deploy/update/operate/observe solution you select, the friction of delivering and managing clusters at scale across multiple providers and platforms can start very low, and (via simple self-service automation running against the solution’s API, for example) be brought nearly to zero. You can have “one-click clusters” this way, too, even for production.
- Again, depending on the centralized solution you select, you should be getting indirect benefits from many core public cloud services. This is because your solution vendor has engineered (and will continue updating) their provider-specific configuration and deployment middleware to make optimal use of each public cloud provider’s service portfolio, where that makes sense. The difference is that you don’t need to figure out how to use (and automate) AWS Route53 vs. Azure DNS vs. OpenStack Designate vs. VMware etc. around a Kubernetes cluster and its ingress, to get production clusters up on multiple platforms.
- Cloud provider services remain fully accessible under a centralized Kubernetes regime, and can be used freely. You can have your AWS Lambda functions and centralized, multicloud Kubernetes administration, too.
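The provider-middleware idea in the list above is essentially an adapter pattern: one interface for an operation like "create a DNS record for cluster ingress," with per-provider adapters hiding the differences between services like Route53, Azure DNS, or Designate. The sketch below is purely illustrative; a real middleware layer would call each provider's actual API rather than return strings.

```python
# Hypothetical sketch of provider middleware as an adapter pattern. The
# classes and method names are invented for illustration; real middleware
# would invoke Route53 / Azure DNS / Designate APIs here.

from abc import ABC, abstractmethod

class DnsProvider(ABC):
    """One interface for 'publish an A record', regardless of cloud."""
    @abstractmethod
    def create_record(self, name: str, ip: str) -> str: ...

class Route53Adapter(DnsProvider):
    def create_record(self, name, ip):
        # A real adapter would issue a Route53 record change request.
        return f"route53: UPSERT {name} A {ip}"

class AzureDnsAdapter(DnsProvider):
    def create_record(self, name, ip):
        # A real adapter would call the Azure DNS record-set API.
        return f"azure-dns: PUT {name} -> {ip}"

def expose_ingress(dns: DnsProvider, cluster: str, ip: str) -> str:
    # Callers never branch on which cloud they're on; the adapter does.
    return dns.create_record(f"{cluster}.example.com", ip)
```

Because `expose_ingress` only sees the common interface, the same cluster-delivery automation runs unchanged on every platform, which is the benefit the bullet above describes.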
The most important take-away: low-friction-of-entry services (including Kubernetes offerings) seem to make start-up friction-free. But the more you invest, and the deeper you dig into provider service portfolios without the abstraction and mediation provided by a centralized solution, the more locked-in you become. Getting to multicloud then means duplicating (with differences) the effort of getting started at enterprise scale on each provider, and maintaining all the parallel channels of tooling you create to do so. "Lifting and shifting" any part of your operations and business from one provider to another becomes a many-layered challenge.