
Want Consistent Kubernetes Experience Across Clouds? Try Cluster API

Cluster API, an official Kubernetes subproject, is a huge leap forward in consistently managing Kubernetes, as organizations orchestrate application containers across multiple clouds.
May 9th, 2022 11:11am by Paul Jenkins

Featured image via Pixabay.

Paul Jenkins
Paul is a product manager on the Oracle Cloud Infrastructure team, responsible for customer adoption of the OCI Container Native Platform services. He speaks at many customer events and community meetups. Paul has more than three decades of experience in information technology, having written his first program on coding sheets and punch cards.

Toward the end of 2019, I wrote in The New Stack about how independent software vendors (ISVs) are increasingly helping their customers adopt cloud native technologies like containers and Kubernetes. A few years later, it’s clear that ISVs and their customers have truly adopted both the containerized application paradigm and a multicloud strategy — and the pace is still accelerating.

A recent Cloud Native Computing Foundation (CNCF) article found that 96% of organizations were either using or evaluating Kubernetes in 2021, and DataArt’s report “7 Trends Shaping Cloud Computing in 2022” identified multicloud as the number-one trend.

Why the increasing adoption rate? There are many reasons: Organizations are going multicloud to take advantage of different pricing models and geographic localization, as well as to avoid vendor lock-in and for redundancy in case of a service outage.

To be clear, the multicloud approach comes with some added complexity, as each cloud provider has its own security models, administrative interfaces, networking infrastructure and even resource configurations (called “shapes”). Thus, each provider requires different infrastructure knowledge and skills. For example, Amazon Web Services (AWS) has three different load balancer services. Microsoft Azure has four. Oracle Cloud Infrastructure (OCI) has two.

When you consider all of the cloud services available to ISVs and customers, the options can be overwhelming. Indeed, knowing when to use which specific cloud service — and how to best configure each of those services — is challenging enough for an organization using a single cloud provider. Navigating the combinations and permutations of multicloud can be daunting.

Can Containers Help?

What about containers? Shouldn’t containers, orchestrated by the de facto Kubernetes orchestration platform, help ISVs and organizations deploy and manage software and services across multiple clouds?

Yes and no.

Yes, because Kubernetes helps deploy applications in standardized containers, even across multiple clouds. That part is easy.

No, because Kubernetes, out of the box, doesn’t help organizations manage container clusters and cloud infrastructure. That part is hard.

Kubernetes is a complex system that relies on many components being correctly configured to keep a cluster stable. The myriad differences between provider infrastructures add even more complexity to managing the lifecycle (create, update, delete) of Kubernetes clusters.

All the major cloud-service providers offer a managed Kubernetes service (such as Amazon Elastic Kubernetes Service, Microsoft Azure Kubernetes Service and Oracle Container Engine for Kubernetes) that abstracts many of the complexities of Kubernetes and its associated infrastructure. This makes it easier to manage cluster lifecycles. However, when running multiple clusters across multiple clouds, the same issues of different requirements and different user experiences remain.

Kubernetes Cluster API

That’s where Kubernetes Cluster API (CAPI) comes in. CAPI is a Kubernetes subproject focused on providing APIs and tooling to simplify the process of provisioning, upgrading and operating multiple Kubernetes clusters on multiple clouds — and even on premises.

CAPI provides a consistent experience and lifecycle control for Kubernetes clusters everywhere by defining a common set of operations. What’s more, CAPI provides a default implementation for each major cloud provider — and this default can usually be deployed with minimal effort. (The default implementation can be customized or even replaced entirely for organizations that have very specific requirements.)

What’s interesting is that CAPI uses Kubernetes itself to manage Kubernetes. CAPI employs a management cluster to create and manage workload clusters.

The requirements for using CAPI are straightforward. Start with a Kubernetes cluster — it could be a local cluster created with kind or Rancher Desktop, or an existing managed service. Administrators then install CAPI and initialize it with the infrastructure provider components for each target cloud.
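Concretely, bootstrapping might look like the following — a sketch using kind and CAPI's `clusterctl` CLI with the Oracle Cloud Infrastructure provider (the cluster name is illustrative, and provider credentials would normally be configured via environment variables beforehand):

```shell
# Create a local management cluster with kind (name is illustrative).
kind create cluster --name capi-mgmt

# Install the core Cluster API components plus the infrastructure
# provider for the target cloud -- here, Oracle Cloud Infrastructure.
clusterctl init --infrastructure oci
```

Running `clusterctl init` again with a different `--infrastructure` value adds provider components for additional clouds to the same management cluster.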

The management cluster then provisions workload clusters — the clusters that will actually run deployed applications — on the desired cloud services, along with the supporting infrastructure. For Oracle's cloud, for example, CAPI will generate the virtual cloud network (VCN), subnets, security lists, internet gateway and service gateway.
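As a rough illustration, a workload cluster is described declaratively by pairing a core Cluster object with a provider-specific infrastructure object. The manifest below is a simplified sketch for the OCI provider — the names and the compartment OCID are illustrative, and a real manifest (typically produced by `clusterctl generate cluster`) carries many more fields:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster           # illustrative name
spec:
  infrastructureRef:           # points at the provider-specific object below
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: OCICluster
    name: demo-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: OCICluster
metadata:
  name: demo-cluster
spec:
  compartmentId: ocid1.compartment.oc1..example   # illustrative OCID
```

Applying a manifest like this to the management cluster prompts the infrastructure provider to reconcile the VCN, subnets, gateways and other resources described above.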

Digging a bit deeper, CAPI uses custom resource definitions (CRDs) to extend the Kubernetes API and define the compute environment for each cloud provider. Four of the most important CRDs are Machine, MachineSet, MachineDeployment and MachineHealthCheck:


A Machine is the declarative specification for a compute instance hosting a Kubernetes node. The specification lets administrators select the appropriate provider-specific compute shapes and features for the targeted workload. For example, lightweight workloads on OCI could use small Arm-based instances, while larger workloads could use bare metal instances with up to 128 OCPUs. These are provider-specific definitions and are not portable between providers.


A MachineSet maintains a stable set of running Machines at any given time. A MachineSet works similarly to a core Kubernetes ReplicaSet.


A MachineDeployment provides declarative updates for Machines and MachineSets. A MachineDeployment works similarly to a core Kubernetes Deployment, reconciling changes to a Machine specification by rolling the change out across two MachineSets, the old and the new.


A MachineHealthCheck defines the conditions under which a Kubernetes node should be considered unhealthy. Unhealthy nodes are removed by deleting the corresponding Machine, while the MachineSet ensures that a new Machine will be created to replace it.
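Tying these together, a worker pool and its health policy might be declared roughly as follows — a simplified sketch with illustrative names, versions and timeouts, not a complete manifest (a real MachineDeployment template also references bootstrap and infrastructure objects, omitted here for brevity):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: demo-workers            # illustrative name
spec:
  clusterName: demo-cluster
  replicas: 3                   # desired number of worker Machines
  template:
    spec:
      clusterName: demo-cluster
      version: v1.24.0          # illustrative Kubernetes version
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: demo-workers-unhealthy-5m
spec:
  clusterName: demo-cluster
  selector:
    matchLabels:
      pool: demo-workers        # illustrative label on the worker Machines
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s             # node NotReady for 5 minutes => unhealthy
```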

There are, of course, additional aspects of CAPI that help manage multiple Kubernetes clusters, but the concept of Machines is core to making it possible. Abstracting the underlying infrastructure and managing Machines in a similar way to Kubernetes deployments and ReplicaSets via Kubernetes-style APIs provides a straightforward and consistent experience wherever clusters are run.

CAPI is a Kubernetes Special Interest Group (SIG) project. Currently, there are CAPI providers for more than 20 cloud services.

A Leap Forward in Managing Kubernetes

As the use of Kubernetes increases and more organizations are choosing a multicloud supplier strategy, the need for a consistent way to manage Kubernetes becomes more important. Cluster API is a huge leap forward in achieving this.

For more details on Cluster API, check out the repository. You can see the cluster-api-provider-oci (CAPOCI) for Oracle Cloud Infrastructure here.

TNS owner Insight Partners is an investor in: Pragma, The New Stack.