KubeCon EU: The Case for Bare Metal

11 May 2021

The great shift to the cloud has often “clouded” the critical role that on-premises infrastructure — and more specifically — bare metal servers can play in the cloud native world.

It may seem, for example, that organizations in traditional industries that still maintain many of their IT resources on legacy infrastructure and in traditional data center environments have not been able to keep up with more agile cloud-only startups. That assumption, however, is far from the truth. For many applications and deployments, managing bare metal servers is a requirement for deploying and running a range of Kubernetes environments, including edge computing and database-centric workloads.

The viability, and in some cases the necessity, of maintaining on-premises or bare metal servers versus opting for a purely cloud computing-only model for cloud native environments was the subject of a number of talks and keynotes during the Cloud Native Computing Foundation's KubeCon + CloudNativeCon EU, which took place last week.

“There’s really an opportunity for those who, perhaps, were using on-prem because they decided not to have public cloud or because they had a special reason to or often because they went to the public cloud and then found that actually, at the scale they were running, it made more sense to come back off the public cloud, and to run on-prem,” Mark Coleman, senior director of developer relations at Equinix Metal, said during his talk “Taking Bare Metal to the Clouds with Tinkerbell.”

“And what we’re seeing is that with some of the developments…there was real opportunity here for those ‘laggards’ — if you will, which I obviously don’t agree with — to become the early adopters of a new wave of cloud native innovation. They are taking advantage of a lot of tooling in the cloud native ecosystem to be able to run their own hardware at scale, without having to use a public cloud.”

Deutsche Telekom, hardly a cloud native “laggard” as a wide-scale Kubernetes adopter, has managed to rely on bare metal servers at different locations across Germany. For bare metal host provisioning in a cloud native on-premises environment, the telecom used Metal³ (Metal Kubed) and Ironic. Once the hosts were provisioned, Git and the Kubernetes API were used to manage the bare metal servers that host the Kubernetes clusters.
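In the Metal³ workflow, each physical machine is registered with the cluster as a Kubernetes custom resource that the Ironic-backed controller then provisions; committing such manifests to Git is what enables the GitOps-style management described here. Below is a minimal sketch of a Metal³ `BareMetalHost` manifest — the names, addresses, and image URLs are illustrative placeholders, not Deutsche Telekom's actual configuration:

```yaml
# Hypothetical BareMetalHost resource: registers one physical server
# with Metal³ so Ironic can power it on and write an OS image to disk.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0                     # illustrative host name
  namespace: metal3
spec:
  online: true                       # power the machine on
  bootMACAddress: "00:1a:2b:3c:4d:5e"
  bmc:
    address: ipmi://10.0.0.10        # out-of-band management endpoint
    credentialsName: worker-0-bmc-secret  # Secret holding BMC username/password
  image:
    url: http://images.example.com/node-image.qcow2
    checksum: http://images.example.com/node-image.qcow2.md5sum
```

Because the manifest is plain YAML, it can live in a Git repository and be applied by a reconciliation loop, which is how “Git and the Kubernetes API” end up managing the hardware itself.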

Access to the bare metal server infrastructure was not so much about “managing” it properly; the project could better be described as a “self-management equation in which infrastructure takes care of itself,” said Vuk Gojnic, squad lead for container and cloud native engine at Deutsche Telekom Technik, during his keynote, “How Deutsche Telekom Technik Built Das Schiff for Sailing the Cloud Native Seas.”

“Most of the time… our work [with bare metal servers] actually focused on creating a glue for all of this in the form of layout that enabled us to cover our multi-site, -infrastructure and -cluster scenario,” Gojnic said.

The Edge Case

Bare metal selection and management are also often seen as key to optimizing edge computing performance. Server hardware selection is essential, for example, when a machine learning application must process massive amounts of data from low-latency feeds from decentralized databases. In the case of IoT, a connected device on a factory floor may have a very low-latency, high-bandwidth connection, but that connection is worth little if the device lacks the CPU and memory performance needed to carry out computationally intensive tasks at optimal speed.

“What we’re starting to see this year, in particular, is edge really becoming a thing finally so we’re seeing all sorts of companies wanting to put hardware closer to their users, whether that’s in your office, whether that’s in your store, whether it’s in your baseball stadium. There are many reasons why having a data center of sorts, nearer to your user would be a good idea — and of course what that means is that you’re going to be running that in a way that you perhaps wouldn’t be able to do in a public cloud, you’re going to be running your own hardware,” Coleman said. “And I think what this creates is sort of interesting: there’s this sort of idea that anyone who didn’t go to the cloud is potentially a laggard — which I don’t agree with.”

Down to the Chip

The performance of applications running on Kubernetes clusters ultimately depends largely on the CPU performance of the underlying bare metal servers and devices. This need accounts for a number of recent CPU-related announcements and developments for bare metal applications running in cloud native environments.

For edge devices and industrial applications, SUSE recently introduced the Longhorn 1.1 distributed storage platform, which supports the ARM64 CPU architecture designed for low-power edge devices (Longhorn was originally created by Rancher, which SUSE acquired). Longhorn 1.1 also extends storage capabilities for edge devices.

For Kubernetes-native distributed block storage, Longhorn “would be relevant for edge and industrial computing, where Arm is obviously very present,” Thomas Di Giacomo, chief technology and product officer, SUSE, said during his keynote “Open Source Innovation — Success Through Failure.”

Open source Tinkerbell, created to provision bare metal servers and other devices for cloud native environments, was updated in April after becoming a CNCF Sandbox project in November. Tinkerbell automates bare metal provisioning regardless of whether the servers and devices are located in data centers, in public clouds or at remote edge sites.

Equinix Metal’s Coleman explained during his keynote how the project should also benefit from a tighter collaboration with CNCF.

“We’re starting to see more and more partners from the industry coming in and adopting Tinkerbell, and having that open and clear governance through the CNCF is becoming more and more important,” Coleman said.

Moving even further down the stack, RISC-V, an open standard instruction set architecture (ISA), is extending into bare metal applications for Kubernetes. During the keynote “The Lowest Layer of the Cloud Native Landscape,” Daniel Mangum, senior software engineer at Upbound, and Carlos Eduardo de Paula, cloud architect at Red Hat, demoed what de Paula said was “the first RISC-V fully-featured computer in a PC form factor.”

Observing that heterogeneous hardware requires an open source ISA to keep software toolchains consistent, Mangum noted that “the days of simply deploying your workloads on new similarly priced machines, and seeing drastic improvements, could be coming to an end.”

“In short, hardware is going to become more and more heterogeneous,” Mangum said.

The developments follow bare metal initiatives for cloud native and Kubernetes among major cloud and software vendors, especially in support of edge applications. They include Amazon Web Services’ Elastic Kubernetes Service (EKS), for Kubernetes management on bare metal servers in data centers as well as in the AWS cloud, and the widening bare metal and edge scope of Google Anthos, VMware Tanzu and Red Hat OpenShift, along with, as Coleman pointed out, efforts by lesser-known yet “important” vendors such as Kubermatic.

“Running your own infrastructure is sexy again,” Coleman said.

