IoT Edge Computing / Kubernetes

KubeCon EU: Red Hat Expands OpenShift to the Edge with Advanced Cluster Management

17 Aug 2020 11:05am

Just because you’re operating at the edge — in a warehouse or a cell tower on the edge of civilization — doesn’t mean you want your technology to be as rustic and remote as its surroundings. In recognition of that fact, Red Hat has released new features in Red Hat OpenShift 4.5 and Red Hat Advanced Cluster Management for Kubernetes at this week’s KubeCon + Cloud NativeCon Europe virtual conference that are “aimed at helping enterprises launch edge computing strategies built on an open hybrid cloud backbone,” according to a company statement.

First released in beta earlier this year alongside OpenShift 4.4, the Advanced Cluster Management feature is now generally available with this release. It provides users with a single, consistent view across the hybrid cloud — particularly useful for highly scaled-out edge architectures. Alongside this, OpenShift 4.5 introduces the ability to combine supervisor and worker nodes, enabling three-node clusters that scale down the footprint of a Kubernetes deployment at the edge without compromising on capabilities — the only real limit being the hardware deployed at the edge. With 4.5, the ability to run virtual machine workloads on OpenShift also becomes generally available, further extending what Kubernetes deployments at the edge can do.

In addition to these features, explained Nick Barcet, a senior director of technology strategy at Red Hat, in an interview, the company has also focused on specific industries, given how much edge use cases vary from one industry to the next.

“What we’ve observed is that the edge is really different per vertical and per use case, so rather than delivering a set of features that could eventually satisfy everyone, we decided to focus on two markets to start with — telco, using Kubernetes to deploy 5G, and industrial, more particularly, how to provide for the needs of factories. What we are building in both cases are complete blueprints, complete GitOps descriptions of the deployment of as many instances as you want for a given element,” said Barcet.

These blueprints, Barcet said, will dictate configurations for running Kubernetes on the edge, which he described as providing developers with the ability to treat the edge as any other part of their infrastructure.

“Edge will be an extension of what I call the ‘uber cloud.’ It’s a combination of whichever cloud footprint, data center footprint, and edge footprint a customer has. To be an extension, that means that you shouldn’t see, from a developer perspective, from a product owner perspective, any difference in your deployment when you’re deploying in the cloud or when you’re deploying at the edge,” said Barcet. “You want to encode your deployment principle once and then pick the location where you deploy your workload, and that’s all you should have to do.”
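The "encode your deployment once, then pick the location" idea can be sketched in a few lines. The model below is hypothetical — it is not Red Hat's actual API — but it illustrates the GitOps approach Barcet describes: a single workload description paired with a label-based selector that decides where it lands, so the definition itself never changes between cloud and edge.

```python
# Hypothetical sketch of "declare once, place anywhere": one workload
# description plus a label selector that picks target clusters.
# Cluster names and labels here are invented for illustration.

workload = {
    "name": "sensor-ingest",
    "image": "registry.example.com/sensor-ingest:1.4",
    "replicas": 2,
}

clusters = [
    {"name": "aws-eu-west", "labels": {"location": "cloud"}},
    {"name": "factory-01", "labels": {"location": "edge", "site": "lyon"}},
    {"name": "factory-02", "labels": {"location": "edge", "site": "brno"}},
]

def place(workload, clusters, selector):
    """Return the names of clusters whose labels satisfy the selector."""
    return [
        c["name"] for c in clusters
        if all(c["labels"].get(k) == v for k, v in selector.items())
    ]

# The workload definition is untouched; only the selector changes.
print(place(workload, clusters, {"location": "edge"}))
# -> ['factory-01', 'factory-02']
```

Swapping the selector to `{"location": "cloud"}` retargets the same workload to the cloud footprint — which is the whole point of treating the edge as just another location.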

By running Kubernetes at the edge, Barcet said, more and more processing can be moved out of the network core and closer to the network's edge, further reducing latency.

“There are more and more processes that were happening at the core of the network in the telecommunications environment that are now delegated. Your response time, instead of being compounded by the latency of each node between the radio tower to the core to get to the internet, you can now access the internet directly from the radio tower and you’re immediately out onto the internet,” said Barcet.

At the same time, moving Kubernetes out to the edge introduces a new problem: managing the complexity. This is where the now generally available Advanced Cluster Management comes in, providing a single view across clusters and the ability to manage and secure them while ensuring compliance and consistency.

“ACM is adding the notion of being able to manage a fleet of clusters. To do it efficiently, you need a policy-based engine. You cannot rely on a manual selection of roles when you’re talking about scales of thousands or more,” explained Barcet.
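A policy-based engine of the kind Barcet describes can be sketched as follows — a toy illustration, not ACM's implementation. Each policy is a predicate applied uniformly to every cluster in the fleet, so nothing depends on manually selecting individual clusters; the cluster data and policy names below are invented.

```python
# Minimal sketch of policy-based fleet management: every policy is
# evaluated against every cluster, and violations are reported per policy.
# Cluster records and policies are hypothetical examples.

clusters = [
    {"name": "cell-tower-001", "version": "4.5", "encrypted": True},
    {"name": "cell-tower-002", "version": "4.4", "encrypted": True},
    {"name": "warehouse-07", "version": "4.5", "encrypted": False},
]

policies = {
    "min-version-4.5": lambda c: c["version"] >= "4.5",
    "storage-encryption": lambda c: c["encrypted"],
}

def non_compliant(clusters, policies):
    """Map each policy name to the clusters that violate it."""
    return {
        name: [c["name"] for c in clusters if not check(c)]
        for name, check in policies.items()
    }

print(non_compliant(clusters, policies))
# -> {'min-version-4.5': ['cell-tower-002'],
#     'storage-encryption': ['warehouse-07']}
```

Because the check is declarative, it costs the same effort to run against three clusters or three thousand — which is what makes policy engines viable at the scales Barcet mentions.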

ACM manages these edge deployments using a pull model, in which each deployment checks back in with the central ACM hub to find out whether it needs a new configuration, ensuring it will still be updated even if it is disconnected for a time.
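The pull model described above can be sketched as an agent-side reconciliation loop — again a hypothetical illustration, not ACM's agent. On each tick the agent asks the hub for its desired configuration; if the hub is unreachable it simply keeps running the last known-good state and tries again later, which is why a temporarily disconnected edge site still converges once connectivity returns.

```python
# Sketch of a pull-model agent: poll the hub, apply changes, tolerate
# disconnection. Function names and the config shape are invented.

import time

def reconcile_once(fetch, apply, current_version):
    """One tick of the pull loop.

    `fetch` returns the desired config (a dict with a "version" key), or
    None when the hub is unreachable; `apply` installs a new configuration.
    Returns the version the cluster is running after this tick.
    """
    desired = fetch()
    if desired is None:
        # Disconnected: keep the last known-good configuration.
        return current_version
    if desired["version"] != current_version:
        apply(desired)  # converge on the hub's desired state
        return desired["version"]
    return current_version

def reconcile_loop(fetch, apply, interval_seconds=60):
    version = None
    while True:
        version = reconcile_once(fetch, apply, version)
        time.sleep(interval_seconds)

# Simulated run: hub reachable, then down, then publishing an update.
responses = iter([{"version": "v1"}, None, {"version": "v2"}])
applied = []
v = None
for _ in range(3):
    v = reconcile_once(lambda: next(responses), applied.append, v)
print(applied)  # -> [{'version': 'v1'}, {'version': 'v2'}]
```

The disconnected tick (the `None` response) applies nothing and changes nothing, so the agent picks up "v2" on the next successful poll — the behavior the article attributes to the pull model.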

Red Hat is a sponsor of The New Stack.

Feature image by Valiphotos from Pixabay.