
Edge Computing and the Cloud-Native Ecosystem

18 Apr 2018 10:28am, by Megan O’Keefe

Low latency, reduced bandwidth, reduced backhaul — these are the promises of edge computing, the practice of moving intensive workloads from the cloud out to the edge of the network. Finally, in 2018, use cases from mobility and IoT to video and machine learning are converging around the need to process lots of data closer to end-devices.

But as a new industry forms around edge, as Service Providers pilot 5G edge platforms, and as different industries work with different definitions of “the edge,” it is a challenge to converge towards one vision for what a real-world edge platform looks like. This article presents one concept for an edge platform that relies on open source, cloud-native technologies.

Edge Computing as Multicluster Orchestration

Diagram: edge orchestration at a glance

Megan O’Keefe, Systems Engineer at Cisco
Megan is a Systems Engineer at Cisco, where she works on a rapid-prototyping team dedicated to innovation in cloud, video, and data center technologies. Megan has spent the last year immersed in edge computing, where she has worked with Service Providers to help prototype a new platform for cloud-native edge applications.

One way to think about edge computing is as an extension of the cloud, and most key edge use cases assume some cloud involvement. For instance, an enterprise might train a machine learning model in the cloud, then serve the latest model at the edge. It follows that an edge platform should be workload-agnostic, like the cloud, and leverage existing cloud platform technologies such as Kubernetes to ensure consistency between cloud and edge application deployments. One edge microdatacenter — a group of servers placed between cloud and end-device — would correspond to one Kubernetes cluster.

Using Kubernetes at the edge is advantageous because it supports different kinds of workloads, including containers, functions, and virtual machines. But simply installing Kubernetes onto thousands of these microdatacenters does not solve edge computing’s unique technical challenges. For instance, there is the issue of how to bootstrap these edge devices at scale, then install Kubernetes and platform tooling across all the sites.

Then, there must be a way for application developers to deploy different workloads out to many edge clusters at once. There should also be a way for developers to set up implicit deployments (“put my application where the traffic is”) without having to worry about which of the thousands of edge microdatacenters actually run the application. There is also the challenge of load balancing traffic across clusters, so that each request resolves to the closest edge server. Edge clusters should also be able to scale workloads across sites autonomously, which requires some “neighbor awareness” between edge sites. An edge operator may also want to organize these microdatacenters into a complex topology, with different workloads deployed at the regional and local levels.
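To make the implicit-deployment idea concrete, here is a minimal Python sketch of a placement policy that picks target clusters by recent traffic. The cluster names, threshold, and function name are all invented for illustration; a real platform would feed this from live telemetry.

```python
# Hypothetical sketch: "put my application where the traffic is."
# Given recent request counts per edge cluster, pick the busiest
# clusters as deployment targets.

def place_by_traffic(requests_per_cluster, max_sites=3, min_requests=100):
    """Return the clusters that should run the app, busiest first."""
    busy = [(count, cluster)
            for cluster, count in requests_per_cluster.items()
            if count >= min_requests]          # ignore near-idle sites
    busy.sort(reverse=True)                    # busiest first
    return [cluster for _, cluster in busy[:max_sites]]

traffic = {"edge-nyc": 950, "edge-sfo": 40, "edge-ams": 300, "edge-sin": 120}
print(place_by_traffic(traffic))
# → ['edge-nyc', 'edge-ams', 'edge-sin']
```

The developer only declares the policy (top three busiest sites above a threshold); the platform decides which microdatacenters actually run the workload.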

Finally, managing edge clusters that span a vast geographic area poses a set of logistical challenges not unlike managing lots of IoT devices. For example, there are physical security issues, heterogeneous hardware, and variable network setups to worry about.

Towards an End-to-End Edge Platform

Building out an edge platform to address all these technical requirements is non-trivial, which is why nobody has yet built a unified edge platform. But I argue that uniting the edge platform ecosystem under a common set of open source tools is essential to accelerating innovation around edge.

Diagram: edge device, platform, and application management

So what might this edge platform look like?

First, it is essential that each component — device, platform, and application management — taps into the same edge inventory. For instance, a bare-metal provisioning tool like RackHD could pull a set of MAC addresses from the edge inventory, remote-boot those devices, then write some results back into the inventory. Then, a platform manager could take those results from the edge inventory, along with some user-defined information about how to cluster the devices, then install Kubernetes, along with other platform tooling such as Fluentd and Prometheus for logging and monitoring. Once all the edge sites are clustered and ready to accept new workloads, there must exist some logical layer on top of these edge Kubernetes clusters, to handle implicit app deployment and cross-cluster load balancing.
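The device-to-platform handoff described above can be sketched in a few lines of Python. This is a toy model of the shared edge inventory, not a real integration: the inventory schema and function names are invented, and a real device manager would invoke a RackHD workflow rather than flipping a flag.

```python
# Hypothetical sketch: a shared edge inventory driving both device and
# platform management. Schema and names are invented for illustration.

inventory = {
    "aa:bb:cc:00:00:01": {"site": "edge-nyc", "booted": False},
    "aa:bb:cc:00:00:02": {"site": "edge-nyc", "booted": False},
    "aa:bb:cc:00:00:03": {"site": "edge-ams", "booted": False},
}

def provision_devices(inv):
    """Device manager: remote-boot each MAC, write results back."""
    for mac, record in inv.items():
        record["booted"] = True  # stand-in for a real RackHD workflow

def build_clusters(inv):
    """Platform manager: group booted devices by site into clusters."""
    clusters = {}
    for mac, record in inv.items():
        if record["booted"]:
            clusters.setdefault(record["site"], []).append(mac)
    return clusters  # next: install Kubernetes, Fluentd, Prometheus per site

provision_devices(inventory)
print(build_clusters(inventory))
# → {'edge-nyc': ['aa:bb:cc:00:00:01', 'aa:bb:cc:00:00:02'],
#    'edge-ams': ['aa:bb:cc:00:00:03']}
```

The point is the data flow: both managers read from and write to the same inventory, so clustering decisions always reflect the latest provisioning results.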

The first two components, device and platform management, are largely solved problems that can be addressed with existing tools. But the last component, application management, is an unsolved problem: how could hundreds of thousands of Kubernetes clusters work together with minimal oversight?

The Next Hurdle: Edge Application Management

Edge computing is just one use case for multicluster Kubernetes, and the concept of managing multiple Kubernetes clusters is not new. In 2016, Kubernetes introduced Cluster Federation, a control plane for multicluster workload deployment and load balancing. Since then, the multicluster special-interest group within Kubernetes has moved away from a centralized control plane approach and towards a more disaggregated set of APIs, ingress controllers, and tools. All these tools, however, have skewed towards cloud-hosted Kubernetes. Federated Ingress, for example, currently only supports clusters hosted in Google Cloud.

Therefore, to work towards this edge platform vision, new tools must be created to address multicluster orchestration in raw Kubernetes. To that end, let’s walk through a high-level architecture for a multicluster application manager.     

Diagram — edge application management and multicluster DNS

The central principle of the edge app manager is that clusters should be able to act as autonomously as possible, without relying on a central control plane. That said, a certain amount of logic must run centrally.

For instance, there should be an aggregated user interface and an API to allow app developers to deploy workloads to the edge without having to interact with individual clusters. There might also be a central DNS server that can route incoming edge traffic. For example, say that one edge cluster is running an application but another isn’t. An end-device makes a request to this application and, with anycast, the DNS request routes to the nearest edge cluster. This edge cluster queries the central DNS server and gets back the list of clusters that are running the relevant application. In this way, the edge DNS server can forward the device’s request to the closest edge site running the application, in effect telling the end-device: “I’m not running what you need, but my neighboring cluster is.” Running custom CoreDNS plugins both centrally and at the edge is one way to accomplish this.
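Here is a minimal Python sketch of that two-tier lookup, assuming a central registry of which clusters run each application and a local table of round-trip times to neighboring sites. All names and numbers are invented; a real implementation would live in CoreDNS plugins, not application code.

```python
# Hypothetical sketch of the two-tier edge DNS lookup. The central
# registry and RTT table below are invented placeholder data.

CENTRAL_REGISTRY = {"video-cache": ["edge-ams", "edge-sin"]}

# Round-trip latency in ms from this edge cluster to known neighbors.
NEIGHBOR_RTT = {"edge-ams": 12, "edge-sin": 180, "edge-nyc": 90}

def resolve(app, local_cluster="edge-nyc"):
    """Pick the best cluster to serve `app` for a request that
    anycast delivered to `local_cluster`."""
    running = CENTRAL_REGISTRY.get(app, [])
    if local_cluster in running:
        return local_cluster            # serve the request locally
    if not running:
        return None                     # app not deployed anywhere
    # "I'm not running what you need, but my neighboring cluster is."
    return min(running, key=lambda c: NEIGHBOR_RTT[c])

print(resolve("video-cache"))
# → 'edge-ams' (nearest neighbor running the app)
```

Only the app-to-clusters mapping is centralized; each edge site makes its own proximity decision, which keeps the clusters as autonomous as possible.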

Other features of this edge app manager might include load balancing and traffic control between services across clusters (multicluster Istio), along with cross-cluster scale-up and scale-to-zero, unified authentication and security policies, and cloud provider integration. This edge app manager, then, might eventually comprise several different tools, both new and existing, to orchestrate applications across these microdatacenters.
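One simple way to picture per-cluster scale-up and scale-to-zero: each cluster derives its own replica count from its recent request rate, dropping to zero when idle. The per-replica capacity figure below is an invented placeholder.

```python
# Hypothetical sketch of autonomous per-cluster scaling. The capacity
# number is an invented placeholder, not a measured value.

def desired_replicas(requests_per_sec, per_replica_capacity=50):
    """Scale to zero when idle, otherwise run enough replicas for the load."""
    if requests_per_sec == 0:
        return 0                                      # scale-to-zero
    return -(-requests_per_sec // per_replica_capacity)  # ceiling division

loads = {"edge-nyc": 240, "edge-ams": 0, "edge-sin": 30}
print({cluster: desired_replicas(rps) for cluster, rps in loads.items()})
# → {'edge-nyc': 5, 'edge-ams': 0, 'edge-sin': 1}
```

Because each site scales from its own metrics, no central controller is needed in the steady state; the neighbor-awareness described earlier only matters when an idle site must hand traffic to a busy one.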

To close, it is an exciting time for edge computing, and the time is now to create workload-agnostic, cloud-native platforms to run edge applications. Let’s unite around open source to help accelerate exciting edge use cases.

Megan will be speaking about “Are You Ready to Be Edgy? — Bringing Cloud-Native Applications to the Edge of the Network” at KubeCon + CloudNativeCon EU, May 2-4, 2018 in Copenhagen, Denmark.

This article was contributed by Cisco on behalf of KubeCon + CloudNativeCon Europe, to be held May 2-4, 2018, in Copenhagen, Denmark.

Feature image via Pixabay.


