Sharpening the Edge: The Linux Foundation Edge Framework and Taxonomy

The edge computing taxonomy and framework seek to balance various market lenses — e.g. cloud, telecom, cable, IT, OT/industrial, consumer.
Dec 3rd, 2020 12:45pm by Vikram Siwach

The Linux Foundation sponsored this post.

Vikram Siwach
Vikram is an LF Edge Governing Board member and Lead Product Manager at MobiledgeX, where he maps out the infrastructure and virtualization landscape for moving immersive and AI applications to the edge.

Companies in a wide range of vertical markets are aggressively exploring new commercial opportunities that are enabled by extending cloud computing to the edge of the network. The concept of edge computing promises exciting new revenue opportunities resulting from the delivery of new types of services to new types of customers, in both consumer and enterprise segments.

Yet most edge taxonomies and associated language today are biased toward the point of view of a single market or focus area. They often use ambiguous, “loaded” terms that can easily be misinterpreted (e.g. near and far, thin and thick). The new Linux Foundation (LF) Edge taxonomy is based on inherent technical tradeoffs spanning the edge continuum — absolutes that cannot be misinterpreted. It is comprehensive across all markets, while highlighting the unique tradeoffs and holistic views on top of which each market can build its own preferred language.

Founded in 2019, LF Edge aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud or operating system. It has more than 70 member companies and nine edge computing projects, including Akraino, Baetyl, EdgeX Foundry, Fledge, Home Edge, Open Horizon, Project EVE, Secure Device Onboard and State of the Edge.

This article introduces the full range of edge computing, its cloud native design principles, and its application for service providers. For more details, please download the full publication here.

Edge Continuum

Edge computing represents a new paradigm in which compute and storage are located at the edge of the network, as close as both necessary and feasible to the location where data is generated and consumed, and where actions are taken in the physical world. Edge computing is distributed cloud computing: multiple application components interconnected by a network, delivering intensive computing capabilities to the logical extremes of a network in order to improve the performance, security, operating cost and reliability of applications and services. By shortening the distance between devices and the cloud resources that serve them, and by reducing the number of network hops, edge computing mitigates the latency and bandwidth constraints of today’s internet — ushering in new classes of applications. The optimal location of these compute resources is determined by the inherent tradeoffs between the benefits of centralization and decentralization.

The edge computing continuum spans from discrete distributed devices to centralized data centers, along with key trends that define the boundaries of each category (see Figure 1 below). This includes the increasingly complex design tradeoffs that architects need to make as compute resources get closer to the physical world. The far right of the diagram shows centralized data centers representing cloud-based compute. These centralized facilities offer economies of scale and flexibility and can oversee the collective behavior of a large number of devices — for example configuring, tracking and managing them — but they are limited by the centralized location of the data centers and the fact that their resources are shared.

Figure 1: Summary of edge continuum

Moving along the continuum from centralized data centers toward devices, the first main edge tier is the Service Provider (SP) Edge — providing services delivered over the global fixed/mobile networking infrastructure. Like the public cloud, infrastructure (compute, storage and networking) at the Service Provider Edge is often consumed as a service. Solutions at the Service Provider Edge can provide more security and privacy than the public cloud, because they run on private operator networks rather than the public internet. The Service Provider Edge is distributed and brings edge computing resources much closer to end users.

The second top-level edge tier is the User Edge, which is delineated from the Service Provider Edge by being on the other side of the last mile network. It is sometimes necessary to use on-premises and highly distributed compute resources that are closer to end users and processes in the physical world, in order to further reduce latency, conserve network bandwidth, and increase security and privacy.

The edge computing taxonomy and framework were developed with careful consideration, seeking to balance various market lenses (e.g. cloud, telecom, cable, IT, OT/industrial, consumer) while also creating high-level taxonomy categories based on the key technical and logistical trade-offs shown above.

Edge Native: Extending Cloud Native to the Edge 

With containerization and Kubernetes, a rapidly increasing number of cloud native software applications are based on platform-independent, service-based architecture and Continuous Integration/Continuous Delivery (CI/CD) practices for software enhancements. The same benefits of cloud native development in the data center apply at the edge, enabling applications to be composed on the fly from best-in-class components — scaling up and out in a distributed fashion and evolving over time as developers continue to innovate.

Many web-scale design principles can be applied to implement cloud-like compute capabilities at the Service Provider Edge. Over the last few years, orchestration technologies like Kubernetes have made it possible to run cloud native workloads in on-premises, hybrid or multicloud environments. Most applications offloaded to the Service Provider Edge will not require significant changes to their design or code, and will retain continuous delivery pipelines that can deploy specific workloads to Service Provider Edge sites, such as those with low-latency, high-bandwidth or strict privacy needs. In addition, workloads may interact with networks in complex ways, for example prioritizing Quality of Service (QoS) for specific applications, such as giving priority to life safety applications.
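
As a minimal sketch of this pattern, the following Go snippet uses the Kubernetes client-go library to list the nodes that make up one edge site. It assumes a hypothetical topology.example.com/edge-site node label and site name; neither is part of any LF Edge specification.

```go
// List Kubernetes nodes belonging to one Service Provider Edge site.
// Assumes nodes carry a hypothetical "topology.example.com/edge-site"
// label; adjust to whatever labeling scheme your operator uses.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig the same way kubectl does.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Select only the nodes at the "metro-west" edge site (illustrative value).
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		LabelSelector: "topology.example.com/edge-site=metro-west",
	})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}
```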

Major content owners like Netflix, Apple and YouTube are expected to retain their cache-based distribution models, which entail storing state in the centralized public cloud along with Authentication and Authorization (AA) functions, while redirecting the delivery of content from the “best” cache as determined by Quality of Experience (QoE) at the client device — where “best” doesn’t always mean the nearest cache. This approach will be retained for other distributed workloads utilizing edge acceleration, such as Augmented Reality (AR), Virtual Reality (VR) and Massively Multiplayer Gaming (MMPG).

According to the design principles mentioned above, the Service Provider Edge will need to ensure a deterministic method of measuring and enforcing QoE, based on key application needs such as latency and bandwidth. As most internet traffic is encrypted, these guarantees will likely be based on the transport layer, leading to the evolution of congestion control algorithms — which determine the rate of delivery. A similar design principle will evolve for geographical data isolation policies for stores and workloads, beyond just complying with global data protection regulations.
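
To make the idea of transport-layer QoE measurement concrete, here is a toy Go probe that times a TCP handshake for latency and a small download for effective bandwidth. The endpoint and test object are hypothetical; a real system would sample continuously and aggregate percentiles rather than take one-shot readings.

```go
// Rough transport-level QoE probe: TCP connect latency plus effective
// download throughput against one edge site. A toy sketch only.
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	const host = "edge-site.example.com" // hypothetical edge endpoint

	// Latency: time a bare TCP handshake.
	start := time.Now()
	conn, err := net.DialTimeout("tcp", host+":443", 3*time.Second)
	if err != nil {
		panic(err)
	}
	latency := time.Since(start)
	conn.Close()

	// Throughput: time a small object download and divide bytes by duration.
	start = time.Now()
	resp, err := http.Get("https://" + host + "/probe-1mb.bin") // hypothetical test object
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	n, _ := io.Copy(io.Discard, resp.Body)
	elapsed := time.Since(start)

	fmt.Printf("connect latency: %v, throughput: %.1f Mbit/s\n",
		latency, float64(n*8)/elapsed.Seconds()/1e6)
}
```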

The below image shows an example deployment of highly-available edge applications at the Service Provider Edge, which could be federated across multiple service provider networks at peering sites, while also cooperating with public cloud workloads.

Edge Application Deployment at the Service Provider Edge

Developers can study the geographical consumption patterns of their customers, as well as determine the optimal latencies and QoS requirements of their applications. Using Machine Learning (ML) algorithms, they can even predict how these patterns might change over time, for advanced planning purposes. Orchestration services will emerge that allow developers to specify their workload requirements and provide automated placement.
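
As a toy stand-in for the ML-based prediction mentioned above, the sketch below applies simple exponential smoothing to a per-region request series; production systems would use far richer models, and the numbers here are purely illustrative.

```go
// Toy demand forecast for one region using simple exponential smoothing,
// standing in for the richer ML models the article alludes to.
package main

import "fmt"

// forecast returns a one-step-ahead prediction from a request-count series.
// alpha controls how heavily recent observations are weighted.
func forecast(series []float64, alpha float64) float64 {
	s := series[0]
	for _, x := range series[1:] {
		s = alpha*x + (1-alpha)*s
	}
	return s
}

func main() {
	// Hourly requests from one metro region (illustrative numbers).
	hourly := []float64{1200, 1350, 1500, 1480, 1700, 1900}
	fmt.Printf("expected next-hour demand: %.0f requests\n", forecast(hourly, 0.5))
}
```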

The deployment of application backends can be independent of network mobility or specific device attachment. Backend services can be deployed using a number of different strategies to enable mobility of edge applications, including:

  • Static, whereby the developer chooses the specific edge sites and the specific services for each site.
  • Dynamic, whereby the developer submits criteria to an orchestration service and the orchestration service makes best-effort decisions about workload placement on behalf of the developer. One implementation of this would have developers choose a region in which they yield control to a system operator’s or cloud operator’s orchestration system, in order to determine the optimum placement of workloads based on the number of requested compute instances, the number of users and any specialized resource policies (see the sketch after this list).
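
Below is a rough sketch of what a dynamic placement request might look like. The structure and field names are assumptions for illustration, not the schema of Akraino or any other orchestrator.

```go
// Hypothetical placement request for the "dynamic" strategy: the developer
// states requirements and the orchestration service chooses edge sites.
// Field names are illustrative, not any specific orchestrator's schema.
package main

import (
	"encoding/json"
	"fmt"
)

type PlacementRequest struct {
	Region        string   `json:"region"`           // area within which the operator may place workloads
	Instances     int      `json:"instances"`        // requested compute instances
	MaxLatencyMs  int      `json:"maxLatencyMs"`     // QoE target from client to backend
	MinBandwidth  int      `json:"minBandwidthMbps"` // minimum delivered bandwidth
	Resources     []string `json:"resources"`        // specialized resources, e.g. "gpu"
	DataResidency string   `json:"dataResidency"`    // geographical store/privacy policy
}

func main() {
	req := PlacementRequest{
		Region:        "us-west",
		Instances:     3,
		MaxLatencyMs:  20,
		MinBandwidth:  100,
		Resources:     []string{"gpu"},
		DataResidency: "us",
	}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out)) // body the developer would submit to the orchestration service
}
```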

The Akraino project is working on blueprints for the lifecycle management of edge applications, based on the following workflow for deployment:

  • Create the cluster, deploying microservices as a set of containers or Virtual Machines (VMs);
  • Create the application manifest, defining an application mobility strategy that includes QoE, geographical store and privacy policies;
  • Create the application instance, launching the edge application and autoscaling.

For more information on this topic, please visit the developer section for the Akraino Edge Stack project.

Identifying the Optimum Edge Location to Serve a User

The nearest edge location is not always the best. Instead, clients must be steered to application backends based on the most recently recorded QoE for the application at each geographically-located edge site. The network may provide QoS mapping to improve QoE.

Based on this design, an application discovery engine could be embedded across multiple CSPs, recording the health of the application backend and the QoE for each application across all edge sites within a region, and exposing a control API to identify the best location. This API can also be used to tune the rate of content delivery for the best experience. For example, content services like Netflix and YouTube maintain dozens of different bitrate encodings for the same movie or TV show, so that the optimal resolution can be delivered based on device characteristics, network congestion and other factors. A discovery engine can be employed that returns a ranked list of Uniform Resource Identifiers (URIs), identifying the optimum sites nearby (a toy ranking sketch follows the list below), based on selection criteria that include:

  • Edge application instances in sites geo-located based on the client’s location;
  • URI rank based on recent Layer 4 QoE measurements (latency and bitrate).
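
The following toy Go sketch illustrates such a ranking: it filters candidate sites by the client’s region and orders them by a composite of recent Layer 4 latency and bitrate measurements. The scoring weights and fields are illustrative assumptions, not the Akraino Find Cloudlet API.

```go
// Toy discovery-engine ranking: order candidate edge sites by recent
// Layer 4 QoE (latency and bitrate), filtered near the client's region.
package main

import (
	"fmt"
	"sort"
)

type Site struct {
	URI       string
	Region    string
	LatencyMs float64 // recent transport-layer latency measurement
	Mbps      float64 // recent delivered bitrate
}

// rank returns the URIs of sites in the client's region, best QoE first.
func rank(sites []Site, clientRegion string) []string {
	var local []Site
	for _, s := range sites {
		if s.Region == clientRegion {
			local = append(local, s)
		}
	}
	// Composite score: reward bitrate, penalize latency (weights are illustrative).
	score := func(s Site) float64 { return s.Mbps - 5*s.LatencyMs }
	sort.Slice(local, func(i, j int) bool { return score(local[i]) > score(local[j]) })

	uris := make([]string, len(local))
	for i, s := range local {
		uris[i] = s.URI
	}
	return uris
}

func main() {
	sites := []Site{
		{"https://a.edge.example.com", "us-west", 12, 180},
		{"https://b.edge.example.com", "us-west", 35, 240},
		{"https://c.edge.example.com", "eu-west", 8, 300},
	}
	fmt.Println(rank(sites, "us-west")) // same-region sites, best QoE first
}
```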

The LF Edge Akraino Edge Stack project has defined such an Application Discovery engine: please visit the Find Cloudlet section. For more information on discovery and Control APIs, please read the following supplemental paper: Akraino Edge Stack APIs.

What About Mobility?

Application mobility is based on resource awareness: stateless application backends can move across zones based on compute capacity, specialized resources and/or Service Level Agreement (SLA) boundaries. Stateful edge applications synchronize state from centralized servers to the edge, with Layer 7 redirection to the edge application, and operate consistently regardless of an individual CSP’s orchestration system.

Device mobility is based on route awareness. Provider networks are designed to be anchored to gateways, which leads to a suboptimal routing structure for latency-sensitive workloads running at the colocation (colo) edge. The good news is that this can be changed by leveraging container mobility techniques used by web-scale companies. But that requires virtualizing not just the compute (VNF/CNF), but also the networks, such that the underlying IP routing can be based on the identity of the application and the location of the device. Identifier-Locator Addressing (ILA) is a means to implement network overlays without the use of encapsulation, which can help achieve anchorless device mobility; a sketch of the addressing split follows.
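
The sketch below illustrates the identifier/locator split at the heart of ILA: a 64-bit locator (where the endpoint currently sits) is concatenated with a 64-bit identifier (who it is), so a device move rewrites only the locator half of its IPv6 address. All addresses and values here are illustrative.

```go
// Sketch of Identifier-Locator Addressing: an IPv6 address is split into
// a 64-bit locator (current location) and a 64-bit identifier (identity).
// Moving a device means rewriting only the locator half.
package main

import (
	"encoding/binary"
	"fmt"
	"net/netip"
)

// compose builds an IPv6 address from a locator and an identifier.
func compose(locator, identifier uint64) netip.Addr {
	var b [16]byte
	binary.BigEndian.PutUint64(b[:8], locator)
	binary.BigEndian.PutUint64(b[8:], identifier)
	return netip.AddrFrom16(b)
}

func main() {
	id := uint64(0x00aa_bbcc_ddee_ff01)    // stable identity of the device/task
	siteA := uint64(0x2001_0db8_0001_0000) // locator for edge site A
	siteB := uint64(0x2001_0db8_0002_0000) // locator for edge site B

	fmt.Println("at site A:", compose(siteA, id))
	// Device moves: only the locator changes; the identifier stays anchorless.
	fmt.Println("at site B:", compose(siteB, id))
}
```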

For more details and best practices for handling application and device mobility, as well as Service Provider Edge design considerations, read the service provider edge section.

I wish to thank all the participants in the Technical Architecture group, who came together under LF Edge to harmonize a framework for the technical and business aspects of edge computing. The effort spanned open source edge projects and communities focused on IoT, enterprise, cloud and telecom.

Feature image via Pixabay.
