Why the Edge Is Open

Packet sponsored this post.

Thanks to the open source community, moving workloads to the edge is fast becoming a reality. New latency-sensitive workloads, such as smart vehicles, augmented reality and other real-time applications, are driving this edge adoption.
The edge consists of compute and storage resources deployed at locations chosen to minimize latency to devices in the field, such as IoT sensors and mobile handsets. For developers, these edge locations provide the resources to support those field-deployed devices.
We’re seeing open source technologies leading the charge across the entire edge stack: from physical infrastructure (Open19) and processors (RISC-V) to, of course, the software stack (Akraino, Kubernetes, OPNFV — and the list goes on).
Historically, the ability to host workloads at edge locations has not been available to the general public. Startups and less entrenched companies have been unable to deploy applications to edge locations owned and operated by telecommunication operators. Distributed edge infrastructure providers, such as Akamai, are designed for a single purpose and not open to other general use cases.
However, recent changes in cell tower ownership, the availability of the Citizens Broadband Radio Service (CBRS) and the widespread adoption of open technologies have changed the landscape and made the edge truly available to all.
Meet You at the Tower!
Over the years, telecommunications operators have divested themselves, through sale and leaseback, of physical assets like cell towers and regional data centers. This has helped to create massive operators like Crown Castle, SBA Communication and American Tower — all of which have also invested in fiber and “small cell” assets. Looking to bring in additional lease revenue, these tower operators, as well as real estate investment trusts (REITs) like Brookfield, are starting to make these cell tower locations available to edge operators.
Packet is deploying small data centers at dozens of regional and edge locations for customers with early “edge” use cases like Sprint and Hatch. Meanwhile, the company is making infrastructure and network services in those locations available in a fully automated, public cloud manner. Need some compute at the network edge? Just make an API call, and the bare metal compute needed to run an edge application in dozens of locations is yours in just a few minutes.
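To make that “just make an API call” claim concrete, here is a minimal sketch of what provisioning a bare metal server at an edge facility via a REST API might look like. The endpoint URL, facility code, plan name and field names below are illustrative placeholders, not the exact Packet API schema:

```python
# Illustrative sketch: requesting bare metal compute at an edge facility
# over a REST API. All identifiers below are hypothetical placeholders.
import json
import urllib.request

API_TOKEN = "example-token"        # hypothetical API credential
PROJECT_ID = "example-project-id"  # hypothetical project identifier

def build_provision_request(facility: str, plan: str, hostname: str) -> urllib.request.Request:
    """Assemble a device-creation request targeting a single edge facility."""
    payload = {
        "facility": facility,            # e.g. a metro/edge location code
        "plan": plan,                    # hardware class of the bare metal server
        "hostname": hostname,
        "operating_system": "ubuntu_18_04",
    }
    return urllib.request.Request(
        url=f"https://api.example-metal.net/projects/{PROJECT_ID}/devices",
        data=json.dumps(payload).encode(),
        headers={"X-Auth-Token": API_TOKEN, "Content-Type": "application/json"},
        method="POST",
    )

req = build_provision_request("chi1", "small_x86", "edge-app-01")
print(req.get_method(), req.full_url)
```

A deployment tool would send this request (and poll for the device to become active) once per target facility — the point being that each edge location is addressed through the same automated interface rather than a bespoke colocation contract.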
Open Isn’t Just for Software Anymore
Projects such as Open19 and the Open Compute Project (OCP) have been leading the way in standardizing the physical side of data center infrastructure, enabling Packet and companies like VaporIO to open these edge locations at a reasonable operating cost. These projects bring efficiency techniques from hyperscale cloud environments (think power bus bars) to standard 19-inch racks that may be deployed in very small amounts. Open19 is especially aggressive about optimizing for such “subscale” deployments, removing the cables from servers to reduce install costs dramatically.
Silicon is also innovating for life at the edge: high core count Arm and AMD EPYC processors are ideal for these edge locations. The open source RISC-V architecture is quickly gaining steam, allowing companies to customize silicon for their specific hardware or application needs. These technologies have the potential to drive down costs and expand the options for making edge locations available to the public.
Can You Hear Me Now?
True edge access isn’t very useful without the accompanying wireless or last-mile connectivity. In addition to traditional carriers, we now also have the Citizens Broadband Radio Service (CBRS). This newly repurposed public spectrum, formerly used by the U.S. Navy and commercialized by the likes of Federated Wireless, allows developers to deploy reliable wireless service in localized areas. Early use cases include stadiums, airports and industrial zones.
CBRS has the potential not only to drive down latency, but also to give developers widespread access to wireless technologies, enabling the next generation of edge applications to flourish. With just a few additional API calls, compute infrastructure at the edge can be tied into a CBRS antenna along with whatever additional storage or custom processors (such as GPUs) the edge application requires.
All Together Now
And at the software layer, edge-aware open source technologies in cloud and clustering projects are flourishing. In addition to work at the Cloud Native Computing Foundation (CNCF), the Linux Foundation has created LF Edge to house software projects focused on the edge, and the OpenStack Foundation has released Airship and StarlingX (both aimed at widely distributed computing). These edge-aware software stacks are enabling the development of edge-aware applications.
This open approach to all layers of the stack is fascinating and also critical. As edge technologies move forward, we need to keep investing in open approaches, from the physical infrastructure, wireless connectivity and silicon to the software stack. This will ensure that innovation at the edge accelerates and that new ideas and business models are able to take root and grow.
The Cloud Native Computing Foundation and the Linux Foundation are sponsors of The New Stack.
Feature image via Pixabay.