Edge computing is a variant of cloud computing, with infrastructure services for compute, storage and networking placed physically closer to the field devices that generate data. This eliminates round trips to the data center and increases service availability. Since its introduction, edge computing has proven to be an effective runtime platform for solving unique challenges across telecommunications, media, transportation, logistics, agriculture, retail and other market segments.
Kubernetes has rapidly become a key ingredient in edge computing. With Kubernetes, companies can run containers at the edge in a way that maximizes resources, makes testing easier and allows DevOps teams to move faster and more effectively as these organizations consume and analyze more data in the field.
With data being created at an unprecedented rate, companies must consider how economical it is to transfer data from the edge to the core and whether it is less expensive to filter and pre-process data locally. Workloads that aren’t subject to demanding latency requirements should continue to be served by the most suitable cloud solutions. However, the coming wave of new use cases requires operators to rethink how the network is architected. And that’s where edge computing comes in.
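To make the trade-off concrete, here is a minimal sketch of edge-side pre-processing; the function names and thresholds are illustrative assumptions, not part of any specific product. The idea is that the gateway forwards only anomalous readings upstream and ships a compact aggregate instead of the raw stream:

```python
# Minimal sketch of local filtering at an edge gateway.
# All names and thresholds here are illustrative assumptions.

def filter_readings(readings, low=10.0, high=90.0):
    """Keep only out-of-range readings worth forwarding to the core."""
    return [r for r in readings if r < low or r > high]

def summarize(readings):
    """Aggregate the bulk of the data locally instead of shipping it raw."""
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else 0.0,
    }

readings = [42.0, 55.1, 97.3, 8.2, 60.0]
anomalies = filter_readings(readings)  # only the outliers leave the edge
summary = summarize(readings)          # one small record instead of five
```

With this pattern, the volume crossing the network scales with the number of interesting events rather than the raw sampling rate, which is the economic argument the paragraph above makes.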
This provides three benefits. First, lower latency boosts the performance of field devices, enabling them to respond not only faster but to more events. Second, reduced internet traffic cuts costs and increases overall throughput, allowing the core data center to support more field devices. Finally, internet-independent applications gain higher availability in the event of a network outage between the edge and the core.
Interest in edge computing is being driven by exponential data increases from smart devices in the IoT, the coming impact of 5G networks and the growing importance of performing artificial intelligence tasks at the edge — all of which require the ability to handle elastic demand and shifting workloads. As a result, Gartner says the amount of enterprise-generated data that is created and processed outside a traditional centralized data center or cloud will soar from 10% today to 75% by 2025.
Edge clouds should have at least two layers — both of which will maximize operational effectiveness and developer productivity, though each layer is constructed differently.
The first is the Infrastructure-as-a-Service (IaaS) layer. Besides providing compute and storage resources, the IaaS layer should satisfy the network performance requirements of ultra-low latency and high bandwidth.
The second involves Kubernetes, which has become the de facto standard for orchestrating containerized workloads in the data center and the public cloud, and has emerged as a hugely important foundation for edge computing.
While using Kubernetes for this layer is optional, it has proven to be an effective platform for those organizations getting into edge computing. Because Kubernetes provides a common layer of abstraction on top of physical resources — compute, storage and networking — developers or DevOps engineers can deploy applications and services in a standard way anywhere, including at the edge.
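As a sketch of what "deploying in a standard way anywhere" looks like in practice, the same declarative manifest can be applied to a core cluster or an edge cluster unchanged. The names, image and node label below are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-analytics              # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-analytics
  template:
    metadata:
      labels:
        app: edge-analytics
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""     # assumes edge nodes carry this label
      containers:
      - name: analytics
        image: registry.example.com/edge-analytics:1.0   # placeholder image
        resources:
          limits:
            cpu: "500m"
            memory: "256Mi"
```

Because the manifest describes the desired state rather than the machine it runs on, the same DevOps pipeline can target the data center, the public cloud or the edge.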
Kubernetes also enables developers to simplify their DevOps practices and minimize time spent integrating with heterogeneous operating environments, leading to happy developers and happy operators.
So how can an organization deploy these layers?
The first step is to think about the physical infrastructure and what technology can be used to manage the infrastructure effectively, converting the raw hardware into an IaaS layer.
This requires operational primitives for hardware discovery, with the flexibility to allocate compute resources and repurpose them dynamically.
Technology exists to automatically create edge clouds based on KVM pods, which effectively enable operators to create virtual machines with pre-defined sets of resources (RAM, CPU, storage and over-subscription ratios).
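Under the hood, such a KVM-backed virtual machine is defined by a libvirt domain whose resources are fixed in the definition. The following is a trimmed, illustrative fragment (a real domain definition needs additional elements such as an OS section; the name and sizes are assumptions):

```xml
<domain type='kvm'>
  <name>edge-worker-01</name>      <!-- illustrative name -->
  <memory unit='GiB'>4</memory>    <!-- pre-defined RAM -->
  <vcpu>2</vcpu>                   <!-- pre-defined CPU -->
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/edge-worker-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Pinning RAM, CPU and storage in the definition is what lets an operator carve a physical host into predictable VM "pods" and apply over-subscription ratios deliberately rather than ad hoc.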
Once discovery and provisioning of physical infrastructure for the edge cloud is complete, the second step is to choose an orchestration tool that will make it easy to install Kubernetes, or any software, on the edge infrastructure.
Then, voila, it’s time to deploy the environment and start onboarding and validating the application.
It will be fascinating to watch as more and more organizations adopt this model in the years to come.
To learn more about containerized infrastructure and cloud native technologies, consider coming to KubeCon + CloudNativeCon NA, Nov. 18-21 in San Diego.
Feature image via Pixabay.