5 Requirements for Data at the Edge

MayaData sponsored this post.

At KubeCon+CloudNativeCon NA 2020, Liz Rice, chair of the Cloud Native Computing Foundation’s Technical Oversight Committee (TOC), named edge as the second leading trend to watch in 2021.
Because OpenEBS is open source, we don’t have specific data on where its many deployments run; however, surveys and our commercial business indicate a growing share of edge deployments. Recent contributions to the multi-architecture image efforts also suggest that OpenEBS is increasingly being used on ARM-based clusters. Additionally, the edge-focused cloud provider Volterra selected OpenEBS Mayastor, as did another edge-focused managed service provider, Platform9. We are also working with a handful of large systems integrators as they help service providers and retailers build out their invariably Kubernetes-based edge deployments.
In short, we hear every day about requirements for data at the edge on Kubernetes. In this article, we share some of what we are hearing.
#CNCF TOC chair @lizrice is sharing the 5 technologies to watch in 2021 according to the TOC:
1. Chaos engineering
2. @kubernetesio for the edge
3. Service mesh
4. Web assembly and eBPF
5. Developer + operator experience
— CNCF (@CloudNativeFdn), November 20, 2020
Edge Requirements
We see five primary requirements for the management of data at the edge:
- Kubernetes native: Now that Kubernetes is the control plane, any solution that behaves differently than Kubernetes is somewhat suspect to the users we work with. Everyone from the U.S. Air Force to AT&T to the largest physical retailers in the world sees Kubernetes as their control plane for the edge. These users want to see Kubernetes extended as needed with operators, CRDs and containers, and that’s about it. If a solution does not fit into their toolchain and their expertise, it is suspect.
- Hyperconverged architectures: At the edge, space, power and management capacity are at a premium. Using locally attached storage extended by container attached storage dramatically reduces the footprint compared to attaching a Ceph cluster or another scale-out storage system or array. High-performance container attached storage also increases the possible density versus simply using a local disk: it makes it far easier to run and store multiple workloads on a node than dedicating entire hosts to particular workloads via direct-attached storage.
- Community and open source: This echoes the first point about being Kubernetes native: users do NOT want to be locked into any vendor’s proprietary solution. They want and expect to find a vibrant community that is independent of any one vendor. I find it ironic that while mega-firms like Microsoft and AWS increasingly embrace open source (a recent example being AWS open-sourcing the Kubernetes configuration it uses in EKS), there is still a tendency in storage, networking and security to sell proprietary boxes and software systems. When users are looking at an edge architecture that will entail many thousands of locations over many years, they seem to be especially nervous about legacy business models. After all, the demand for Kubernetes itself is driven, in part at least, at the C-level within organizations by a desire to limit or eliminate cloud and vendor lock-in.
- Performance: On the one hand, many workloads at the edge are not especially performance sensitive. On the other hand, some workloads, such as video analytics, certainly are; and as noted in the second point, every I/O operation the underlying hardware can deliver but the storage layer fails to pass through wastes heat, space and money.
- Enterprise-level support and solutions engineering: Finally, building and operating a distributed application is hard, and not many users are comfortable with their existing level of expertise in building and running massively distributed systems that include edge locations. Companies like Volterra, Mavenir and Platform9, along with at least parts of the global systems integrators we know, seem to be flourishing. Their businesses and technologies differ, but what they have in common is proven expertise in building, and helping to operate, Kubernetes-based systems at the edge.
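To make the "Kubernetes native" point above concrete: extending Kubernetes means defining CustomResourceDefinitions (CRDs) and letting operators reconcile them, rather than bolting on an external control plane. The sketch below builds a minimal CRD manifest and a matching custom resource as plain Python dicts. The `DiskPool` kind, the `openebs.example` group and the spec fields are illustrative assumptions for this article, not OpenEBS’s actual schema.

```python
# Illustrative sketch: a CustomResourceDefinition and a matching custom
# resource, built as plain dicts (the same shape you would serialize to
# YAML and apply with kubectl). The DiskPool kind, group and spec fields
# are hypothetical, chosen only to illustrate the pattern.

def make_crd(group: str, kind: str, plural: str) -> dict:
    """Build a minimal CRD manifest for a namespaced custom resource."""
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": f"{plural}.{group}"},
        "spec": {
            "group": group,
            "scope": "Namespaced",
            "names": {"kind": kind, "plural": plural},
            "versions": [{
                "name": "v1alpha1",
                "served": True,
                "storage": True,
                # Open schema for brevity; a real CRD would constrain spec.
                "schema": {"openAPIV3Schema": {"type": "object"}},
            }],
        },
    }

def make_diskpool(name: str, node: str, disks: list) -> dict:
    """Build an instance of the hypothetical DiskPool resource."""
    return {
        "apiVersion": "openebs.example/v1alpha1",
        "kind": "DiskPool",
        "metadata": {"name": name},
        "spec": {"node": node, "disks": disks},
    }

if __name__ == "__main__":
    crd = make_crd("openebs.example", "DiskPool", "diskpools")
    pool = make_diskpool("pool-on-node-1", "node-1", ["/dev/sdb"])
    print(crd["metadata"]["name"])  # diskpools.openebs.example
    print(pool["spec"]["node"])     # node-1
```

Once a CRD like this is applied, `kubectl get diskpools` works like any built-in resource, and an operator watches these objects and reconciles the node’s actual storage toward the declared state. That is the extension model the users described above are comfortable with.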
Conclusion
As you might imagine, we think our software fits well with the above trends. The most recent CNCF Survey also supports our findings on OpenEBS adoption and trends. OpenEBS itself is a CNCF project; we wrap our support and other open source software around it to productize it for the edge and other use cases.
Please try it out and share your feedback with us. You can apply now to receive a free trial license of Kubera Propel.
The Cloud Native Computing Foundation and KubeCon+CloudNativeCon are sponsors of The New Stack
Feature image via Pixabay.