For much of the past decade, the talk in IT has centered on the cloud, including how much of the data and which applications should reside there. However, the rise of the edge in recent years has shifted the thinking of many IT vendors and enterprises, challenging them to figure out what Gary Ogasawara calls the “post-cloud” world.
“What’s happening is everyone’s recognizing that the vast majority of data is being generated out at the edge, with autonomous cars, IoT [Internet of Things] sensors, etc.,” Ogasawara, chief technology officer for object-based storage systems vendor Cloudian, told The New Stack. “The hyperscalers are moving from the cloud outwards to the edge and then there’s a lot of development from the edge, especially from the hardware vendors, inward to the cloud.”
Open source technology will be a key to bringing cloud native offerings to the edge, giving organizations and developers greater agility, faster application development cycles and less vendor lock-in than they see in cloud environments. The Cloud Native Computing Foundation (CNCF) has helped enterprises embrace cloud native technologies in private, public and hybrid cloud environments.
Standardization Needed at the Edge
However, hardware and software standardization is crucial for any open source project to succeed, and the edge is currently highly fractured, with players like Amazon Web Services (AWS), Microsoft Azure and Google Cloud using disparate standards and offering limited interoperability, Ogasawara said.
Kubernetes will play a central role in paving the way for cloud native technologies to make their way to the edge, but even the open source container platform — whose development is managed by CNCF — faces challenges, he said.
“We ourselves have been working on moving Kubernetes and having it work for the edge,” he said. “At Cloudian, we’re working on the storage aspect of it, so it’s really part of the infrastructure for the edge. That’s giving us some challenges and we see the advantages.”
The edge will be an increasingly significant part of the IT picture — the third leg in a stool that also includes on-premises data centers and the cloud — as the amount of data created at the edge increases and demand for artificial intelligence (AI), analytics and automation grows. IDC analysts are predicting that by 2023, more than half of all new IT infrastructure will be deployed at the edge. This comes as Gartner is forecasting that by 2025, 75% of enterprise-generated data will be created and processed outside of central data centers or the cloud.
Extending Beyond the Cloud
There’s a rush by cloud providers and hardware and software makers to extend the reach of their products and services to the edge. For example, AWS has its Greengrass software that brings cloud capabilities — such as collecting and analyzing data — to edge devices. Azure offers its edge-focused Azure Stack Edge hardware-as-a-service and Azure IoT Edge, a fully managed service on the Azure IoT Hub. Google Cloud IoT includes a range of tools to connect, process, store and analyze data at the edge and in the cloud.
More traditional hardware and component makers also are positioning themselves for the edge. Nvidia’s $40 billion bid for low-power chip designer Arm is one example. At the same time, Dell, Hewlett Packard Enterprise and others are pushing to offer systems that are smaller and rugged enough to fit in such places as oil rigs, manufacturing floors and retail stores.
“There are standardization efforts like Intel’s MECA [modular edge compute architecture] and some standardization effort on that as well by the Linux Foundation, which has a specific group called Linux Foundation Edge that’s focusing on what can be done to standardize on the hardware side on the edge,” Ogasawara said.
A Unique Edge
The uniqueness of the edge — from remote locations to smaller environments — challenges vendors trying to move their hardware and software out there. In storage, for example, the software needs to be elastic, something Cloudian engineers have worked to ensure with the company’s scalable software and other offerings, he said. By contrast, AWS may have a more difficult time moving its S3 object storage service to the edge because it wasn’t built for such an environment, he added.
APIs need to work at the edge as well as in the cloud and the data center, and software needs to meet the demand for real-time decision-making at the edge for such jobs as autonomous driving.
“Making sure that software and APIs can respond to that real-time matter is another challenge that’s unique to the edge,” Ogasawara said. “If it didn’t need to be real-time, then we could take our time and go talk to the cloud and come back. But it’s a different, unique aspect of the edge.”
Kubernetes brings a range of benefits to hybrid environments that include the edge, he said. Software can be developed once and run anywhere, meaning a “software developer can write an application and have it work in Kubernetes and then take that same application that works at the edge and move it to a data center or they could also move into the cloud. That flexibility is very powerful and makes the software development process very efficient,” he said.
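The write-once, run-anywhere flow Ogasawara describes can be illustrated with an ordinary Kubernetes Deployment. The manifest below is a hypothetical sketch — the application name and image are invented — but nothing in it is tied to a particular environment, so the same file can be applied to an edge cluster, a data center cluster or a managed cloud cluster:

```yaml
# Hypothetical Deployment manifest. Because the spec references no
# environment-specific resources, the same file works unchanged on
# an edge cluster, an on-premises cluster or a cloud cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-ingest              # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sensor-ingest
  template:
    metadata:
      labels:
        app: sensor-ingest
    spec:
      containers:
      - name: sensor-ingest
        image: example.com/sensor-ingest:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Moving the application between layers then becomes a matter of pointing kubectl at a different cluster context, e.g. `kubectl --context edge-site apply -f deployment.yaml` versus `kubectl --context cloud-region apply -f deployment.yaml`.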
Kubernetes Is Key
As an overlay on top of the OS, Kubernetes can run atop different operating systems and leverage whatever hardware resources are available, from big servers with 20 cores and 256GB of RAM to edge systems with two cores and 4GB of memory. Organizations also can rely on the Kubernetes scheduler, kube-scheduler, to map jobs to available resources. The scheduler also lets enterprises right-size their environments, using only the resources they need and scaling up or down as necessary.
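That right-sizing is expressed through resource requests and limits, which kube-scheduler uses to place pods only on nodes with enough free capacity. A minimal sketch, with hypothetical numbers sized for the kind of two-core, 4GB edge node mentioned above:

```yaml
# Hypothetical pod spec sized for a constrained edge node.
# kube-scheduler binds the pod only to a node whose free capacity
# covers the requests; the limits cap what the pod may consume.
apiVersion: v1
kind: Pod
metadata:
  name: edge-analytics                 # hypothetical name
spec:
  containers:
  - name: analytics
    image: example.com/analytics:1.0   # hypothetical image
    resources:
      requests:
        cpu: "500m"      # reserve half a core
        memory: "512Mi"
      limits:
        cpu: "1"         # never more than one full core
        memory: "1Gi"
```

The same spec could carry much larger numbers for a data center deployment; only these values change, not the workload definition.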
“That gives the user a lot of advantages in terms of economics,” the CTO said. “That’s why this is pretty important at this stage, as software developers are looking to build applications that run across those three layers of edge, data center and clouds.”
That said, there are challenges for Kubernetes, which is tackling a highly complex problem. Various APIs that compute workloads use — data APIs for transferring data, for example — can be standardized, letting developers and infrastructure vendors write to the same interfaces. The more difficult part is the control APIs that address such issues as positioning and scheduling the software or specifying network options.
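The control side — positioning and scheduling software — does have standard expressions in the core Kubernetes API, such as node selectors and tolerations. A sketch using a hypothetical node label and taint to pin a workload to edge nodes:

```yaml
# Hypothetical spec steering a workload onto edge nodes: the
# nodeSelector targets a node label, and the toleration lets the
# pod land on nodes tainted to repel ordinary workloads.
apiVersion: v1
kind: Pod
metadata:
  name: edge-gateway                  # hypothetical name
spec:
  nodeSelector:
    tier.example.com/location: edge   # hypothetical node label
  tolerations:
  - key: "edge-only"                  # hypothetical taint key
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: gateway
    image: example.com/gateway:1.0    # hypothetical image
```

Because these fields are part of the standard API, any conformant Kubernetes distribution honors them the same way — which is exactly the kind of shared control surface the fragmented edge currently lacks.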
Software platform projects have failed in the past, with OpenStack being a prime example, Ogasawara said.
“It’s not a matter of adding more and more components and opening it up to more and more functionality,” he said. “That was the death of OpenStack. What you need is like how Linux is controlled by one very strong arbiter of what should go in and out. You need to make certain design choices on how you do the networking and what you expose that may not be the best for all possible situations but is the best overall to keep a simpler and/or more universal framework for everyone to use.”
There’s also an ongoing balkanization of Kubernetes, with different vendors offering their own flavor: think of Red Hat with OpenShift, Google with Anthos, VMware with Tanzu, and SUSE with Rancher and the lightweight K3s distribution. Those different offerings still must all implement the important parts of the Kubernetes API, Ogasawara said.
“As needs change, there’s going to be these differences in the different flavors of Kubernetes and then you get into the same problems,” he said. “Then you’re locked into OpenShift or you’re locked into Google Anthos and nobody except for those vendors wants that. Everyone has learned that API-first is a good strategy, so having the same control APIs for everybody is a very positive thing. It just improves the development for everybody to use it.”
Developers also have to deal with the CNCF’s efforts to simplify aspects of Kubernetes, which can mean a loss of detail. Moving from a typical environment to Kubernetes, they at times don’t have the fine-grained control they’re used to on Linux, such as inspecting system calls with strace. With Kubernetes, much of that is purposely abstracted away to make software more portable for environments like the edge.
All these challenges are where CNCF comes in. The organization has done a good job of ensuring the disparate Kubernetes platforms adhere to certain universal norms — its certification test suite that all Kubernetes flavors are tested against being an example — to push back against the ongoing challenge of balkanization. In addition, the group has not allowed itself to be dominated by a single vendor.
“They take care to do that with their conferences and their decision-making process. They’ve benefited a lot from the ones that have gone before and fallen,” Ogasawara said. “But it’s an ongoing challenge because it’s a matter of continuity and maintaining this. As more and more people start using Kubernetes, it will be harder and harder to keep this type of control, because as more people want to use it, they want to use it in different ways.”
Amazon Web Services, the Cloud Native Computing Foundation and VMware are sponsors of The New Stack.