KubeCon + CloudNativeCon sponsored this podcast.
We have more and more devices connecting and adding strain to the cloud. This leads to latency and wasted bandwidth, not to mention the environmental impact. Edge computing, which tries to close the gap between devices and their internet connection, is a potential solution for faster connectivity nearby. But does Kubernetes — the container orchestrator most everyone is using anyway — have a future on the edge as well?
The New Stack publisher Alex Williams sat down with Steven Wong and Dejan Bosanac at KubeCon+CloudNativeCon North America to talk about just this. Wong is a software engineer on the cloud native business unit at VMware and Bosanac is a senior software engineer at Red Hat. Both are also on the Kubernetes working group that looks to address the limitations of Kubernetes on the edge.
Wong offered the example of modern luxury car models, which — from entertainment systems to antilock brakes to traction controls to even power windows and doors — may have as many as 37 computers riding around inside them. Each of these connected objects is a source of data that often needs configuration and updates.
He looks at Kubernetes orchestration as being uniquely suited to solving this problem — eventually.
“The architects of Kubernetes started by building the foundation for this rich and, as it turns out, extensible API that can control things at massive scale through an API where you stated the desired outcome: ‘I want these things to be in this condition in the perfect world.’ And then things called controllers work in the background to relentlessly drive the state of the real world. They take note of what you said you wanted, measure what this thing is now, and relentlessly drive it toward your stated goal,” Wong said.
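The desired-state pattern Wong describes can be illustrated with a minimal sketch. This is not the Kubernetes API — the names and data structures here are hypothetical — just the shape of a controller that compares a declared goal against observed reality and computes the actions needed to close the gap:

```python
# Hypothetical sketch of the declarative, desired-state pattern Wong
# describes; names are illustrative, not the actual Kubernetes API.

def reconcile(desired, actual):
    """One pass of a controller: compare the stated goal against the
    measured state of the real world and return corrective actions."""
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if have < want:
            actions.append((app, "start", want - have))
        elif have > want:
            actions.append((app, "stop", have - want))
    return actions

# The declared outcome: "I want these things to be in this condition."
desired = {"web": 3, "cache": 1}
# What the controller observes right now.
actual = {"web": 1}

# A real controller would run this relentlessly in a loop, re-measuring
# the world and driving it toward the stated goal each time.
for app, verb, count in reconcile(desired, actual):
    print(f"{verb} {count} instance(s) of {app}")
```

Because each pass is just "measure, compare, act," many such controllers can run independently and in parallel — which is the property Wong points to next.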
Wong went on to say that Kubernetes is “parallelizable,” meaning you can have hundreds or thousands of these controllers driving toward desired states. This extensibility means Kubernetes isn’t limited to applications in the cloud — it could even be paired with edge computing at highly distributed retail sites.
Bosanac builds on that by pointing out that people trying to distribute compute resources is nothing new. To him, edge computing is just an extension of the reality of our cloud-based world — cloud-native architecture plus developer tools to build the applications that can run in the cloud and then be deployed to edge sites.
Bosanac is also a member of the Eclipse Foundation, which looks to build the fundamentals for an open-source Internet of Things cloud platform. The aim is to relieve the strain of all these new devices connecting to the cloud by letting them connect to the edge instead, which would improve latency and preserve bandwidth.
Wong offered the example of security cameras collecting 30 images per second. It’s not useful to upload all that video to the cloud when you only want to collect the license plates on vehicles or the faces of visitors. That machine learning, image recognition and tagging would be better done close by on the edge, with fewer errors, less latency, faster results and saved bandwidth. It’s all about reducing data at the edge — Wong says it’s potentially a three-times reduction.
He continued that machine learning on the edge is really just containerized apps, which Kubernetes is used to managing anyway.
“In a way you can argue that edge computing really is cloud computing, you’ve just inverted the place where these containerized apps are going to run,” Wong said.
The Cloud Native Computing Foundation, Red Hat OpenShift and VMware are sponsors of The New Stack.