What’s Holding Back Edge Analytics?

By 2025, it’s predicted there will be 41.6 billion Internet of Things (IoT) devices generating 79.4 zettabytes of data. While this explosion of connected devices will benefit businesses by providing access to more data to derive better insights, it’s also going to put immense pressure on enterprise architecture. Moving massive amounts of data from devices and sensors to a data center or the cloud introduces issues with latency, bandwidth, connectivity and cost.
Businesses are turning to edge computing as a strategy to handle this influx of data, applying analytics at the edge to gain actionable insights in real time. Rather than try to bring data to the data center or the cloud faster, a more efficient approach is to bring processing and analytics to the devices that create the data.
Cloud Limitations

But that’s easier said than done. Today’s enterprise architecture isn’t designed to handle IoT applications, which prevents businesses from unlocking the tremendous intelligence IoT devices can offer. From the CIO’s perspective, the infrastructure for IoT applications creates issues with manageability, security, scalability and integration. Drilling down into the integration issue, IoT infrastructure doesn’t mesh well with today’s largely cloud-based enterprise architecture. The data communications platform is cloud-to-cloud, and all the processing, even though it has been shrunk down to microservices, still runs in the cloud. Moving that processing from the cloud to IoT devices in the wild with limited connectivity is a challenge.
Consider a refrigerated truck hauling a valuable load of salmon roe. Outfitted with sensors, the truck needs to send its data to its organization’s cloud so someone can watch for signs of trouble, such as the load drifting out of its required temperature range. But when the truck travels through remote areas, it can’t rely on internet connectivity to send the data to the cloud for processing. That processing needs to happen inside the truck. Using these same sensors, developers can deploy and run analytics within the truck to monitor status, such as the level of condenser coolant in the refrigeration unit, receive warnings, and run predictive maintenance and other machine learning (ML) algorithms.
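As a minimal sketch of what such on-device analytics might look like, a rolling-average check can flag a failing refrigeration unit locally, with no cloud round trip. All names, thresholds and window sizes here are illustrative assumptions, not a real fleet API:

```python
from collections import deque

# Hypothetical on-device check: keep a short rolling window of
# temperature readings and raise a local alarm without waiting for
# connectivity. Window size and safe range are made-up example values.
WINDOW = 10
SAFE_RANGE = (-2.0, 2.0)  # degrees C for chilled cargo (illustrative)

readings = deque(maxlen=WINDOW)

def ingest(temp_c):
    """Record one reading; return True if the rolling average
    has drifted outside the safe range."""
    readings.append(temp_c)
    avg = sum(readings) / len(readings)
    return not (SAFE_RANGE[0] <= avg <= SAFE_RANGE[1])
```

A real deployment would run logic like this on the truck’s controller (in C or MicroPython, say) and buffer alerts until connectivity returns, but the decision itself never depends on the network.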
Foggy Gateways
One approach to overcome the limitations of the cloud is to design a fog computing infrastructure, where IoT gateways are placed near the edge to process the data from IoT devices before sending it to the cloud. While gateways have a place in IoT architectures and bring efficiency in certain situations, they can also be easily overloaded and create difficulties in environments with limited connectivity or high-frequency events.
For example, envision a factory floor with 100 CNC (computer numerical control) machines each sampling vibration at a rate of 1,000 hertz. That generates 100,000 data points per second. With fog computing, there will be latency as the data first has to cross the network and then be processed at the gateway, which is neither large nor fast enough to handle that volume. This creates the same delay and cost issues as moving large amounts of data to the cloud. By processing the data on the device, latency drops from potentially several seconds to a few milliseconds, so jobs can be stopped before catastrophic damage occurs to the part being machined or to the CNC machine itself.
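To make the on-device alternative concrete, here is a hedged sketch of an RMS vibration check that runs on the machine itself and can halt a job within one sample block. The threshold, block size and function names are assumptions for illustration:

```python
import math

# Illustrative on-device vibration check: compute RMS over each
# one-second block of 1 kHz samples and decide locally whether to
# halt the job. The limit is an arbitrary example value.
SAMPLE_RATE_HZ = 1000
RMS_LIMIT = 2.5  # arbitrary units

def rms(block):
    """Root-mean-square amplitude of one block of samples."""
    return math.sqrt(sum(x * x for x in block) / len(block))

def should_stop(block):
    """Decide in milliseconds, on the machine, with no network hop."""
    return rms(block) > RMS_LIMIT
```

The point is not the math, which is trivial, but where it runs: the stop decision never leaves the machine, so there is no gateway or cloud round trip in the critical path.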
Skills Disconnect
The entire cloud computing enterprise architecture is built around high-level programming languages deployed on cloud servers, while IoT environments center on embedded applications written in C/C++ that haven’t yet been integrated into the enterprise architecture.
There is a disconnect in the skills that can cause friction between these two groups. The developers need to integrate the models from the data scientists into IoT applications, but they struggle to do so as they may not understand the programming languages.
Data scientists write code in R and Python for corporate applications, and they don’t know anything about embedded programming, such as what an LPC1768 microcontroller is. They work with open source languages to create beautiful models that can predict when a compressor is going to fail, but they don’t know how to deploy them. Then there are C/C++ developers who worry about hardware-level plumbing, such as how to get data off a BME280 sensor over an I2C bus and how to fit their code into 256KB of memory. But they have no idea what R is.
The development tools used by data scientists have output formats for their predictive models, but those formats don’t have analogs on embedded devices. For example, a Spark-based model makes assumptions that developers will be using a Linux platform with a file system holding a tiny database that includes the centroids for k-means clustering. But developers don’t necessarily have a file system on a microcontroller platform. This disconnect between data scientists and developers slows down the process and adds cost to building and deploying IoT applications.
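One common workaround, sketched below with made-up numbers, is to export the trained centroids as constants compiled into the firmware, so the device needs only a nearest-centroid lookup rather than a file system or database:

```python
# Centroids exported from a trained k-means model and baked into the
# firmware as constants (the values and cluster labels are invented
# for illustration). No file system needed on the device.
CENTROIDS = [
    (0.2, 0.1),   # cluster 0: normal operation
    (1.5, 2.0),   # cluster 1: degraded bearing
    (3.0, 4.5),   # cluster 2: imminent failure
]

def classify(features):
    """Return the index of the nearest centroid (squared distance)."""
    def dist2(c):
        return sum((f - x) ** 2 for f, x in zip(features, c))
    return min(range(len(CENTROIDS)), key=lambda i: dist2(CENTROIDS[i]))
```

On a real microcontroller the same idea would be a `const` array in C, but the principle is identical: the data scientist’s model is reduced to a handful of numbers the embedded code can carry anywhere.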
Come Down from the Clouds
Industry leaders recognize the complexity of creating and deploying IoT applications within today’s enterprise architecture and are forming consortiums to create standards and frameworks like the Open Neural Network Exchange (ONNX), the Predictive Model Markup Language (PMML), TensorFlow Lite and tinyML.
But these efforts need to be consolidated so there is an easier way to deploy models from data scientists; it’s still very complicated. One way to do this is to adopt a cloud-based development mentality and use similar technologies, like containers, with microcontrollers. Cloud-based containers are too large for IoT devices, but they can be built smaller, edge-native, and deployed on microcontrollers. Container technology provides a develop-once, deploy-anywhere capability that lets developers create IoT applications across platforms. That type of thinking is missing from the world of embedded computing.
The industry understands there’s an issue with today’s enterprise architecture supporting IoT applications. By taking the familiar programming and concepts from cloud environments and adapting them for embedded computing platforms that can be run on sensors and microcontrollers, businesses will be able to truly capitalize on edge analytics to make intelligent, real-time decisions.
Feature image via Pixabay.