Better Data Logistics Is the Key to Effective Machine Learning

When humans interact with modern machines, there is almost always some kind of machine learning program running in the background.
The quality of that machine learning model, and therefore the quality of the human’s experience, depends on the quality of the underlying data — the more data the model has access to, and the more up-to-date that data is, the more accurate the model is going to be.
But in many cases, organizations fail to manage data logistics in a way that gets the highest-quality, most up-to-date data to their models.
As a result, the quality of the machine learning models suffers.
This isn’t an academic problem: machine learning models that don’t work well cause real-world disasters, from a naval threat-detection system returning a false positive or a false negative, to a pipeline leak going undetected, to a critical purchase being blocked because a credit card was wrongly flagged for fraud.
These types of errors give machine learning a bad name. And many use cases for ML remain unexplored because teams are unsure how the required training data would be sourced and prepared.
Data Logistics Today: Bottlenecks and Wishful Thinking
The reason these machine learning models don’t work is that there’s nothing modern about how edge data moves from place to place. In some cases, teams must resort to shipping physical hard drives via FedEx. This dramatically reduces the usefulness and availability of data, and therefore the ability of edge devices to make better decisions through higher-quality models.
Even in high-connectivity environments, moving data around is prohibitively expensive. And data engineering teams are so overworked that any change to the flow of data that an ML engineer requests will most likely get assigned a ticket that will wait in a queue for months before someone can address it. Iteration becomes impossible.
The industry is generally aware that physically shipping hard drives around the globe is a poor way to extract data and update models. But when teams start working on ways to improve the movement of data, they often start with a set of assumptions that simply don’t hold true in the real world.
Most data logistics architectures assume uninterrupted connectivity, which is reality in precisely zero situations. Even the highest-connectivity environments are going to suffer outages — all of the public clouds have outages; data centers have outages; networks have outages; cities have power failures.
To make matters worse, many safety-critical ML applications use data that’s collected in low-connectivity environments and run in low- or no-connectivity environments.
As a result, projects fail to harness the data being collected in the field to build more powerful, more accurate ML models that can identify threats ranging from a dangerous situation in an oil pipeline to a slippery spill in a big box store.
We have massive computing power available to us, but our inability to move data in a way that works in the real world hampers our ability to leverage that power and build applications that solve real problems.
Better Data Logistics
When we talk about data logistics, we’re talking about the process of moving data from point A to point B. It’s just like regular logistics of physical goods — the process by which something is moved from one point to another.
We have to think about data logistics if we want to get value out of the enormous quantity of data we’re collecting, because data has no value unless it’s used and analyzed, which requires it to move. The only business reason for data to be truly at rest is compliance retention, and even then you may need to retrieve that data later.
Data is critical to computing, modern or otherwise. Computer science boils down to the practice of mutating data and displaying data to users. How our infrastructure handles data and moves data from point to point is critical to making applications that are both a technical success as well as a business success.
Effective data logistics needs to be built for the real world. It should be able to automatically sync data when connectivity is available and collect and store data when connectivity is lost, all without conflict, data loss or failures due to poor connectivity. It should be simple enough that it can be adjusted without an experienced data engineer. It should be as declarative as the postal service: Declare your data’s destination and let the data logistics system take care of the rest.
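To make that concrete, here is a minimal store-and-forward sketch in Python of the behavior described above: records are always written to a local durable buffer first, and a periodic sync pass flushes them to their destination whenever connectivity is available. The endpoint URL, the outbox table and the send_to_destination helper are hypothetical placeholders, not any particular product’s API.

```python
# Minimal store-and-forward sketch; names and endpoint are hypothetical.
import json
import sqlite3
import time
from urllib import error, request

DB_PATH = "edge_buffer.db"                   # local durable buffer on the edge device
DESTINATION = "https://example.com/ingest"   # hypothetical central ingest endpoint


def _connect() -> sqlite3.Connection:
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS outbox ("
        " id INTEGER PRIMARY KEY AUTOINCREMENT,"
        " payload TEXT NOT NULL)"
    )
    return conn


def record(event: dict) -> None:
    """Always write locally first; never block on the network."""
    conn = _connect()
    with conn:  # commits the insert
        conn.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),))
    conn.close()


def send_to_destination(payload: str) -> bool:
    """Attempt one upload; return False on any connectivity failure."""
    req = request.Request(
        DESTINATION, data=payload.encode(), headers={"Content-Type": "application/json"}
    )
    try:
        with request.urlopen(req, timeout=5):
            return True
    except (error.URLError, OSError):
        return False


def sync_pass() -> None:
    """Flush queued records in order; stop at the first failure and retry later."""
    conn = _connect()
    rows = conn.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
    for row_id, payload in rows:
        if not send_to_destination(payload):
            break  # connectivity lost again; keep the rest buffered
        with conn:
            conn.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
    conn.close()


if __name__ == "__main__":
    record({"sensor": "pipeline-42", "reading": 0.87, "ts": time.time()})
    while True:
        sync_pass()
        time.sleep(30)  # periodic retry; data stays durable between attempts
```

A production system would also need conflict resolution, backpressure and security, but the core pattern of buffering locally and syncing opportunistically is what lets edge data survive intermittent connectivity.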
Right now, a lack of effective data logistics is preventing machine learning applications from reaching their potential. Let’s fix that.