Machine Learning for Real-Time Data Analysis: Training Models in Production
Some of the most sophisticated real-time data analytics involves training advanced machine learning models while they’re deployed in production. With this approach, the models’ weights and features are continually updated with the most recent data available.
Consequently, model outputs become more refined, precise, and accurate for highly specific segments of any particular use case.
Streaming data platforms and streaming data engines are ideal for this form of real-time data analysis, since they supply the ongoing data necessary to tailor model responses with low latency. This data informs the feature selection process that enables models to adjust to a vast array of circumstances that impact their results.
According to Gul Ege, senior director of advanced analytics at SAS, "It makes a lot of sense for the product and user data, and their features and their selections, to be updated, and the model to be updated, as they change."
Supporting use cases span everything from computer vision monitoring to online recommendation engines for ad tech, insurance technology, e-commerce and more. With such a wide variety of applications, the capacity to simultaneously train and deploy machine learning models is becoming increasingly vital to the advancement of real-time data analysis.
Training in Production
Recommendation engines provide a good example of the utility derived from training machine learning models while they're in production. Regardless of the particular application, this methodology is considered a progression beyond the approach in which models are trained offline, deployed online, and then compared against their offline performance to see whether their scores have changed. Feature selection for these applications splits into two sides, as illustrated by an ad tech use case in which real-time recommendations surface ads based on someone's most recent clicks on an e-commerce site, for instance.
"You have the features of the product and features of the person, and what the recommendation system should recommend is dependent on both," Ege specified. Although the features of the product may not be as dynamic as those of the users browsing the site, the ability to align them, in real time, with the latest data is essential for producing timely, relevant recommendations.
“The features are the behavior of the end user, what their interactions are with the site,” Ege commented. “And, the product has features. If I’m looking for a red skirt, please don’t show me blue trousers or a purse.”
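The interplay Ege describes, matching a user's streaming click behavior against static product features, can be sketched with a toy scorer. This is a minimal illustration, not SAS's implementation; the feature names and the overlap-based scoring are assumptions made for clarity.

```python
# Toy sketch: fold streaming click events into a user interest profile,
# then score candidate products by feature overlap with that profile.
# Feature vocabulary and scoring rule are illustrative assumptions.
from collections import Counter

def update_profile(profile: Counter, clicked_product_features: list) -> None:
    """Fold a new click event into the user's running interest profile."""
    profile.update(clicked_product_features)

def score(profile: Counter, product_features: list) -> int:
    """Score a candidate product by overlap with the user's recent interests."""
    return sum(profile[f] for f in product_features)

profile = Counter()
update_profile(profile, ["skirt", "red"])  # the user just clicked a red skirt

candidates = {
    "red skirt": ["skirt", "red"],
    "blue trousers": ["trousers", "blue"],
    "purse": ["purse"],
}
best = max(candidates, key=lambda p: score(profile, candidates[p]))
# "red skirt" wins: its features overlap the click history; the others score 0
```

A production system would use learned embeddings and model weights rather than raw counts, but the shape of the problem is the same: user features update with every event, product features stay comparatively fixed, and the recommendation depends on both.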
Historic Data Considerations
Despite the rapidity at which data is generated for delivering recommendations with this approach, model features are also informed by certain historic data considerations. The training period is rarely instantaneous; it's often continual, with the model tending to perform better over time. According to Ege, for many deployments in which models are trained, deployed, and updated online, "Some of them take some time to warm up. You can start with the first optimization of, let's say, a customer making a transaction. And then, the same customer comes back again, making another one. So, the model warms up over time."
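The warm-up Ege describes can be illustrated with the simplest possible online learner: a running estimate that is refined incrementally each time the same customer transacts again. This is a hedged sketch, not any vendor's implementation; real systems update model weights rather than a single mean, but the incremental-update pattern is the same.

```python
# Sketch of a model "warming up" online: a per-customer estimate of typical
# transaction size that sharpens with each new transaction, without retraining
# from scratch. Purely illustrative; real models update weights this way.
class OnlineMean:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x: float) -> float:
        """Incorporate one new observation and return the updated estimate."""
        self.n += 1
        self.mean += (x - self.mean) / self.n  # incremental (Welford-style) mean
        return self.mean

customer = OnlineMean()
for amount in [40.0, 60.0, 50.0]:  # the same customer returns repeatedly
    estimate = customer.update(amount)
# after three transactions the estimate settles at 50.0
```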
Each of the behaviors in the respective transactions impacts what the model learns about that customer, others like him or her, or however else organizations have segmented the data for the models’ predictions. “As long as those [behaviors] exist and the history of them exist, you can build up the history online actually and make the recommendation,” Ege mentioned. Results are frequently improved by deploying multiple models — and algorithms — to address a particular business problem.
For InsurTech use cases (in which quotes and varying insurance products are offered to customers in real time after they input information online), organizations "might have multiple algorithms running underneath that fit the situation better," Ege observed. "They all have slightly different data availability. It depends on how much history you have and the features you have. It's different flavors of the same problem."
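One way to picture "multiple algorithms running underneath" is a routing layer that picks a model based on how much history is available for a given customer. The thresholds and model names below are hypothetical, invented for illustration; they are not drawn from any SAS product.

```python
# Hypothetical routing between models for the same business problem,
# keyed on data availability. Thresholds and names are illustrative.
def pick_model(history_length: int) -> str:
    """Return the model best suited to the amount of customer history."""
    if history_length == 0:
        return "cold_start_heuristic"      # no history: fall back to population priors
    if history_length < 10:
        return "lightweight_online_model"  # sparse history: fewer features, less variance
    return "full_feature_model"            # rich history: all features available

# Different customers, different data availability:
# "different flavors of the same problem."
choices = [pick_model(n) for n in (0, 3, 50)]
```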
Training Offline, Deploying and Scoring Online
Despite the propensity to accelerate the data science process by simultaneously training and deploying models online, there are still situations in which real-time data analysis benefits from keeping these two steps distinct. It's not uncommon for models to be crafted and trained offline, then deployed online, where real-time event data is used to score them before their online performance is compared to their offline performance.
One of the determining factors for adopting this time-honored method pertains to the quantity and variation of data required for the model's training. These concerns are especially relevant in cases in which "the technique or the problem needs more data than whatever is going to stream to that large model," Ege pointed out.
By training models offline, organizations have greater latitude to inform the models’ learning with a wider selection of data and greater amounts of historic data — such as financial records for determining churn, for instance, that date back several years. The basic premise is that such models “need to be trained with enough data to capture the normal, so that you can then capture the abnormal when you deploy them,” Ege noted.
This requirement applies to certain anomaly detection applications. Once the training period for those models is completed offline, users can still score them online to monitor their performance with streaming data. Examples include “computer vision for quality control,” Ege said. “If you’re manufacturing something and there’s a crack or something, the sooner you detect it and take it off the line, the less money you lose.”
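The train-offline, score-online pattern can be sketched for anomaly detection: learn what "normal" looks like from enough historical data, then flag streaming values that fall outside that band. This is a deliberately simple stand-in (a mean-and-deviation band, not computer vision); the data and the threshold are illustrative assumptions.

```python
# Sketch of the offline/online split for anomaly detection:
# train on historical data to "capture the normal," then score a
# stream of new readings online. Data and threshold are illustrative.
import statistics

def train_offline(history: list) -> tuple:
    """Learn the normal operating range from historical data."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomaly(x: float, mean: float, std: float, k: float = 3.0) -> bool:
    """Online scoring: flag values far outside the learned normal band."""
    return abs(x - mean) > k * std

# Offline phase: enough historical readings to characterize "normal"
mean, std = train_offline([10.0, 10.2, 9.8, 10.1, 9.9, 10.0])

# Online phase: score streaming readings as they arrive
flags = [is_anomaly(x, mean, std) for x in [10.05, 9.95, 14.0]]
# only the last reading is flagged: it falls well outside the normal band
```

The sooner the flagged item is pulled off the line, the smaller the loss, which is why scoring happens online against streaming data even though training happened offline.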
Core Value Proposition
It's becoming fairly commonplace to employ machine learning models for real-time data analysis. Traditional data science practice for these applications entails creating models offline before putting them into production online. As Ege noted, there are still scenarios in which this method is advisable.
However, the ability to train models while they’re in production, while updating their features and weights based on real-time inputs, is critical for ensuring models are reacting to the most recent data available. Being able to do so is foundational to real-time data analysis’s core value proposition of acting in the moment, while also ensuring machine learning is as useful as possible for fulfilling this objective.