
Machine Learning Models to Predict the Next ‘Stranger Things’

20 Jun 2022 7:00am

Last week I attended The AI Summit in London, where I discovered how companies are using machine learning (ML) and other forms of AI technology. One session I enjoyed was run by Branded Entertainment Network (BEN), a global product placement and influencer marketing company. Normally I run away from anything to do with “influencers,” but this turned out to be an interesting case study in modern ML best practices.

Incidentally, I discovered later that BEN was founded by none other than Bill Gates, back in 1989. He'd wanted to build a digital artwork system for homes (the original company name was "Interactive Home Systems"). It's a very different company now, but BEN still makes heavy use of software: the phrase "custom-built AI" appears four times on its homepage.

Jeff Barlow, vice president of engineering at BEN Group, began his portion of the session by discussing the “pyramid of engineers” for data engineering (see image below). His point was that data engineers lay the foundation with “data pipelining” and other backend tasks, data scientists build on that by adding their models and ML training, and finally, data analysts query the data and present insights visually.

Pyramid of engineers


Barlow then explained that BEN has invested significantly in Infrastructure-as-Code and has “a huge emphasis on templating” when it comes to managing its AI software. This approach, he said, allows them to help a data scientist “quickly add value to our organization.”


The Components of BEN’s ML platform.

Barlow explained more about the “flow” of how a data scientist works at BEN. The data scientist will create a new model from an existing template, which includes a project skeleton, CI/CD deployment recipes, job definitions, documentation and a “skeleton app configuration.”
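BEN's template itself isn't public, so as a rough illustration only, here is a minimal Python sketch of that kind of project scaffolding (all file names and the config keys are hypothetical):

```python
import json
from pathlib import Path
from tempfile import mkdtemp

# Hypothetical sketch of the scaffold Barlow describes: a project skeleton,
# a CI/CD deployment recipe, a job definition, documentation, and a
# "skeleton app configuration", all stamped out from one template.
SKELETON = {
    "src/train.py": "# model training entry point\n",
    "ci/deploy.yml": "# CI/CD deployment recipe\n",
    "jobs/batch.yml": "# scheduled job definition\n",
    "docs/README.md": "# Model documentation\n",
}

def scaffold(root: Path, model_name: str) -> Path:
    """Create a new model project from the template skeleton."""
    project = root / model_name
    for rel_path, contents in SKELETON.items():
        f = project / rel_path
        f.parent.mkdir(parents=True, exist_ok=True)
        f.write_text(contents)
    # Skeleton app configuration that a deployment pipeline reads later.
    config = {"model": model_name, "port": 8080, "dockerfile": "Dockerfile"}
    (project / "app_config.json").write_text(json.dumps(config, indent=2))
    return project

project = scaffold(Path(mkdtemp()), "viewership-model")
print(sorted(p.name for p in project.iterdir()))
```

The point of the pattern is that every new model starts from the same layout, so the deployment tooling can make assumptions about where configuration lives.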


The Flow

The benefit of the templating process, said Barlow, is that data engineers don’t have to worry about “the minutiae around AWS IAM policies [identity and access management], security groups, and stuff like that — we’ve baked that into the template.”

The data scientists then experiment with their data, using these templates. “Once they have something that they think is viable,” continued Barlow, “then they’ll go to the next step, which is registering their model. Registering their model entails storing any applicable binaries for easy access.” They then register the data store schema, to ensure consistency. Next in the workflow is setting up automated monitoring, if necessary.
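Barlow didn't show the internals of BEN's registry, but the registration step he describes — store the binary for easy access, then pin the data-store schema for consistency — can be sketched in a few lines (the class and its fields are hypothetical):

```python
import hashlib
from dataclasses import dataclass, field

# Illustrative sketch of a model registry: binaries are stored under a
# content hash, and each registered version carries the data-store schema
# that downstream consumers must agree on.
@dataclass
class ModelRegistry:
    binaries: dict = field(default_factory=dict)
    schemas: dict = field(default_factory=dict)

    def register(self, name: str, binary: bytes, schema: dict) -> str:
        """Store the model binary and pin its schema; return a version id."""
        digest = hashlib.sha256(binary).hexdigest()[:12]
        version = f"{name}:{digest}"
        self.binaries[version] = binary
        self.schemas[version] = schema
        return version

registry = ModelRegistry()
version = registry.register(
    "viewership-model",
    binary=b"\x00serialized-weights",
    schema={"show_id": "str", "week": "int", "predicted_views": "float"},
)
print(version)
```

Content-addressing the binary means re-registering an identical model is a no-op, while any change to the weights produces a new version.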


Jeff Barlow (left) explaining BEN’s flow.

The final part of BEN’s “flow” is deploying the result of a model into a web service. The deployment pipeline “reads the appropriate Dockerfile information in our configuration, that we set up earlier, that can then post that web service into production,” Barlow explained.
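A pipeline like the one Barlow describes might, for example, read the app configuration created from the template and turn it into container build and deploy commands. This sketch is an assumption about the shape of that step, not BEN's actual tooling:

```python
# Hypothetical sketch: the deployment pipeline reads the skeleton app
# configuration (model name, port, Dockerfile location) and produces the
# commands that publish the model as a web service.
def deploy_commands(app_config: dict, registry_url: str) -> list:
    image = f"{registry_url}/{app_config['model']}:latest"
    return [
        f"docker build -f {app_config['dockerfile']} -t {image} .",
        f"docker push {image}",
        f"docker run -d -p {app_config['port']}:{app_config['port']} {image}",
    ]

config = {"model": "viewership-model", "port": 8080, "dockerfile": "Dockerfile"}
for cmd in deploy_commands(config, "registry.example.com"):
    print(cmd)
```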

Then it is “rinse and repeat,” he continued. “You’ve now started to collect some data on how your model is performing, how users are interacting with that model.” After getting this feedback, the data scientist might opt to do more experimentation on the model, register it again, and then deploy another web service.

Barlow then talked about scaling the ML model, by deploying it as an application for end-users or by creating an API for developers to use. This enables the data scientists to “experiment rapidly to figure out what types of information [are] useful to our end users,” according to one of his slides. The same slide also noted that “data scientists are not frontend developers,” so Barlow said that templating is once again used to enable them to quickly publish an application or API.

Use Cases

So now we know how BEN runs its ML models and turns them into apps. But what kinds of use cases does BEN have for its “custom-built AI” software?

Tyler Folkman, chief technology and AI officer at BEN Group, talked about several use cases involving influencers on apps like TikTok and YouTube. Folkman claimed that BEN’s analysis was a big part of driving gamers to adopt TikTok as a platform. “One of the biggest performing gamers we brought over to TikTok was identified by the AI,” he said. “It wasn’t in conversations with the brand, it wasn’t in conversations with our internal experts. It was the AI coming to the table with some recommendations.”

Another interesting use case was getting AI to “predict viewership past 30 days at a weekly level for TV shows.” He cited the example of “Stranger Things” season one, before it became a cultural sensation. When it was first released, nobody knew that it would be a hit show — but could AI have predicted that?

BEN’s data scientists built a Bayesian model “that considers multiple factors including new season releases.” The model used “long-tail predictions for MMM data” (MMM stands for “media mix modeling”), which helped BEN’s customers — mostly large brands — do product placement on TV shows.
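BEN's actual model wasn't disclosed, but the general idea — a prior belief about a show's weekly viewership, updated as early data arrives, with a long-tail forecast for later weeks — can be illustrated with a toy conjugate-normal sketch (all numbers and parameter names are invented for illustration):

```python
# Toy Bayesian sketch: update a prior over a show's mean weekly viewership
# (in millions) as observed weeks arrive, then forecast the long tail with
# a simple geometric decay. A release bump could model new-season weeks.
def posterior(prior_mean, prior_var, observations, obs_var):
    """Conjugate update for a normal mean with known observation variance."""
    mean, var = prior_mean, prior_var
    for y in observations:
        k = var / (var + obs_var)        # gain: how much to trust new data
        mean = mean + k * (y - mean)
        var = (1 - k) * var
    return mean, var

def predict_week(mean, week, decay=0.9, release_bump=1.0):
    """Long-tail weekly forecast: geometric decay from the posterior mean."""
    return release_bump * mean * decay ** (week - 1)

# Prior from comparable shows; three observed weeks of viewership data.
mean, var = posterior(prior_mean=2.0, prior_var=1.0,
                      observations=[3.1, 2.9, 3.0], obs_var=0.5)
print(round(predict_week(mean, week=6), 2), "million viewers (week 6)")
```

The appeal of the Bayesian framing for a show like “Stranger Things” season one is exactly the cold-start case: with no history for the show itself, the prior from comparable shows carries the early forecast, and the data takes over as weeks accumulate.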

According to a graph shown by Folkman, their ML predictions closely correlated with the actual viewership of shows once they aired.

BEN's Predictive viewership graph.


“You’ve got to get your product in before they film, so you have to be committing to something before you really know what it’s going to do,” explained Folkman. So in this use case, the ML model helped BEN convince its customers that product placement on a new TV show was likely to produce results.

ML Is Influencing the Influencers

You don’t typically think of influencer marketing as being machine-led, but in BEN’s case, the data does appear to show that ML models help with viewership prediction and with spotting non-obvious talent for platforms like TikTok.

It’s yet more evidence that AI isn’t just about the future of work — it’s already influencing the present.

Feature image via Shutterstock.