
Fiddler Drills into the Decisions Behind AI Decision-Making

10 Dec 2020 5:00am

Too often, for companies using artificial intelligence, the technology is essentially a black box with little visibility into how their models are performing or how they arrive at their decisions.

Fiddler Labs wants to change that.

“One of the problems that AI teams are suffering today is that when they operationalize AI, they are flying blind. So it’s essentially like your AI is this really complex jet engine that you’re installing inside your organization, and you really don’t have a cockpit view of what’s going on within your AI systems,” said Krishna Gade, Fiddler’s founder and CEO.

Palo Alto, California-based Fiddler Labs grew out of Gade’s experience at Facebook and that of his co-founder, Amit Paka, at Samsung, where he worked on recommendations in shopping apps. The third co-founder is Manoj Cheenath, the company’s chief architect.

A broad range of industries uses machine learning and artificial intelligence, including finance, insurance, e-commerce, transportation and healthcare. IDC predicts global spending on AI systems will reach $97.9 billion in 2023, more than 2.5 times the amount spent in 2019. But companies and consumers don’t necessarily trust automated decision-making and increasingly want to understand how those decisions are made, particularly in instances such as healthcare or driverless cars.

Cracking the ‘Black Box’

Gade was an engineering leader at Facebook working on news feed ranking. Various teams used the technology to recommend friends, groups you might want to join and other features.

“This became particularly important after the 2016 elections when there was a lot of news around mistrust around Facebook’s News Feed, potential fake news and misinformation and all of that. So we needed to invest in debugging tools and diagnostics to unlock the machine learning models that are powering News Feed to give insights into questions like, ‘Why am I seeing the story?’ ‘Why is this story going wider?’ ‘Why is this publisher doing so well?’ and so on and so forth,” Gade explained.

It became a cross-company effort to provide that feedback, and ultimately led to an end-user-facing feature called “Why Am I Seeing This?”

“[So] I decided to actually start a company because there wasn’t really anything at the time to help AI and ML teams to deploy and operationalize AI in a responsible and trustworthy manner,” he said of founding the company in 2018.

He ticks off four essential problems with AI systems:

  • Lack of transparency
  • Model drift
  • Potential bias
  • Compliance, or the lack of it

The COVID shutdowns, in particular, have created ever-greater potential for model drift, he said.

“Because AI really works on finding patterns in historical data to predict the future. … When your data itself changes, when your current reality is changing every day because of COVID, for example, businesses that are applying for loans are not the same businesses that applied for loans last year, when everything was good in the economy. And then there are people who lost their jobs and are applying for loans who are not the same people who applied [before]. Things that are being bought on e-commerce stores are not the same items that were bought before COVID. And so your models need to be retrained continuously, need to be monitored, so that you have this visibility into what’s going on,” he said.

A full understanding of your models also can provide insight into decision-making across different demographics, such as age, gender, ethnic group or region.

And governments are introducing more compliance regulations. Article 22 of the European Union’s General Data Protection Regulation (GDPR), for instance, gives citizens the right to an explanation behind automated decision-making that affects their lives. And in certain industries, such as financial services, if you are running models that are non-compliant, you can face litigation and heavy fines, he said.

“As per IDC research, lack of Machine Learning Operations (MLOps) and Trustworthy Artificial Intelligence (AI) are two of the top three challenges in realizing AI at scale,” said Ritu Jyoti, program vice president, Artificial Intelligence Research at IDC. “Fiddler Labs’ pluggable and explainable AI engine is enterprise-ready. Its easy integrations to multiple data sources on cloud or on-premise environments, and working with a wide variety of custom-built models, is empowering businesses to manage, deploy, monitor, and explain AI models in a flexible and efficient manner across a broad set of industry verticals.”

‘Explainability’ Engine

Fiddler faces competition from the likes of IBM’s AI Explainability 360, an open source collection of algorithms for explaining AI model decision-making; Microsoft’s InterpretML; Google’s What-If Tool for TensorBoard; and the startup Kyndi, building what it calls an explainable natural language processing platform.

Facebook created a tool called Fairness Flow to help detect bias in AI. Accenture’s Teach and Test is another.

Fiddler relies on recent research and methods such as Shapley values and Integrated Gradients to understand model predictions.
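To give a sense of how one of these methods works, here is a minimal sketch of the Integrated Gradients idea from the research literature, not Fiddler’s actual implementation: attributions come from averaging the model’s gradients along a straight path from a baseline input to the real input.

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    """Approximate Integrated Gradients attributions for input x.

    f_grad: function returning the model's gradient w.r.t. its input.
    baseline: a reference input (e.g. all zeros).
    """
    # Interpolate between the baseline and the input.
    alphas = np.linspace(0.0, 1.0, steps + 1)
    grads = np.array([f_grad(baseline + a * (x - baseline)) for a in alphas])
    # Riemann approximation of the path integral of gradients.
    avg_grad = grads.mean(axis=0)
    return (x - baseline) * avg_grad

# Toy linear model f(x) = w . x, whose gradient is the constant w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
attribs = integrated_gradients(lambda z: w, x, np.zeros(3))
# For a linear model the attributions recover w * x exactly.
```

A useful property of the method is "completeness": the attributions sum to the difference between the model's output at the input and at the baseline, so each feature gets a share of the prediction.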

The team built proprietary plug-and-play technology with what it calls “explainability” as its differentiator, according to Gade.

“It can explain a neural network; it can explain a simple logistic regression model… a model created on textual data versus tabular data. It can work across different model formats, like a TensorFlow model format, or a PyTorch model format. It’s very flexible. … And then on top of that, we’re able to process massive amounts of data, so we can monitor these models continuously at scale,” he said.

“If you’re an e-commerce company, you may be making hundreds of thousands or even millions of recommendations on a given day. And you’re logging all of those prediction logs. Fiddler can consume all of those prediction logs, so we can monitor changes in predictions over time. And help you keep track of drift over time. Then it would be able to pinpoint … which features have changed for the last week or day that have caused the models to drift in their behavior,” he said.

In a loan example on its monitoring demo, interest rate, duration of the account, debt-to-income ratio and loan amount could be factors driving the model drift. The user can also drill down to look at outliers and their effect on behavior over time, the effect of common data errors, and service metrics such as average traffic, latency and error rate.
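The article doesn’t specify which drift metric Fiddler computes over those prediction logs; a common choice for per-feature drift of this kind is the Population Stability Index (PSI), which compares a feature’s recent distribution against a baseline window. A sketch under that assumption:

```python
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of one feature.

    Values near 0 mean the distribution is stable; above roughly 0.2
    is often treated as significant drift (a rule of thumb, not a standard).
    """
    # Bin edges taken from the baseline distribution's quantiles.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    b_frac = np.histogram(baseline, edges)[0] / len(baseline)
    c_frac = np.histogram(current, edges)[0] / len(current)
    eps = 1e-6  # avoid log(0) for empty bins
    b_frac, c_frac = b_frac + eps, c_frac + eps
    return float(np.sum((c_frac - b_frac) * np.log(c_frac / b_frac)))

# Hypothetical interest-rate feature before and after a regime change.
rng = np.random.default_rng(0)
pre_covid = rng.normal(0.12, 0.02, 10_000)
post_covid = rng.normal(0.16, 0.03, 10_000)  # shifted distribution
drift = psi(pre_covid, post_covid)
```

Computing a score like this per feature, per day, is what lets a monitoring system pinpoint which inputs changed when a model’s behavior starts to drift.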

The technology is available as a managed cloud service and as an on-prem version.

The 20-person company has raised a total of $14 million, and recently announced undisclosed investment and a strategic collaboration with Lockheed Martin Ventures to work on development, testing and scaling of the technology for defense and aerospace industries. It has a partnership with Facebook and its Captum model interpretability library for PyTorch, a partnership with the job search marketplace Hired, and investment from the Amazon Alexa Fund.

“Fiddler is building services that are key to the evolution of explainable AI as the company works to demystify the ‘black box’ of AI in industry verticals including financial services and social networking,” said Zain A. Gulamali of the Amazon Alexa Fund. “The Fiddler team is one of the strongest we’ve seen positioned to tackle this challenge, and we are excited to support the company as it continues to grow and address explainable AI in areas such as voice and conversational interfaces.”

It has seven customers in production, including a couple of large banks, according to Gade. Its customers use the technology in a couple of different ways: to test models before they’re launched into production and to continuously monitor models in production.

Feature image by Gerd Altmann from Pixabay.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: fiddler.ai.
