Why ‘Explainable AI’ Can Benefit Business
If you’ve ever gotten a letter from a bank that explained how different financial issues influenced a credit application, you’ve seen explainable AI at work — a computer used math and a set of complex formulas to calculate a score and determine whether to approve or deny your application.
In making that decision, some data points mattered more than others. Maybe your long history of on-time payments or your low debt load contributed to your application’s approval.
Similarly, explainable AI shows humans how it arrived at a decision by evaluating different inputs in its calculations. While that might sound obscure or relevant only to the most hardcore data people, explainable AI brings significant business advantages that anyone interested in applying AI should consider. It also offers a window into how AI works and builds trust in its recommendations.
Not All AI Is Explainable
Humans build AI systems, but those builders cannot always determine precisely how an AI comes up with a specific decision or output. Those kinds of AI systems are sometimes called “opaque” because it’s hard to know what exactly happened inside them. They make a decision or spit out a number, but it’s difficult to know precisely the process that led to that result.
However, many AI processes are built so that humans can understand how they arrived at their conclusions. These are called “explainable.” In some industries and countries, there’s growing interest in regulation to require explainability when AI is used in certain areas, such as financial services, human resources, and health care. Marketers can also play a role in responsible AI governance, especially when explainable analytic methods are used in marketing processes.
What Explainable AI Provides for Business
Explainable AI can reveal both what factors were most important to the system overall and what factors were most important to any specific decision or output. In data science, these factors are typically called “features.”
Two types of explanations may be available:
- Global explanations. For example, if you’re predicting customer churn, you might learn that customer service interactions and website visits are the top features that guided the model’s predictions about churn overall, across all your customers.
- Local explanations. While a global explanation for an AI’s predictions is helpful, it’s even more valuable to pinpoint the most effective features for a specific prediction. In the churn example, why is a particular customer predicted to be highly likely to churn? Their personal reasons might be recently reduced activity and late payments. Knowing those reasons lets you decide on the right action to take to retain that customer — or to let them churn.
Ideally, an explainable AI system predicting churn would offer global and local explanations. You could receive detailed insights into the importance of features at both levels — the broader issues linked to churn and the specific, customer-level explanations. Both of these kinds of explanations have great utility for businesses that take advantage of AI.
Using Global Explanations for Strategy
Global explanations of predictive models are useful for understanding how the model operates and evaluating whether it’s functioning as expected. The features revealed to be most important in guiding the model’s predictions should make sense, even if you find something surprising. In examining important features, you may also notice issues: features ranked in ways that don’t make sense, or missing features that you know should be present. Global feature importance information can guide the next steps in refining your model.
Once you’re satisfied with your model, the global explanation for the model’s predictions can also serve as an invaluable source of information to guide business strategy. These high-level predictive insights can reveal critical areas for investment or improvement.
For example, let’s return to the customer churn scenario. You notice that an important predictor in your churn model is the feature representing the number of times a customer reached out to customer service. This insight might help you decide that customer service training and staffing is an area where your business needs to invest to help reduce churn across the board.
Using Local Explanations for Personalizing the Customer Experience
This high-level guidance is just one way that explainable AI aids business goals. What’s even more valuable is the ability to explain individual predictions — for example, the predicted behavior for every customer analyzed with a model.
Knowing which features most strongly contributed to a predicted outcome for an individual customer is invaluable information. You can now address those factors directly to try to change (or reinforce) the outcome for that customer.
In the case of churn, maybe the strongest predictor for a particular likely-to-churn customer is decreased activity with your service. Armed with this detail, you can now plan how to reach out to that specific customer to try to retain them. Maybe it’s a promotion or a special event invitation that could reinvigorate their interest, get them re-engaged and prevent them from churning.
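A local explanation like this can be sketched simply. The example below uses a logistic regression on synthetic data and attributes a single customer’s prediction to each feature via the coefficient times that customer’s deviation from the average — a simplified stand-in for richer local-attribution methods; the features (activity, late payments) mirror the scenario above but are illustrative assumptions.

```python
# Local explanation sketch: why is THIS customer predicted to churn?
# Synthetic data; per-feature contribution = coefficient * deviation from mean.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 500
activity = rng.normal(10, 3, n)       # weekly activity with the service
late_payments = rng.poisson(0.5, n)   # late payments in the past year

# Synthetic label: churn rises with late payments, falls with activity.
logit = -0.5 * activity + 1.5 * late_payments + 3
churned = (logit + rng.normal(0, 1, n) > 0).astype(int)

X = np.column_stack([activity, late_payments])
names = ["activity", "late_payments"]
model = LogisticRegression().fit(X, churned)

# One hypothetical at-risk customer: low activity, two late payments.
customer = np.array([3.0, 2.0])
contrib = model.coef_[0] * (customer - X.mean(axis=0))

# Positive contributions push this customer toward churn.
for name, c in sorted(zip(names, contrib), key=lambda p: -abs(p[1])):
    print(f"{name}: {c:+.2f}")
```

Here, reduced activity surfaces as a churn driver for this customer — exactly the kind of detail that tells you a re-engagement offer is the right outreach.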
Of course, you wouldn’t laboriously work through rows and rows of feature importance details for every one of your thousands of customers and then individually select the right action to take. Instead, you can use these row-level predictive insights to define customer segments that each receive distinct actions matched to their needs.
Customer segmentation is a common activity in business analytics, of course, but it’s typically based on business rules set with information on past customer behavior. When explainable AI provides forward-looking predictions, those segments can instead be based on what’s likely to happen in the future. That’s a far more valuable perspective for those who want to personalize and shape customers’ experiences tomorrow, not just see what happened yesterday.
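Turning row-level predictions into action-ready segments can be as simple as thresholding each customer’s predicted churn probability. The cutoffs and segment actions below are illustrative assumptions, not recommended values.

```python
# Segmentation sketch: map predicted churn probabilities to segments.
# Thresholds (0.7, 0.4) and actions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
churn_prob = rng.uniform(0, 1, 8)  # stand-in for per-customer model output

def segment(p):
    if p >= 0.7:
        return "high-risk: retention offer"
    if p >= 0.4:
        return "watch: re-engagement email"
    return "healthy: standard journey"

for p in churn_prob:
    print(f"{p:.2f} -> {segment(p)}")
```

Each segment then gets one matched action, rather than a hand-picked response per customer.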
The Limitations of Explainable AI
It’s important to note that while explainable methods are far better than opaque ones for achieving business outcomes, they aren’t without limitations. First, for complex machine learning models, the only form of explainability typically available is feature importance. It’s difficult to understand precisely how the features interact within the model to generate a final prediction.
And, of course, even explainable AI is not useful if the features are based on poor-quality data. If you don’t have accurate, meaningful features based on clean, well-prepared data, your model’s performance and explainability will both suffer.
This issue is why data science experts are increasingly turning toward a data-centric mindset, which emphasizes the importance of high-quality data and well-constructed features. (I’ve suggested elsewhere that this perspective should be expanded beyond natural language processing/computer vision applications to tabular data as well.) While many data scientists spend a great deal of time tinkering with models, investing in data preparation may be much more beneficial to model performance, explainability and business outcomes.
However it’s achieved, model explainability enhances understanding of important business processes, and it empowers both high-level and individual-level actions informed by knowledge of the future. In a time when businesses need as much foresight as possible amid rapidly changing market conditions, explainable AI offers a window into current and future trends that’s invaluable.