How Human Trust Varies with Different Types of ‘Explainable AI’

31 Jan 2020 12:00pm

Autonomous machines — powered by artificial intelligence — are increasingly making their presence known in all corners of our lives, whether it’s doing automated fabrication on the factory floor, driving us to our destination, delivering goods, or even assisting with precision surgery.

But despite all these remarkable instances of how advances in AI and robotics are making our lives easier, many of us are still leery about trusting machines, much less entrusting them with our lives. After all, many experts bemoan the fact that AI often functions like a “black box” of sorts; it automatically makes crucial decisions without really revealing the why and how behind those choices. Not surprisingly, this kind of opaqueness can lead to serious problems like algorithmic bias coming out of the woodwork, thus reinforcing that sense of distrust.

If human trust in intelligent devices is to be cultivated, then this lack of transparency behind machine predictions must be addressed; hence the growing need for so-called explainable AI (XAI). But as researchers at the University of California, Los Angeles (UCLA) recently demonstrated, that trust depends not only on providing humans with a clearer explanation of why a machine does what it does, but also on how that information is presented.

Not All Explanations Are Created Equal

As Mark Edmonds, a UCLA doctoral student in computer science and lead co-author of the paper published in Science Robotics, told us, any domain where a computer makes critical decisions affecting human safety would benefit immensely from explainable AI.

“But if people do not trust these systems, they may be much less likely to adopt them,” he cautioned. “Thus, the goal of explainable AI is to produce AI systems that are more interpretable, transparent, and trustworthy, which are easier to diagnose and fix than black-box models, and are therefore more robust and reliable.”

To begin their study, the team had a human demonstrator wearing a tactile glove train a robot to open a pill bottle with a safety twist cap. Sensors in the glove record the positioning and forces of the hand, and that data is translated into two different model representations: first, an “action grammar” model that breaks down the step-by-step structure of the task; and second, a haptic model that predicts the robot’s next steps based on force-sensing feedback from its gripper. The team found that the robot was most successful at opening the pill bottle when it combined the grammar and haptic models.
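The fusion of the two models can be pictured with a minimal sketch: assume each model assigns a probability to candidate next actions, and the robot favors actions that both models endorse. The action names, scores, and product-based fusion rule below are illustrative assumptions for this sketch, not the paper’s actual implementation.

```python
# A minimal sketch of combining a symbolic "action grammar" model with a
# haptic model. Assumes each model scores candidate next actions with a
# probability; the real system in the Science Robotics paper is far
# richer than this toy example.

def combine_predictions(grammar_probs, haptic_probs):
    """Pick the next action that both models jointly favor.

    grammar_probs: dict mapping action -> score under the parsed task grammar
    haptic_probs:  dict mapping action -> score under current force feedback
    """
    candidates = grammar_probs.keys() & haptic_probs.keys()
    # Multiply the two scores, so an action must be plausible under BOTH
    # the task structure and the sensed forces to rank highly.
    return max(candidates, key=lambda a: grammar_probs[a] * haptic_probs[a])

# Hypothetical scores: the grammar expects a twist at this stage of the
# task, and the gripper's force feedback also points toward twisting.
grammar_probs = {"push": 0.2, "twist": 0.6, "pull": 0.2}
haptic_probs = {"push": 0.1, "twist": 0.7, "pull": 0.2}
print(combine_predictions(grammar_probs, haptic_probs))  # -> twist
```

The design point is that neither signal alone suffices: the grammar captures task structure but is blind to the physical state, while the haptic model senses forces but not the overall plan.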

The second part of the study examined which types of explanations of machine decisions humans consider more trustworthy. The team asked 150 UCLA students, divided into groups, to gauge the trustworthiness of the robot performing this bottle-opening task. To establish a baseline, all 150 students were first shown a video of the robot opening the bottle. The groups then received different explanations of the robot’s actions: one got the baseline video with no additional explanation; one got a symbolic action sequence; another got haptic information about the poses and forces; and another got a blend of both symbolic and haptic explanations. These representations were also compared with a text-only description of the robot’s actions.

Overview of the stages of the experiment, showing demonstration, learning, evaluation, and explainability.

A visual representation of the “action grammar” model, using data from human demonstrations.

The team found that those who were presented with the combination of the symbolic and haptic explanations trusted the robot the most, while those who got no explanation at all trusted the robot the least. In follow-up testing, the team also found that those who trusted the robot the most were the ones who could most accurately predict the steps the robot would take when opening a new pill bottle. Surprisingly, the text-based explanation did little to cultivate human trust; trust ratings in that group did not differ significantly from the baseline video condition.

Based on their findings, the team believes there needs to be a greater emphasis on finding ways to increase human trust in AI, especially through explanations that are more comprehensive and delivered in real time.

“For future work, the field should further examine and prioritize making models that are both performant and provide explanations,” said Edmonds. “This is in stark contrast to most current AI that only focus on performance. Our work highlights the need to make both a priority in model design. Providing an understanding of what humans find trustworthy will ease AI’s adoption into society and our daily lives. Humans are likely to want to trust a system before it takes over safety-critical activities. Trust is also fundamental for collaboration: if two AI agents don’t trust each other, why would they work together?”

Feature image by silviarita from Pixabay; other images: UCLA
