Machine Learning

Researchers Build an ‘Interpretable’ AI That Shows How It Thinks

21 Nov 2019 11:00am

The use of machine learning is increasing as automation becomes more widespread in our workplaces, financial institutions and even courts of law — telling us whom to hire, whom to lend money to, and who might re-offend. But it’s becoming painfully clear that these complex algorithms can conceal any number of hidden biases — leading them to inadvertently discriminate against people based on their gender or race — oftentimes with terrible, life-changing consequences. The problem is that such AI systems are notoriously opaque; more often than not, the mechanisms and reasoning behind their predictions aren’t immediately apparent, even to the people who created these systems.

So it’s little wonder that a growing number of experts are now working to build what is called “interpretable” or “explainable” AI, where the processes that underlie machine predictions are made more transparent and therefore more understandable (at least by us humans). Aiming to better understand how and why machines classify images the way they do, one research team from Duke University created a new deep learning neural network whose reasoning process can be deconstructed, analyzed and understood more easily than comparable models. In particular, they trained their AI on thousands of different bird images, so that it would not only correctly identify various bird species, but also “show” the steps it took to arrive at its conclusion.

“Our goal was to design deep neural networks for image classification in computer vision that are not black boxes,” explained Cynthia Rudin, a professor of computer science at Duke University and head of the Prediction Analysis Lab who directed the research. “These networks use a form of case-based reasoning, where they reason about a current image in terms of its parts, and how similar these parts are to parts of prototypical past cases within its memory.”

Network architecture of ProtoPNet.

For their experiments, the team’s paper explains how they utilized a convolutional neural network (CNN), a type of deep learning network loosely inspired by the brain’s visual system, and one that is often used in image and video recognition and classification, recommendation engines, medical image analysis, and natural language processing. This network, dubbed ProtoPNet, was then fed over 11,000 images of 200 bird species, including sparrows, woodpeckers and hummingbirds. This was done without the researchers explicitly telling the model the identifying characteristics of each species, whether a certain beak shape or feather color. Instead, the network learns on its own to pick out prominent visual patterns as ‘prototypical’ parts, which it can then compare against the parts of new images it is asked to classify.

“For instance, the network might explain that an image of a bird contains a clay-colored sparrow, because the head of that bird looks like the head of a prototypical clay-colored sparrow that it has seen before, and because the feather pattern on the wing of the bird looks like the wing pattern of a different prototypical clay-colored sparrow,” said Rudin.
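The part-to-prototype comparison described above can be sketched in a few lines of NumPy. This is a simplified illustration, not the team’s actual implementation: the function names, the prototype dimensions, and the tiny linear classifier are all invented for this example, though the log-ratio similarity score follows the form used in the ProtoPNet paper.

```python
import numpy as np

def prototype_similarity(feature_map, prototype, eps=1e-4):
    """For each spatial position in a conv feature map, score how
    closely that patch matches one learned prototype vector.
    feature_map: (H, W, D) array of patch features; prototype: (D,).
    The log ratio grows large as the squared L2 distance shrinks."""
    dists = np.sum((feature_map - prototype) ** 2, axis=-1)  # (H, W)
    return np.log((dists + 1.0) / (dists + eps))

def classify(feature_map, prototypes, class_weights):
    """Take the best (max) activation of each prototype anywhere in
    the image, then combine those scores into class logits.
    prototypes: (P, D); class_weights: (C, P)."""
    scores = np.array([prototype_similarity(feature_map, p).max()
                       for p in prototypes])                 # (P,)
    return class_weights @ scores                            # (C,) logits
```

Intuitively, each prototype acts like a remembered “head of a clay-colored sparrow” or “wing pattern” patch: the image gets a high score for a class when some part of it sits very close, in feature space, to one of that class’s prototypes.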

Diagram showing how ProtoPNet lays out the reasoning behind its identification process.

Another useful feature the team came up with is the use of “activation maps,” which resemble color-coded infrared heat maps and show which parts of the image correspond most strongly to avian features the network has seen before. In this fashion, the model reveals its internal reasoning process in real time, making it an “interpretable” AI, in contrast to “black box” AI where analysis is done after the fact, noted Rudin: “[T]he network reasons by explaining that ‘this’ part of the image looks like ‘that’ part of another image it has seen before.”

“Activation maps” showing which parts of an image correspond most to previously learned bird species characteristics.
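An activation map of this kind can be produced by taking the coarse grid of prototype-similarity scores and stretching it back up to the input image’s resolution, so the hot spots line up with the pixels that triggered them. The sketch below uses nearest-neighbor scaling for simplicity; the function name and the upsampling choice are assumptions for illustration (a real implementation would typically use bilinear interpolation for a smoother heat map).

```python
import numpy as np

def activation_map(similarity, image_size):
    """Upsample a coarse (H, W) grid of prototype-similarity scores to
    the input image resolution via nearest-neighbor scaling, producing
    a heat map that marks where the prototype matched most strongly.
    similarity: (H, W); image_size: (height, width) of the input."""
    H, W = similarity.shape
    ih, iw = image_size
    rows = np.arange(ih) * H // ih   # source row for each output row
    cols = np.arange(iw) * W // iw   # source col for each output col
    return similarity[np.ix_(rows, cols)]
```

Overlaid on the original photo, the highest-valued region of this map is exactly the patch the network says “looks like” a stored prototype, which is what lets a human check the comparison.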

In tests against comparable AI models that lack this interpretability, the team found that their network identified the correct species up to 84 percent of the time, performing on par with similar networks while adding the advantage of transparency.

The team has made its code publicly available to other researchers, and is now adapting the approach for medical imaging.

“Neural networks are starting to be used widely for radiology problems, but it’s not always clear whether these models are trustworthy for a particular prediction,” Rudin told us. “Some of these problems are very difficult, even for doctors — such as tumor identification in mammography — so it could be useful to have a dialogue between the computer and the human as to what the state of the patient actually might be, and whether to order a biopsy. We are hoping the interpretable neural networks can explain why they are making a decision so that a doctor knows whether or not to trust it. Also, the machine might see something that the doctor doesn’t, so it is useful to have both perspectives — the human and the machine.”

Images: Duke University
