MIT’s New AI Tackles Loopholes in ‘Fake News’ Detection Tools

Oct 25th, 2019 12:22pm

The spread of false information has emerged as a pervasive force in recent years, upsetting elections, disrupting democratic societies, and further dividing people into fractious groups stubbornly entrenched in destructive "us-versus-them" ideologies. With an estimated 20% to 38% of news stories shared on social media platforms deemed bogus, disinformation has, unfortunately, become the new norm, and it is getting harder and harder to separate the truth from the "fake news" floating around, whether in written articles, photographs, or video.

To counter the problem, experts have come up with a variety of AI-powered "fake news" detectors. Some of these tools identify deceptive news articles by analyzing whether the news source itself has a consistent record of truthfulness, while other tools learn to sniff out machine-generated disinformation by generating such text themselves and then learning from it. These approaches still leave loopholes, however: they either fail to evaluate the underlying veracity of a piece, or they mistakenly assume that all machine-generated text is false, even when the generated text incorporates truthful facts.

Confronted with these apparent flaws in existing "fake news" detectors, a team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a new algorithm that fact-checks a news piece against an external source, while also addressing hidden biases that might cause the algorithm to incorrectly flag a truthful statement as false.

As the study's lead author and CSAIL Ph.D. student Tal Schuster explained to The New Stack: "Some of the current detectors rely on identifying the source of a text as a surrogate to determine if it's fake or real. However, there are two issues with that approach: [first], the same source, whether it's a human reporter or an automatic text generator, can create both truthful and wrong, misleading articles. Identifying the source of the text won't help to distinguish between those two. [Second], users who want to generate fake news can imitate other sources in order to avoid detection."

Improving Automated Fact-Checking

The team points out in their first paper that these presumptions also overlook legitimate uses of such automatic text generators, such as auto-completion, text modification, question-answering, text simplification and summarization. Texts generated by machines in such situations would automatically be tagged by detectors as false, even if their content is true.

To prove their point, the team used an automatic text generator to create a summary of an article about NASA scientists collecting data on coronal mass ejections. Despite the factual accuracy of the generated summary, a “fake news” detector marked it as providing false information, demonstrating that it could not tell whether machine-generated text is conveying true or false data.

Having shown that this loophole exists, the team turned their attention to potential biases in FEVER (Fact Extraction and VERification), a dataset of human-annotated true and false statements cross-checked against Wikipedia articles that is widely used by machine learning researchers. Their analysis showed that many of the false statements written by human annotators contained "giveaway" phrases like "did not" and "yet to." As a result, models trained on FEVER sometimes incorrectly labeled statements containing these words as false, even when they were actually true, because the models evaluated only the phrasing of the text rather than checking the statement's veracity against an external source.
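To make the idea concrete, here is a minimal sketch, written for this article rather than taken from the papers, of how such "giveaway" phrases can be surfaced: count how strongly each bigram in the claims correlates with the "refuted" label, without ever looking at the evidence. The toy claims, labels, and threshold below are illustrative assumptions, not FEVER data.

```python
# Minimal sketch: surface "giveaway" bigrams that correlate with the REFUTES
# label in a FEVER-style claim dataset. Toy data stands in for the real corpus.
from collections import Counter

claims = [
    ("Tim Berners-Lee did not invent the World Wide Web.", "REFUTES"),
    ("The film is yet to be released.", "REFUTES"),
    ("Paris is the capital of France.", "SUPPORTS"),
    ("The Moon orbits the Earth.", "SUPPORTS"),
]

def bigrams(text):
    tokens = text.lower().split()
    return [" ".join(pair) for pair in zip(tokens, tokens[1:])]

total, refutes = Counter(), Counter()
for claim, label in claims:
    for bg in bigrams(claim):
        total[bg] += 1
        if label == "REFUTES":
            refutes[bg] += 1

# Flag bigrams that appear in REFUTES claims far more often than the base rate;
# on real FEVER data, phrases like "did not" and "yet to" stand out this way.
base_rate = sum(1 for _, lbl in claims if lbl == "REFUTES") / len(claims)
for bg, count in total.items():
    p_refutes = refutes[bg] / count
    if p_refutes > base_rate:
        print(f"{bg!r}: {count} occurrence(s), {p_refutes:.0%} in REFUTES claims")
```

A claim-only classifier that keys on such phrases can look accurate on the original test split while never consulting evidence at all, which is exactly the brittleness the researchers describe.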

To test this further, the team created a de-biased set of data based on FEVER. When they evaluated fake news detection models on it, they were surprised to find that accuracy dropped from 86% to 58%, a decline the researchers attribute to the biases the models had absorbed during their initial training.

“Fact-checking datasets are difficult to find,” explained CSAIL Ph.D. student and paper co-author Darsh Shah. “The ones which exist and are manually created, such as FEVER, are extremely biased. Thus, models trained on such datasets are also biased and can not be applied to the real world. These models are brittle and would be easy to fool.”

We know that hidden biases can influence machine learning models and affect human lives profoundly, so it’s understandable why unbiased fact verification is becoming increasingly urgent in mass media. To address this problem, the team then created a new fact-checking algorithm that was trained on their de-biased dataset, which is described in their second paper. “[Our] de-biasing algorithm… re-weights dataset instances so that the impact of giveaway phrases is reduced,” said Shah. “Trained using this method, models consistently outperform previous approaches, especially in realistic settings.”
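Shah's description of re-weighting can be illustrated with a short sketch. The following is an assumption-laden approximation, not the paper's actual algorithm: a "bias-only" model that sees the claim but none of the Wikipedia evidence is trained first, and each training example is then down-weighted in proportion to how confidently that bias-only model already predicts its gold label, so giveaway phrasing alone carries less weight when the full fact-checking model is trained.

```python
# Minimal sketch of instance re-weighting (illustrative; the paper's exact
# scheme differs). Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

claims = [
    "Tim Berners-Lee did not invent the World Wide Web.",
    "The film is yet to be released.",
    "Paris is the capital of France.",
    "The Moon orbits the Earth.",
]
labels = ["REFUTES", "REFUTES", "SUPPORTS", "SUPPORTS"]

# Bias-only model: sees the claim text, but none of the Wikipedia evidence.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(claims)
bias_model = LogisticRegression().fit(X, labels)

# Down-weight examples the bias-only model already gets right with confidence:
# weight = 1 - P(gold label | claim only).
probs = bias_model.predict_proba(X)
label_index = {lbl: i for i, lbl in enumerate(bias_model.classes_)}
weights = [1.0 - probs[i][label_index[lbl]] for i, lbl in enumerate(labels)]

# These weights would then be passed (e.g. as sample_weight) when training the
# full claim-plus-evidence fact-checking model.
for claim, w in zip(claims, weights):
    print(f"{w:.2f}  {claim}")
```

Examples whose labels are predictable from phrasing alone end up with small weights, pushing the downstream model to rely on the retrieved evidence instead.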

In the future, the team suggests that larger, unbiased training datasets are needed to help experts develop better and more robust automatic "fake news" detection models, preferably ones that can adapt to different languages and subjects and draw on evidence from a wide range of reputable sources beyond Wikipedia. For now, the team has made their AI models available online, in the hope that other researchers will incorporate them into their own projects.

“This work is an eye-opener to all other works in fact-checking,” said Shah. “High performance on datasets would not necessarily imply real-world applicability. Apart from this necessary re-evaluation, which our work points to, our de-biasing algorithm could also be an option to reduce the effect of potential biases to make sure that the final model can perform fact-checking well in the wild.”

Read the team’s first paper and second paper, and find their model here.

Images: Marcus P. via Unsplash
