We’ve witnessed growing political upheaval in the last few years, thanks to the proliferation of “fake news” online, much of which is disinformation deliberately designed to spread virally and mislead public opinion for political ends. Research shows that, on average, a false story reaches 1,500 people six times more quickly than a factual one, especially when it relates to politics, and particularly when bots are used to automate its propagation.
It’s no wonder, then, that many experts are looking for ways to keep up with the deluge of falsehoods by automating the fact-checking process. So far, we’ve seen algorithms that can assess the truthfulness of an article against external sources, tools that evaluate whether entire news sites are consistently factual, and systems that learn to detect convincingly written fake news by first generating it. Researchers at the University of Waterloo in Canada are now adding another piece to the puzzle: a fake news detection tool that uses deep learning algorithms to verify whether the claims made in a news article are supported by other articles on the same subject. If the claims are not substantiated by other articles, especially those from reputable news sources, then the story is likely to contain false information.
“There are enormous amounts of content being put online every day, and it is humanly impossible to sift through all that content to identify fake news and false information,” said Alexander Wong, a professor of systems design engineering and one of the authors of the paper, along with Chris Dulhanty, Jason Deglint, and Ibrahim Ben Daya. “By building an AI that can automatically check claims made in posts or stories against other posts and stories, it can be a powerful tool for augmenting human fact-checkers by flagging potential fake news and false information, thus allowing them to fact-check faster and more reliably in the fight against the spread of disinformation.”
Automating Stance Detection
As the team notes, an automated fact-checking system involves several sub-tasks: retrieving documents from a variety of sources that might either support or contradict a claim; detecting the stance of each document with respect to that claim; assessing the reputation, and therefore the trustworthiness, of each source; and claim verification, which weighs both the stance of each article and the reputation of its source to establish how truthful the claim is.
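To make the division of labor concrete, here is a minimal sketch of how those sub-tasks might compose. Everything here is illustrative, not the Waterloo team’s code: the `REPUTATION` table, the `verify_claim` function, and its thresholds are all assumptions, and the stance values are assumed to come from a separate stance-detection model.

```python
from dataclasses import dataclass

@dataclass
class Article:
    source: str   # e.g. a domain name
    stance: str   # "support" or "refute", as output by a stance detector

# Placeholder reputation scores; a real system would derive these
# from curated source-quality data rather than a hard-coded table.
REPUTATION = {"reuters.com": 0.95, "example-blog.net": 0.30}

def verify_claim(articles, support_threshold=0.5):
    """Combine per-article stance with source reputation into a single
    verdict for the claim, via a naive reputation-weighted vote."""
    if not articles:
        return "unverified"
    score = 0.0
    weight = 0.0
    for a in articles:
        rep = REPUTATION.get(a.source, 0.1)  # unknown sources count little
        score += rep if a.stance == "support" else -rep
        weight += rep
    ratio = score / weight
    if ratio > support_threshold:
        return "likely true"
    if ratio < -support_threshold:
        return "likely false"
    return "uncertain"

articles = [
    Article("reuters.com", "refute"),
    Article("example-blog.net", "support"),
]
print(verify_claim(articles))  # → likely false
```

Note how the weighted vote lets one refutation from a high-reputation source outweigh support from a low-reputation one, which mirrors the article’s point that substantiation by reputable outlets matters most.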
In particular, the team’s work focuses on a key area of research called stance detection. Given a claim in a news story, as well as other stories on the same subject, the team’s system determines whether those external sources support or refute the principal claim, with an impressive 90% accuracy rate. By seeing at a glance which articles support the target claim, human fact-checkers can quickly and easily deduce whether the story in question is false reporting. In contrast, previous approaches relied on manual tuning that was both time-consuming and ill-suited to capturing the more complex aspects of identifying disinformation.
The system was built on a large-scale neural network language model: a pre-trained, open source, deep bidirectional transformer for natural language processing (NLP) known as RoBERTa (Robustly Optimized BERT Pretraining Approach). Crucially, that means the system considers the context on both sides of each word at once, rather than reading text in a single direction, giving it a deeper grasp of language context and flow than unidirectional models. The team’s system was then trained and evaluated on data from the Fake News Challenge (FNC-1), a benchmark dataset for stance detection.
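As a rough illustration of the data a stance classifier consumes, the sketch below uses the four stance labels from the FNC-1 task and collapses per-article predictions into a quick signal for a fact-checker. The pairing and aggregation functions are assumptions for illustration; the actual RoBERTa fine-tuning step (e.g. via a library such as Hugging Face Transformers) is deliberately left out.

```python
# FNC-1 stance labels: how an article body relates to a headline/claim.
FNC1_LABELS = ("agree", "disagree", "discuss", "unrelated")

def make_pairs(claim, article_bodies):
    """Pair one claim with each candidate article, the sentence-pair
    input shape a fine-tuned classifier would consume."""
    return [(claim, body) for body in article_bodies]

def summarize_stances(predicted):
    """Collapse per-article stance predictions into a quick signal for
    a human fact-checker, ignoring 'discuss' and 'unrelated'."""
    agree = sum(1 for s in predicted if s == "agree")
    disagree = sum(1 for s in predicted if s == "disagree")
    if agree + disagree == 0:
        return "no clear stance"
    if agree > disagree:
        return "mostly supported"
    if disagree > agree:
        return "mostly refuted"
    return "split"

# Hypothetical predictions from a stance model for four related articles:
print(summarize_stances(["disagree", "disagree", "discuss", "agree"]))
# → mostly refuted
```

The “discuss” and “unrelated” labels matter in practice: many retrieved articles mention a claim without taking a position, and filtering them out keeps the support/refute signal clean.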
“We were surprised at how well [our model] was able to learn the nuances and linguistic relationships between claims and articles to reliably determine if claims are indeed supported by articles,” Wong told us. “This just speaks to the power of the recent significant advances in large-scale neural network language AI and their ability to capture language.”
For now, the team is working to improve the training of their model by feeding it a much larger set of claims and stories, with a particular focus on media outlets outside the English language and the Western world, so that it can better capture linguistic and cultural differences in news reporting from around the world.
As one can imagine, developing more effective AI tools will be vital in the ongoing global fight against disinformation, said Wong: “This work demonstrates that powerful automated screening tools can be built to curb the ever-increasing amounts of fake news and false information floating around the internet and social media that can lead to real societal, ethical, and even physical harm around the world. Getting these tools in the hands of journalists and fact-checkers can empower them to inform themselves, as well as the general public, about the truth behind all the claims and headlines out there.”
Feature image by Kayla Velasquez via Unsplash.