AI in Sports: Using ML to Spot Online Harassment of Athletes
Artificial Intelligence is now part of the fan experience for professional sports. ESPN reported that during Wimbledon’s tennis championships this year, IBM “will provide AI-generated captions and audio in their three-minute video highlights reels.”
But AI already has an even more important job: hiding and reporting abusive comments made on the social media feeds of athletes.
In a mid-June statement this year, FIFA President Gianni Infantino hailed an AI-powered tool from the data science company Signify Group for its ability to identify the perpetrators of hate speech, and not just report them to the social media sites where the comments are made. “[W]e are reporting them to the authorities so that they are punished for their actions.”
David Aganzo, the president of FIFA’s worldwide players’ organization FIFPRO, added that the tools would be active for the FIFA Women’s World Cup (which begins July 20th). Going forward, all 211 of FIFA’s member associations will also have full access to the tool. Signify also works with the official labor union for basketball players in the NBA and WNBA.
And FIFA stresses that as it works with more sports organizations around the world, including the International Tennis Federation, “we recognise an opportunity for the industry to really get on the front foot to tackle online abuse.”
It’s not the only tool for fighting hate speech. FIFA also said it continued to “engage with social media platforms to encourage them to take more action.” And they’ve taken additional steps — including a site where individuals can proactively report abuse they’ve seen (using a “confidential, dedicated, highly secure and web-based whistleblowing system” provided by GAN Integrity).
“[T]his form of discrimination — like any form of discrimination — has no place in football,” said FIFA President Infantino. But he added that “We want our actions to speak louder than our words and that is why we are taking concrete measures to tackle the problem directly.”
Auto-Scanning 20 Million Comments
FIFA’s announcement also revisits a 58-page report on their earlier efforts in 2021. It explains how Signify uses a proprietary machine learning-based solution (which they call “Threat Matrix”) to help “mechanise and automate processes involved in the identification, categorisation and assessment of targeted, threatening and abusive online personal communications,” and to “monitor and analyse incoming social media posts aimed at an individual target and flag content and accounts that are worthy of further attention.” (Among the report’s findings? “Players who express solidarity for social issues almost always receive a torrent of abuse.”)
FIFA’s June announcement provided details on the tool’s performance during the 2022 World Cup. It scanned more than 20 million comments and posts on Facebook, Instagram, TikTok, Twitter, and YouTube, automatically spotting those which were abusive, offensive, or spam. These turned out to be about one out of every 70 comments, and they were instantly and automatically hidden. Beyond that, the tool also identified 19,636 posts (roughly one out of every 1,000) that were “abusive, discriminatory, or threatening.” The automated AI flagging was “strengthened by two layers of human analysis,” according to FIFA’s announcement, with the 19,636 identifications later confirmed by the service provider. These posts were officially reported, and “in many cases, the offending posts were removed…”
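The workflow FIFA describes — automatic classification, automatic hiding of the worst content, and escalation to human analysts before anything is reported — can be sketched as a simple pipeline. Everything below is illustrative: the keyword check is a toy stand-in for Signify’s proprietary ML model, and the labels and placeholder terms are invented for this example.

```python
from dataclasses import dataclass, field

# Illustrative outcome labels; Signify's real categories and models are proprietary.
ALLOW, HIDE, ESCALATE = "allow", "hide", "escalate"


def classify(comment: str) -> str:
    """Toy stand-in for an ML classifier, using placeholder keyword matching."""
    text = comment.lower()
    if any(term in text for term in ("spamlink", "buy followers")):
        return HIDE          # abusive/offensive/spam: hidden automatically
    if "threatword" in text:  # placeholder for threatening language
        return ESCALATE       # queued for human analysts
    return ALLOW


@dataclass
class ModerationPipeline:
    hidden: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)  # humans confirm before reporting

    def process(self, comment: str) -> str:
        label = classify(comment)
        if label == HIDE:
            self.hidden.append(comment)        # hidden before fans ever see it
        elif label == ESCALATE:
            self.review_queue.append(comment)  # FIFA's setup adds two layers of human analysis here
        return label


pipeline = ModerationPipeline()
for c in ["great match!", "buy followers now spamlink", "threatword to the keeper"]:
    pipeline.process(c)

print(len(pipeline.hidden), len(pipeline.review_queue))  # prints "1 1"
```

The key design point mirrored from FIFA’s description is that the automated stage only hides content; anything severe enough to report goes through a human review queue first.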
FIFA’s announcement says that “More than 300 individuals who made abusive, discriminatory, or threatening posts/comments during the tournament have been verifiably identified and this information will be shared with the relevant member associations and jurisdictional law authorities to facilitate real-world action being taken against offenders.” (Signify describes this as “putting together evidence packages for clubs to review with law enforcement.”)
And in the future, the tool may even lead to additional measures. FIFA says it will also “assess how it can restrict offenders from purchasing tickets to FIFA World Cup 2026 once ticketing terms and conditions for that tournament have been finalised.”
Since the 2022 event, the AI-powered tool has been used at two more FIFA events — the Club World Cup Morocco 2022 and the 2023 international youth football championship in Argentina. It’s also being offered to the women participating in the FIFA “esports development” event, the FAMEHERGAME boot camp in Zurich.
FIFA says there are also now plans to upgrade the tool — “strengthened by insights and trends” from the 2022 World Cup in Qatar, and augmented to watch for phrases “that have been historically targeted at female footballers.”
Educating Future Generations
The need is clear. FIFA’s data from 2021 showed that two out of three male footballers “were targeted with some form of discriminatory message.” But Infantino says there’s now also a larger goal: “to educate current and future generations who engage with our sport on social media.”
FIFA wants to be a player in the long game, continuing an ongoing fight against hate speech that involves some of the world’s top organizations. In 2019 the United Nations announced a “strategy and plan of action on hate speech,” warning of weaponized rhetoric that “stigmatizes and dehumanizes minorities, migrants, refugees, women, and any so-called ‘other’… [W]ith each broken norm, the pillars of our common humanity are weakened.” Their announcement included plans to “monitor, collect data, and analyze hate speech trends,” but also to meet with relevant private-sector partners (noting that “Most of the meaningful action against hate speech will not be taken by the UN alone.”)
Last year on the three-year anniversary of that announcement, the UN proclaimed June 18 as “The International Day for Countering Hate Speech,” inviting governments, international organizations, and other groups to hold events promoting strategies to counter hate speech. “[W]e all have the moral duty of speaking out firmly against instances of hate speech and play a crucial role in countering this scourge.”
And it was the same day FIFA announced it had teamed up with the worldwide players’ organization FIFPRO to create the automated speech-recognizing “in-tournament moderation service” for the social media accounts of FIFA’s players (both men and women).
So the battle continues. But to fully end online abuse of professional athletes, Signify sees a clear game plan requiring “a partnership with the platforms and law enforcement, smart use of technology, co-working across and between sports, and an approach that makes consequences real and painful for the abusers.”
Signify calls the effort a potential “game changer in tackling online abuse,” and argues that the real benefit will be seen in what it does for the sports we love.
“Ultimately, this is about protecting athletes, protecting their mental health and well-being and creating a safe environment where their performance can be maximized.”