A dubious link from a friend. A headline too sensational to be true. A video that seems fake but you can’t be sure. As online misinformation grows harder to detect, new artificial-intelligence tools promise to help us separate fact from fiction. But do they actually work?
Not really, according to Dorsaf Sallami. For her doctoral research at Université de Montréal’s Department of Computer Science and Operations Research, she examined the limitations of AI systems designed to detect fake news.
Her conclusion: these tools have significant flaws that their technical performance often masks.
She detailed her findings in a paper published last fall in the proceedings of an international conference on AI, ethics and society, co-authored with her supervisor, Esma Aïmeur, and Professor Gilles Brassard.
A mirror, not a fact-checker
“Current AI systems for detecting fake news are built on a fundamental misconception,” Sallami said. “When AI flags content as false, it doesn’t fact-check as a journalist would. It calculates probabilities based on its training data.”
In other words, these systems don't check claims against reality. Like a mirror, they reflect only what they've been shown, complete with all the biases and gaps in their training data.
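To make that concrete, here is a minimal sketch of how such a detector is typically built, using scikit-learn and a handful of made-up headlines. It illustrates the general approach, not the specific systems Sallami studied.

```python
# A toy "fake news" classifier: it learns word patterns from labelled
# examples and outputs a probability. At no point does it consult any
# external source of truth. All headlines below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Miracle cure erases all disease overnight",          # 1 = fake
    "Celebrity secretly replaced by clone, insiders say", # 1 = fake
    "City council approves new transit budget",           # 0 = real
    "Researchers publish study on sleep and memory",      # 0 = real
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# The "verdict" reflects only the training data above.
new_headline = "Miracle budget erases city transit overnight"
prob_fake = model.predict_proba([new_headline])[0][1]
print(f"P(fake) = {prob_fake:.2f}")
```

Nothing in this pipeline touches the outside world: a true headline phrased sensationally can score as "fake," and a fabricated one phrased soberly as "real."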
Sallami finds it paradoxical that tech giants are pouring resources into these tools. Meta is labelling content vetted by third-party fact-checkers, Google has launched a Gemini-based prototype, and X is using Grok to analyze information on its platform in real time.
“The arsenal is impressive, but what good is a system that boasts 95 per cent accuracy in the lab but fails under real-life conditions, especially if it violates users’ privacy, is biased against some media outlets, and can be weaponized to censor political opposition?” Sallami asked.
The effectiveness of these detectors is typically measured against technical benchmarks under controlled conditions. It's a bit like judging a car by its top speed, without considering safety, affordability or emissions, she said.
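The "top speed" in that analogy is usually a single accuracy figure computed on a held-out test set. A short sketch of what that number is, with invented labels and predictions, shows how little it captures on its own.

```python
# One benchmark number: the fraction of held-out examples a detector
# labels correctly. Labels and predictions are made up for illustration.
from sklearn.metrics import accuracy_score

test_labels = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # ground truth: 1 = fake
predictions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # a detector's outputs
print(accuracy_score(test_labels, predictions))  # 0.9

# This score says nothing about privacy costs, bias across outlets, or
# how the model behaves on misinformation unlike anything in the test set.
```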