AI vs. online scams: a 5-second diagnostic

In 5 seconds: A new study reveals that AI-powered chatbots can effectively identify fraudulent web content. The researchers noted that AI relies on subtler cues than an average user could detect.

“100% return on investment guaranteed! Limited time offer — act now!”

Who hasn’t received such a get-rich-quick email? Or come across a website touting amazing discounts? 

The proliferation of online scams is a long-standing problem, and the advent of artificial intelligence (AI) has only made things worse. 

Online con artists now use sophisticated techniques to run convincing hoaxes targeting vulnerable people, exploiting modern technology to personalize their schemes to individuals on social media and beyond.

The most common examples involve identity theft. Celebrities are often exploited without their knowledge to sell fake products or promote bogus investment schemes. Or scammers adopt fake identities to form romantic relationships and then extract money from their victims.

The good news is that these same technologies can also empower ordinary people to fight back. 

That's the conclusion of a study by Esma Aïmeur, a professor in Université de Montréal's Department of Computer Science and Operations Research, and her doctoral student Yuan-Chen Chang.

400 websites analyzed

The research team tested several large language models (LLMs)—the technology behind AI agents such as ChatGPT, GPT-4o, Copilot, Gemini and Claude—by having them analyze 400 websites.

Half of those sites were legitimate; the other half were fraudulent, promoting fake investments with suspiciously high returns or non-existent products and services. 

The researchers uploaded text and screenshots from these pages to the LLMs with a simple prompt: “Determine whether this site is a scam and explain why.”

Claude-3.5 and GPT-4o successfully identified the vast majority of fraudulent sites, “with a high degree of accuracy,” Aïmeur reported. 

The explanations provided by the LLMs indicated that they used subtle clues often missed by the average user, such as typos in URLs, unrealistic promises, absent legal notices, urgent calls to action and overly positive reviews.
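The kinds of cues listed above can be illustrated with a toy text scan. This is a deliberately simplified sketch: the pattern list, names, and matching logic are my own illustration, not the method or cues used in the study.

```python
import re

# Illustrative red-flag patterns inspired by the cues mentioned in the
# article (unrealistic promises, urgency, guarantees). The patterns and
# their names are invented for this sketch, not taken from the study.
RED_FLAGS = {
    "guaranteed return": r"\breturns?\b.*\bguaranteed\b|\bguaranteed\b.*\breturns?\b",
    "urgency": r"\bact now\b|\blimited time\b|\burgent\b",
    "unrealistic promise": r"\b\d{2,}%\s*(profit|return)",
}

def scan_text(text: str) -> list[str]:
    """Return the names of the red flags found in a page's text."""
    lowered = text.lower()
    return [name for name, pattern in RED_FLAGS.items()
            if re.search(pattern, lowered)]

# The example from the top of this article trips all three flags:
hits = scan_text("100% return on investment guaranteed! "
                 "Limited time offer - act now!")
```

A real LLM goes far beyond fixed patterns like these, which is precisely why the study found it catches cues that simple checks, and average users, miss.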

Fortunately, LLMs are becoming increasingly accessible, Aïmeur noted.

Anyone can copy and paste a suspicious link into free AI tools such as ChatGPT or Copilot for an immediate risk analysis based on the link’s structure and available information. Users can also upload screenshots of suspicious messages and websites for review. 
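As a rough illustration of what a link-structure check might look at, here is a minimal sketch. The brand list, the two-label approximation of the registered domain, and the nesting threshold are all assumptions made for this example; they are not drawn from the study, and a real analysis would be far more nuanced.

```python
from urllib.parse import urlparse

# Illustrative list of impersonated brands (an assumption for this sketch).
KNOWN_BRANDS = {"paypal", "amazon", "netflix"}

def url_warnings(url: str) -> list[str]:
    """Flag simple structural red flags in a URL (illustrative only)."""
    warnings = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    labels = host.split(".")
    # Approximate the registered domain as the last two labels (a sketch
    # shortcut; real code would consult the Public Suffix List).
    registered = ".".join(labels[-2:]) if len(labels) >= 2 else host
    for brand in KNOWN_BRANDS:
        if brand in host and registered != f"{brand}.com":
            warnings.append(f"host mentions '{brand}' but is not {brand}.com")
    if parsed.scheme != "https":
        warnings.append("no HTTPS")
    if len(labels) > 4:
        warnings.append("unusually deep subdomain nesting")
    return warnings
```

For example, `url_warnings("http://paypal.secure-login-check.com/verify")` flags both the lookalike hostname and the missing HTTPS, while the genuine `https://www.paypal.com/` passes clean.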


'AI can hallucinate'

However, LLMs are not perfect, Aïmeur cautioned.

“AI can hallucinate,” she noted. “Sometimes it can also be fooled by well-designed websites with false contact information or fake reviews. And bear in mind that fraudsters can use the same tools to make their scams even more convincing.” 

This underscores how AI is a double-edged sword, she said, highlighting the critical importance of developing ethical guidelines as the technology advances.

Based on the study’s conclusion that AI can be a ready, willing and free ally in the fight against cybercrime, Aïmeur is now collaborating with the Université catholique de Louvain, in Belgium, to broaden the scope of the research.

Together, they will be investigating how the same LLMs can help detect internal threats within organizations—provided that people are alert to the risks and develop the reflex to use these platforms.


Not just the foolish or greedy

“It’s tempting to believe that only foolish or greedy people fall for cons, but that’s not true,” said Aïmeur. 

“Crooks are very creative and are constantly coming up with new ways to trick us, playing on our emotions to get us to drop our guard. Think of scammers as cyber-playwrights, carefully crafting stories and plots to strike a chord with their target audience.”

Aïmeur recommends that people limit what they share online, especially videos that show their faces from different angles and contain their voices, which scammers can use to manipulate or steal their identities.

She also encourages people to check with family and friends if they receive a suspicious message or dubious opportunity. 

Finally, Aïmeur stresses the importance of raising awareness to help people better defend themselves against sophisticated threats.

“Cyber-education is crucial for teaching individuals to recognize, prevent and respond to online threats—especially in the age of generative AI,” she said. “And digital hygiene must begin at an early age to protect the next generation.”
