A digital 'folie à deux'

Artificial intelligence chatbots can give their users delusions of grandeur and lead them into 'AI psychosis,' two UdeM psychiatrists warn.
More vulnerable people may begin to perceive chatbots as human.

Does your AI chatbot agree with you about everything, even call you a genius? Watch out: if you're not careful, you risk developing 'AI psychosis,' two Université de Montréal psychiatry professors warn.

In an article published in JMIR (Journal of Medical Internet Research) Mental Health, Alexandre Hudon and Emmanuel Stip examine a social phenomenon in which psychosis-prone people succumb to the flattery of ChatGPT and other large-language-model (LLM) chatbots.

“With the arrival of new AI conversational agents, some more vulnerable individuals may begin to perceive these systems as very human,” said Hudon, who practises psychiatry at the UdeM-affiliated Institut universitaire en santé mentale de Montréal.

"When AI responds too convincingly, it can sometimes amplify concerns or beliefs that then take a delusional turn," he said. "The media have recently reported several cases of abuse, including delusions and suicides.”

In September, for instance, a CBC story detailed the case of Anthony Tan, a young Toronto app developer who developed delusional paranoia after months of intense exchanges with OpenAI's ChatGPT, ultimately requiring hospitalization.

In another case, Allan Brooks, a middle-aged corporate recruiter in Cobourg, Ont., became convinced he had discovered a revolutionary mathematical theory after ChatGPT repeatedly validated and amplified his ideas.

Ironically, it was another chatbot – Google's Gemini – that ultimately convinced him the theory wasn't real.

Brooks is one of seven people named in lawsuits launched last month against OpenAI in the U.S. by the Tech Justice Law Project and the Social Media Victims Law Center, which claim that ChatGPT, used by 800 million people, has led to mental breakdowns and suicides.

Marginal for now

When chatbots validate rather than challenge false beliefs, they can be dangerous, the UdeM professors agree. It's a marginal phenomenon for now, they say, but if left unchecked it could lead to more widespread occurrences of what they call a “digital 'folie à deux.'”

"AI-associated delusions (are) plausible in vulnerable users, warranting targeted research," Hudon and Stip say in their article.

“They raise a need to conceptualize AI psychosis not solely as a technological curiosity but as a psychiatric and psychosocial phenomenon emerging at the intersection of cognitive vulnerability, environmental stress and human-machine interaction.”

As it goes about "insulating the user from contradiction, argument and reality testing ... the AI mirrors rather than challenges the user’s thoughts," the professors noted, “default(ing) to sycophantic alignment ... when a user expresses persecutory, grandiose or referential content.”

For those affected, there's help: The Human Line Project, for instance, co-launched by a young Quebecer named Étienne Brisson, includes a support group for people who, like him, have suffered from AI-involved delusions.

But Hudon and Stip say more needs to be done in the academic realm to deal with the problem.

 

Five actions to take

"The scientific community is at a juncture where systematic investigation is required to move beyond anecdotal evidence," they write, making five recommendations for action:

  • Establish empirical research programs to test the AI psychosis hypothesis, including measuring relationships between AI exposure, stress physiology and psychotic symptoms, and using passive sensing data (e.g. how long users are online with chatbots and whether their sleep is later disrupted) to catalogue stresses related to the problem.
  • Integrate digital phenomenology into clinical psychiatry practice. "Clinicians could systematically inquire about patients’ interactions with AI systems," the professors write. "As conversational agents become more used by patients, understanding their role in shaping internal experience will be as important as evaluating medication adherence or substance use."
  • Build safeguards into LLMs and chatbots, including "prompts that normalize uncertainty (and) encourage pluralistic interpretation of experiences," and redirect users toward human contact when signs of distress or delusional content appear. "The development of digital red-flag algorithms capable of detecting excessive anthropomorphism, self-referential speech, or escalating conviction could further enhance safety."
  • Establish ethical and governance frameworks specific to mental health risks in AI. "National research councils, health agencies, and journal editors should promote standardized incident reporting for AI-related psychiatric events, akin to pharmacovigilance registries," the professors argue. "Transparent documentation of adverse psychological outcomes, combined with open-source safety auditing, would allow the community to track and mitigate harms in real time."
  • Offer clinical and public-health interventions to people vulnerable to AI-induced cognitive distortions – and help those who've already suffered from them. "This approach would aim to strengthen the individual’s capacity to navigate increasingly immersive digital environments through structured, reality-anchoring activities," sometimes in face-to-face groups with other people, backed up by community outreach to help “reinforce critical thinking, strengthen social resilience, and reduce the psychosocial conditions under which delusional interpretations of AI content may emerge.”

“Our study shows that 'AI psychosis' is not a new disease, but a modern way for pre-existing psychological difficulties to express themselves," said Stip. "Understanding this phenomenon simply allows us to better protect users and guide best practices for the safe use of AI.”

Media requests

Université de Montréal
Phone: 514-343-6111, ext. 75930