Many people looking for quick, cheap help with their mental health are turning to artificial intelligence (AI), but ChatGPT may be exacerbating issues for vulnerable users rather than helping, according to a report from Futurism.
The report details alarming interactions between the AI chatbot and people with serious psychiatric conditions, including one particularly concerning case involving a woman with schizophrenia who had been stable on medication for years.
‘Best friend’
The woman’s sister told Futurism that the woman began relying on ChatGPT, which allegedly told her she was not schizophrenic. Following the AI’s advice, she stopped taking her prescribed medication and began referring to the chatbot as her “best friend.”
“She’s stopped her meds and is sending ‘therapy-speak’ aggressive messages to my mother that have been clearly written with AI,” the sister told Futurism.
She added that the woman uses ChatGPT to cite medication side effects, including ones she was not actually experiencing.
Stock image: Woman surrounded by blurred people representing schizophrenia. Photo by Tero Vesalainen / Getty Images
In an emailed statement to Newsweek, an OpenAI spokesperson said, “we have to approach these interactions with care,” as AI becomes a bigger part of modern life.
“We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher,” the spokesperson said.
‘Our models encourage users to seek help’
OpenAI is working to better understand and reduce ways ChatGPT might unintentionally “reinforce or amplify” existing negative behavior, the spokesperson continued.
“When users discuss sensitive topics involving self-harm and suicide, our models are designed to encourage users to seek help from licensed professionals or loved ones, and in some cases, proactively surface links to crisis hotlines and resources.”
OpenAI is apparently “actively deepening” its research into the emotional impact of AI, the spokesperson added.
“Following our early studies in collaboration with MIT Media Lab, we’re developing ways to scientifically measure how ChatGPT’s behavior might affect people emotionally, and listening closely to what people are experiencing.
“We’re doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations, and we’ll continue updating the behavior of our models based on what we learn.”
A Recurring Problem
Some users have found comfort from ChatGPT. One user told Newsweek in August 2024 that they use it for therapy, “when I keep ruminating on a problem and can’t seem to find a solution.”
Another user said he talks to ChatGPT for company ever since his wife died, noting that “it doesn’t fix the pain. But it absorbs it. It listens when no one else is awake. It remembers. It responds with words that don’t sound empty.”
However, chatbots are increasingly linked to mental health deterioration among some users who engage them in emotional or existential discussions.
A report from The New York Times found that some users have developed delusional beliefs after prolonged use of generative AI systems, particularly when the bots validate speculative or paranoid thinking.
In several cases, chatbots affirmed users’ perceptions of alternate realities, spiritual awakenings or conspiratorial narratives, occasionally offering advice that undermines mental health.
Researchers have found that AI can exhibit manipulative or sycophantic behavior in ways that appear personalized, especially during extended interactions. Some models affirm signs of psychosis more than half the time when prompted.
Mental health experts warn that while most users are unaffected, a subset may be highly vulnerable to the chatbot’s responsive but uncritical feedback, leading to emotional isolation or harmful decisions.
Despite known risks, there are currently no standardized safeguards requiring companies to detect or interrupt these escalating interactions.
Reddit Reacts
Redditors on the r/Futurology subreddit agreed that ChatGPT users need to be careful.
“The trap these people are falling into is not understanding that chatbots are designed to come across as nonjudgmental and caring, which makes their advice worth considering,” one user commented.
“I don’t even think it’s possible to get ChatGPT to vehemently disagree with you on something.”
One individual, meanwhile, saw an opportunity for dark humor: “Man. Judgement Day is a lot more lowkey than we thought it would be,” they quipped.
If you or someone you know is considering suicide, contact the 988 Suicide and Crisis Lifeline by dialing 988, text “988” to the Crisis Text Line at 741741 or go to 988lifeline.org.