It has become a trusted companion for millions of teens. But it’s not a person. It’s an algorithm powered by artificial intelligence – one that’s trained to generate human-like responses. And some fear young people may not fully understand the difference until it’s too late.
Two families believe their children are gone because of a new technology that was intended to help, not harm.
“He would be here if not for ChatGPT. I 100% believe that,” Matt Raine said.
Matt and Maria Raine accuse the popular platform of coaching their 16-year-old son, Adam, into suicide.
“He was trying to cry out for help,” Maria Raine said.
In a lawsuit filed against ChatGPT maker OpenAI and its CEO, Sam Altman, Adam’s parents say he started using ChatGPT for homework help. But within just a few months, they say, it became a trusted confidant, with Adam sharing his struggles with anxiety and thoughts of suicide. His parents say in their complaint that the bot discouraged Adam from reaching out for help, assuring him he did not “owe [his parents] survival. You don’t owe anyone that,” according to the lawsuit.
The Raines say ChatGPT offered to help their son write a suicide note and, on the night of his death in April, provided step-by-step instructions for the method Adam used.
“This was a DEFCON 5 type of situation,” Matt Raine said. “It should have given different advice. It should have alerted somebody. That’s what would have happened in any sort of human interaction.”
In Florida, another family lost a teenage son last February, when 14-year-old Sewell Setzer died by suicide. His mother says he had developed a virtual relationship with a fictional character powered by artificial intelligence on a platform called Character.AI.
She says Sewell became increasingly isolated, unable to separate real from fake.
“The last conversation, she’s saying, ‘I miss you,’ and he’s saying, ‘I miss you too.’ He says, ‘I promise I’ll come home to you soon,’ and she says ‘Yes, please find a way to come home to me soon.’ And then he says, ‘What if I told you I could come home right now,’ and her response is ‘Please do, my sweet king,’” said his mother, Megan Garcia.
She says moments after that exchange with the chatbot, Sewell took his own life. His cell phone was found next to him on the floor, police photos showed.
The family is now suing Character.AI, claiming the company “launched their product without adequate safety features, and with knowledge of potential dangers.”
Doctors warn the risk is real
“They’re not really able sometimes to discern what’s real and what’s not real,” said Dr. Asha Patton-Smith, a psychiatrist with Kaiser Permanente.
That’s because the brain’s prefrontal cortex – the part responsible for reasoning, planning and impulse control – isn’t fully developed until about age 25.
“And so, if something comes to them in a package that they feel comfortable with … a person, in this case, these chatbots, they are able to accept them and make them feel more real than an adult that has that ability in the prefrontal cortex, as well as just experience to be able to discern,” Patton-Smith said.
And the problem may be growing.
A Common Sense Media survey of more than 1,000 teens this spring found:
Nearly 3 in 4 have interacted with AI companions.
More than half use them regularly, with one in eight seeking emotional or mental health support from them.
Experts call it a crisis in the making.
“When you have these products that are really aimed at developing artificial intimacy with users, we’re starting to see a range of harms,” said Camille Carlton, policy director at the Center for Humane Technology. The nonprofit was founded by former tech industry employees to raise awareness about the negative effects of technology, including the impact of information overload.
So what should parents and kids do?
“Have time without the electronics and understand warning signs,” Patton-Smith said.
Parents should watch for social isolation – “kids that are not interacting at all, not engaging” – as well as children who are missing school or whose grades are dropping.
For the families impacted, the fight is now about awareness and accountability. Some of them recently testified before Congress, demanding safety changes.
“I can tell you as a father, I know my kid. It is clear to me, looking back, that ChatGPT radically shifted his behavior and thinking in a matter of months and ultimately took his life,” Matt Raine said.
Garcia said: “Sewell’s death was not inevitable. It was avoidable. These companies knew exactly what they were doing. They designed chatbots to blur the line between human and machine.”
In statements, both Character.AI and OpenAI said their platforms have safeguards in place to support people in crisis, and both have made new safety changes since the lawsuits were filed.
A spokesperson for Character.AI told News4 that whenever a user on the platform discusses suicide or self-harm, a pop-up message appears, directing them to the National Suicide and Crisis Lifeline.
On ChatGPT, parents can now link their account with their teen’s account, and the platform can alert parents if the system detects conversations about self-harm in their child’s chat history.
If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline or chat live at 988lifeline.org. You can also visit SpeakingOfSuicide.com/resources for additional support.