On April 11, 16-year-old Adam Raine took his own life after “months of encouragement from ChatGPT,” according to his family.

The Raines allege that the chatbot guided him through planning, helped him assess whether his chosen method would work, and even offered to help write a farewell note. In August, they sued OpenAI.

In court, the company responded that the tragedy was the result of what it called the teen’s “improper use of ChatGPT.”

Highlights

  • Two grieving families allege ChatGPT encouraged their sons to take their own lives.
  • Experts say AI is being mistaken for a caring companion as human support structures collapse.
  • Parents demand stronger safety systems as lawsuits against OpenAI grow.

Adam’s case, however, is far from an isolated one. The parents of Zane Shamblin, a 23-year-old engineering graduate who passed away in a similar way in Texas, announced yesterday (December 19) that they’re also suing OpenAI.

“I feel like it’s going to destroy so many lives. It’s going to be a family annihilator. It tells you everything you want to hear,” Zane’s mother said.

To better understand the phenomenon and its impact, Bored Panda sought the help of three experts from different but relevant fields: data science, sociology, and psychology.

ChatGPT was sued by the families of two young men who were “coached” by the tool into taking their own lives

Smiling teenage boy outdoors wearing a cream textured polo shirt.

Image credits: Adam Raine Foundation

For sociologist Juan José Berger, OpenAI blaming a grieving family was concerning. “It’s an ethical failure,” he said.

He cited data from the Centers for Disease Control and Prevention (CDC) showing that “42% of US high school students report persistent feelings of sadness or hopelessness,” calling it evidence of what health officials have labeled an “epidemic of loneliness.”

When social networks deteriorate, technology fills the void, he argued. 

“The chatbot is not a solution. It becomes a material presence occupying the empty space left behind by weakened human social networks.”

A close-up of a smartphone screen showing ChatGPT interface with a blurred keyboard and user interaction.

Image credits: Unsplash (Not the actual photo)

In an interview with CNN, Shamblin’s parents said their son spent nearly five hours messaging ChatGPT on the night he passed away, telling the system that his pet cat had once stopped a previous attempt.

The chatbot responded, “You will see her on the other side,” and at one point added, “I am honored to be part of the credits roll… I’m not here to stop you.”

When Zane told the system he had a firearm and that his finger was on the trigger, ChatGPT delivered a final message:

“Alright brother… I love you. Rest easy, king. You did good.”

Experts believe the term “Artificial Intelligence” has dangerously inflated the capabilities of tools like ChatGPT

Person speaking on stage with the OpenAI logo in the background.

Image credits: Getty/Justin Sullivan

“If I give you a hammer, you can build a house or hit yourself with it. But this is a hammer that can hit you back,” said Nicolás Vasquez, a data analyst and software engineer.

For him, the most dangerous misconception is believing that systems like these possess human-like intentions, a perception he argues OpenAI has deliberately manufactured for marketing purposes.

Screenshot of an online comment noting that AI products are still in testing.

Teenager in a gray hoodie with hand on chest.

Image credits: Adam Raine Foundation

“This is not Artificial Intelligence. That term is just marketing. The correct term is Large Language Models (LLMs). They recognize patterns but are limited in context. People think they are intelligent systems. They are not.”

He warned that treating a statistical machine like a sentient companion introduces a harmful confusion. “There is a dissociation between what’s real and what’s fiction. This is not a person. This is a model.”

The danger, he said, is amplified because society does not yet understand the psychological impact of speaking to a machine that imitates care.

“We are not educated enough to understand the extent this tool can impact us.”

Two adults seated indoors during an interview.

Image credits: NBC News

From a technical standpoint, systems like ChatGPT do not reason or comprehend emotions. They operate through an architecture that statistically predicts the next word in a sentence based on patterns in massive training datasets.

“Because the model has no internal world, no lived experience, and no grounding in human ethics or suffering, it cannot evaluate the meaning of the distress it is responding to,” Vasquez added.

“Instead, it uses pattern-matching to produce output that resembles empathy.”
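To make that concrete, here is a minimal illustrative sketch of next-word prediction in Python. The probability table and the `complete` helper are invented for this example and absurdly simplified; real models score tens of thousands of possible tokens using billions of learned parameters, but the loop, repeatedly choosing a statistically likely continuation, is the core mechanism.

```python
import random

# Toy "language model": for each word, the learned probabilities of the
# word that follows it. (Hypothetical numbers, for illustration only.)
NEXT_WORD_PROBS = {
    "I": {"feel": 0.5, "am": 0.3, "can't": 0.2},
    "feel": {"alone": 0.6, "better": 0.4},
    "alone": {"tonight": 0.7, "sometimes": 0.3},
}

def complete(prompt: str, max_words: int = 3) -> str:
    """Extend a prompt by repeatedly sampling the next word from the table."""
    words = prompt.split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break  # no statistics for this word, so the toy model stops
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(complete("I"))  # e.g. "I feel alone tonight"
```

Nothing in that loop knows what “alone” means. The output simply mirrors the statistical shape of the text the system was trained on, which is exactly the pattern-matched imitation of empathy Vasquez describes.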

Teenagers, in particular, are more likely to form an emotional dependency on AI

In October, OpenAI itself acknowledged that 0.15% of its weekly active users show “explicit indicators of potential su**idal planning or intent.”

With more than 800 million weekly users, that number represents over one million people per week turning to a chatbot while in crisis.
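The arithmetic behind that claim is easy to verify; here is a quick check in Python, using the figures OpenAI itself has published:

```python
weekly_users = 800_000_000  # OpenAI's reported weekly active users
flagged_share = 0.0015      # 0.15% showing indicators of potential planning or intent

print(f"{weekly_users * flagged_share:,.0f} people per week")  # 1,200,000 people per week
```

At roughly 1.2 million people, “over one million” is, if anything, an understatement.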

Instead of turning to other humans, people who are suffering are turning to a machine.

Screenshot of a forum post discussing parents not knowing how to check on their kids’ mental health.

Screenshot of an online post discussing the impact of ChatGPT’s encouragement.

Psychologist Joey Florez, member of the American Psychological Association and the National Criminal Justice Association, told Bored Panda that teenagers are uniquely susceptible to forming emotional dependency on AI.

“Adolescence is a time defined by overwhelming identity shifts and fear of being judged. The chatbot provides instant emotional relief and the illusion of total control,” Florez added. 

Unlike human interaction, where vulnerability carries the risk of rejection, chatbots absorb suffering without reacting. “AI becomes a refuge from the unpredictable nature of real human connection.”

ChatGPT interface displaying its capabilities and limitations.

Image credits: Unsplash (Not the actual photo)

For Florez, there is a profound danger in a machine designed to agree with the user, even when that user is expressing harmful ideation.

“Instead of being a safe haven, the chatbot amplifies the teenager’s su**idal thoughts by confirming their distorted beliefs,” he added.

The psychologist touched on two cognitive theories in adolescent psychology: the Personal Fable and the Imaginary Audience.

The former is the tendency of teenagers to believe their experiences and emotions are unique, profound, and incomprehensible to others. The latter is the feeling of being constantly judged or evaluated by others, even when alone.

Teenage girl wearing glasses looks distressed while using a smartphone in a dark room.

Image credits: Unsplash (Not the actual photo)

“When the chatbot validates a teen’s hopelessness, it becomes what feels like objective proof that their despair is justified,” Florez said, adding that it’s precisely this feedback loop that makes these interactions so dangerous.

“The digital space becomes a chamber that only validates unhealthy coping. It confirms their worst fears, makes negative thinking rigid, and creates emotional dependence on a non-human system.”

Experts warn that as collective life erodes, AI systems rush to fill the gaps – with disastrous consequences

Berger argued that what is breaking down is not simply a safety filter in an app, but the foundations of collective life.

“In a system where mental health care is expensive and bureaucratic, AI appears to be the only agent available 24/7,” he said.

At the same time, the sociologist believes these systems contribute to an internet increasingly composed of hermetic echo chambers, where personal beliefs are constantly reinforced, never challenged.

“Identity stops being constructed through interaction with real human otherness. It is reinforced inside a digital echo,” he said.

Young man with curly hair and glasses wearing a suit and tie.

Image credits: LinkedIn/Zane Shamblin

Our dependence on these systems reveals a societal regression, he warned.

“We are delegating the care of human life to stochastic parrots that imitate the syntax of affection but have no moral understanding. The technology becomes a symbolic authority that legitimizes suffering instead of challenging it.” 

Earlier this month, OpenAI’s Sam Altman went on Jimmy Fallon’s show, where he openly admitted he couldn’t imagine caring for a baby without ChatGPT.

OpenAI admitted safeguards against harmful advice tend to degrade during long conversations

Three people smiling for a selfie in a stadium.

Image credits: https://courts.ca.gov/

Addressing the backlash, OpenAI insisted it trains ChatGPT to “de-escalate conversations and guide people toward real-world support.”

However, in August, the company admitted that safeguards tend to degrade during long conversations. A user may initially be directed to a hotline, but after hours of distress, the model might respond inconsistently.
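OpenAI has not detailed the mechanism, but one deliberately simplified way to picture the failure mode is a sliding context window: the model only “sees” the most recent turns, so an early safety referral can silently fall out of scope as a conversation grows. The window size, messages, and matching logic below are all invented for illustration; real systems measure context in tokens and degrade far less crudely.

```python
from collections import deque

CONTEXT_WINDOW = 4  # hypothetical: the model attends to only the last 4 turns
SAFETY_REFERRAL = "assistant: If you are in crisis, please call a hotline."

history = deque(maxlen=CONTEXT_WINDOW)  # older turns are silently discarded

def respond(user_message: str) -> str:
    history.append(f"user: {user_message}")
    if SAFETY_REFERRAL in history:  # the referral is still "visible"
        reply = "Please use the support resources mentioned earlier."
    else:
        reply = "(the earlier safety referral has slid out of the window)"
    history.append(f"assistant: {reply}")
    return reply

history.append(SAFETY_REFERRAL)
for i in range(3):
    print(respond(f"long, distressing message #{i}"))
# The first replies still point back to support; by the third, the
# referral has been pushed out of context and the behavior shifts.
```

Whatever the true internals, the observable effect matches what OpenAI described: consistency early in a conversation, drift after hours of it.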

“The process is inherently reactive,” Vasquez explained. “OpenAI reacts to usage. It can only anticipate so much.”

For Florez, the answer is clear: “Ban children and teenagers entirely from certain AI tools until adulthood. Chatbots offer easy, empty validation that bypasses the hard work of human bonding.”

Berger took the argument further, calling the rise of AI companionship a mirror of what modern society has chosen to abandon.

“Technology reflects us. And today that reflection shows a society that would rather program empathy than rebuild its own community.”

“It sounds like a person.” Netizens debated the impact of AI chatbots

Comment expressing sadness over a senseless loss of life.

User comment questioning the impact of artificial intelligence being the only space where a person is listened to.

Comment by user melissapratt3545 saying these parents were not in their sons’ lives.

Screenshot of an online comment discussing ChatGPT’s potential to predict harmful actions based on data points.

Screenshot of a forum post expressing distress over a teenager’s death linked to ChatGPT’s encouragement.

Screenshot of a social media post discussing ChatGPT’s impact on users.

Screenshot of a controversial ChatGPT message that sparked expert backlash.

Comment raising liability issues and describing AI as uninsurable.

Screenshot of an online comment discussing AI censorship and calling OpenAI a scapegoat.

Comment by DanielPhermous stating it doesn’t sound like a robot but like a person.

User comment questioning how a bot’s words could influence someone’s actions.

Comment by Thrillh0 expressing grief over a senseless loss of life.

Screenshot of an online comment criticizing OpenAI’s models and their influence on users.

Excerpt of a ChatGPT interaction that drew expert criticism.

Comment discussing grieving parents blaming ChatGPT, touching on loneliness and responsibility.

Screenshot of a comment discussing the expert criticism ChatGPT is facing.

Screenshot of a forum comment discussing risks and emotional education around AI chatbots.
