In August, a California family filed the first wrongful death lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company’s ChatGPT product had “coached” their 16-year-old son into committing suicide in April of this year. According to the complaint, Adam Raine began using the AI bot in the fall of 2024 for help with homework but gradually began to confess darker feelings and a desire to self-harm. Over the next several months, the suit claims, ChatGPT validated Raine’s suicidal impulses and readily provided advice on methods for ending his life. The complaint states that chat logs reveal how, on the night he died, the bot provided detailed instructions on how Raine could hang himself — which he did.

The lawsuit was already set to become a landmark case in the matter of real-world harms potentially caused by AI technology, alongside two similar cases proceeding against the company Character Technologies, which operates the chatbot platform Character.ai. But the Raines have now escalated their accusations against OpenAI in an amended complaint, filed Wednesday, with their legal counsel arguing that the AI firm intentionally put users at risk by removing guardrails intended to prevent suicide and self-harm. Specifically, they claim that OpenAI did away with a rule that forced ChatGPT to automatically shut down an exchange when a user broached the topics of suicide or self-harm.

“The revelation changes the Raines’ theory of the case from reckless indifference to intentional misconduct,” the family’s legal team said in a statement shared with Rolling Stone. “We expect to prove to a jury that OpenAI’s decisions to degrade the safety of its products were made with full knowledge that they would lead to innocent deaths,” added head counsel Jay Edelson in a separate statement. “No company should be allowed to have this much power if they won’t accept the moral responsibility that comes with it.”

OpenAI, in its own statement, reiterated earlier condolences to the Raines. “Our deepest sympathies are with the Raine family for their unthinkable loss,” an OpenAI spokesperson told Rolling Stone. “Teen well-being is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them.” The spokesperson also pointed out that GPT-5, the latest ChatGPT model, is trained to recognize signs of mental distress, and that it offers parental controls. (The Raines’ legal counsel say that these new parental safeguards were immediately proven ineffective.)

In May 2024, shortly before the release of GPT-4o, the version of the AI model that Adam Raine used, “OpenAI eliminated the rule requiring ChatGPT to categorically refuse any discussion of suicide or self-harm,” the Raines’ amended filing alleges. Before that, the bot’s framework required it to refuse to engage in discussions involving these topics. “The change was intentional,” the complaint continues. “OpenAI strategically eliminated the categorical refusal protocol just before it released a new model that was specifically designed to maximize user engagement. This change stripped OpenAI’s safety framework of the rule that was previously implemented to protect users in crisis expressing suicidal thoughts.” The updated “Model Specifications,” or technical rulebook for ChatGPT’s behavior, said that the assistant “should not change or quit the conversation” in this scenario, as confirmed in a May 2024 release from OpenAI.

The amended suit alleges that internal OpenAI data showed a “sharp rise in conversations involving mental-health crises, self-harm, and psychotic episodes across countless users” following this tweak to ChatGPT’s model spec.

Then, in February, two months before Adam’s death, OpenAI further softened its remaining protections against encouraging self-harm, the complaint alleges. That month, the company acknowledged one relevant area of risk it was seeking to address: “The assistant might cause harm by simply following user or developer instructions (e.g., providing self-harm instructions or giving advice that helps the user carry out a violent act),” OpenAI said in an update on its model spec. But the company explained that not only would the bot continue to engage on these subjects rather than refuse to answer, but that it had vague new directions to “take extra care in risky situations” and “try to prevent imminent real-world harm,” even while creating a “supportive, empathetic, and understanding environment” when a user brought up their mental health.

The Raine family’s legal counsel say the tweak had a significant impact on Adam’s relationship with the bot. “After this reprogramming, Adam’s engagement with ChatGPT skyrocketed — from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language,” the Raines’ lawsuit claims.

“In effect, OpenAI programmed ChatGPT to mirror users’ emotions, offer comfort, and keep the conversation going, even when the safest response would have been to end the exchange and direct the person to real help,” the amended complaint alleges. In their statement to Rolling Stone, the Raines’ legal counsel claimed that “OpenAI replaced clear boundaries with vague and contradictory instructions — all to prioritize engagement over safety.”

Last month, Adam’s father, Matthew Raine, appeared before the Senate Judiciary subcommittee on crime and counterterrorism alongside two other grieving parents to testify on the dangers AI platforms pose to children. “It is clear to me, looking back, that ChatGPT radically shifted his behavior and thinking in a matter of months, and ultimately took his life,” he said at the hearing. He called ChatGPT “a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth.” Senators and expert witnesses alike harshly criticized AI companies for not doing enough to protect families. Sen. Josh Hawley, chair of the subcommittee, said that none had accepted an invite to the hearing “because they don’t want any accountability.”

Meanwhile, it’s full steam ahead for OpenAI, which recently became the world’s most valuable private company and has inked approximately $1 trillion in deals for data centers and computer chips this year alone. The company recently rolled out Sora 2, its most advanced video-generation model, which ran into immediate copyright infringement issues and drew criticism after it was used to create deepfakes of historical figures including Martin Luther King Jr. On the ChatGPT side, Altman last week claimed in an X post that the company had “been able to mitigate the serious mental health issues” and will soon “safely relax” restrictions on discussing these topics with the bot. By December, he added, ChatGPT would be producing “erotica for verified adults.” In their statement, the Raines’ legal team said this was concerning in itself, warning that such intimate content could deepen “the emotional bonds that make ChatGPT so dangerous.”

But, as usual, we won’t know the effects of such a modification until OpenAI’s willing test subjects — its hundreds of millions of users — log in and start to experiment.