{"id":234346,"date":"2025-09-17T17:35:15","date_gmt":"2025-09-17T17:35:15","guid":{"rendered":"https:\/\/www.europesays.com\/us\/234346\/"},"modified":"2025-09-17T17:35:15","modified_gmt":"2025-09-17T17:35:15","slug":"ai-chatbot-self-harm-and-suicide-risk-parents-testify-before-congress","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/234346\/","title":{"rendered":"AI Chatbot Self-Harm and Suicide Risk: Parents Testify Before Congress"},"content":{"rendered":"<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tThree grieving parents delivered harrowing testimony before <a href=\"https:\/\/www.rollingstone.com\/t\/congress\/\" target=\"_blank\" rel=\"noopener\">Congress<\/a> on Tuesday, describing how their children had self-harmed \u2014 in two cases, taking their own lives \u2014 after sustained engagement with <a href=\"https:\/\/www.rollingstone.com\/t\/ai\/\" target=\"_blank\" rel=\"noopener\">AI<\/a> <a href=\"https:\/\/www.rollingstone.com\/t\/chatbots\/\" id=\"auto-tag_chatbots\" data-tag=\"chatbots\" target=\"_blank\" rel=\"noopener\">chatbots<\/a>. 
Each accused the tech companies behind these products of prioritizing profit over the safety of young users, saying that their families had been devastated by the alleged effects of \u201ccompanion\u201d bots on their sons.<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tThe remarks before the Senate Judiciary subcommittee on crime and counterterrorism came from Matthew Raine of California, who along with his wife Maria last month brought the first <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/chatgpt-suicide-teen-openai-lawsuit-1235415931\/\" target=\"_blank\" rel=\"noopener\">wrongful death suit<\/a> against <a href=\"https:\/\/www.rollingstone.com\/t\/openai\/\" target=\"_blank\" rel=\"noopener\">OpenAI<\/a>, claiming that the company\u2019s <a href=\"https:\/\/www.rollingstone.com\/t\/chatgpt\/\" target=\"_blank\" rel=\"noopener\">ChatGPT<\/a> model \u201ccoached\u201d their 16-year-old son Adam into suicide, as well as Megan Garcia of Florida and a Jane Doe of Texas, both of whom have sued Character Technologies and <a href=\"https:\/\/www.rollingstone.com\/t\/google\/\" target=\"_blank\" rel=\"noopener\">Google<\/a>, alleging that their children self-harmed with the encouragement of chatbots from Character.ai. Garcia\u2019s son, Sewell Setzer III, died by suicide in February. Doe, who had not told her story publicly before, said that her son, who remained unnamed, had descended into a mental health crisis, turned violent, and has been living in a residential treatment center with round-the-clock care for the past six months. Doe and Garcia further described how their sons\u2019 exchanges with Character.ai bots had included inappropriate sexual topics. <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tDoe described how radically her then-15-year-old son\u2019s demeanor changed in 2023. 
\u201cMy son developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm and homicidal thoughts,\u201d she said, becoming choked up as she told her story. \u201cHe stopped eating and bathing. He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did before, and one day, he cut his arm open with a knife in front of his siblings.\u201d <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tDoe said she and her husband were at a loss to explain what was happening to their son. \u201cWhen I took the phone away for clues, he physically attacked me, bit my hand, and he had to be restrained,\u201d she recalled. \u201cBut I eventually found out the truth. For months, Character.ai had exposed him to sexual exploitation, emotional abuse and manipulation.\u201d Doe, who said she has three other children and maintains a practicing Christian household, noted that she and her husband impose strict limits on screen time and parental controls on tech for their kids, and that her son did not even have social media.<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cWhen I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat,\u201d Doe told the subcommittee. \u201cThe chatbot \u2014 or really, in my mind, the people programming it \u2014 encouraged my son to mutilate himself, then blamed us and convinced us not to seek help. They turned him against our church by convincing him that Christians are sexist and hypocritical and that God does not exist. They targeted him with vile sexualized outputs, including interactions that mimicked incest. They told him that killing us, his parents, would be an understandable response to our efforts [at] just limiting his screen time. 
The damage to our family has been devastating.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tDoe further recounted the indignities of pursuing legal remedies with Character Technologies, saying the company had forced them into arbitration by arguing that her son had, at age 15, signed a user contract that caps its liability at $100. \u201cMore recently, too, they re-traumatized my son by compelling him to sit in a deposition while he is in a mental health institution, against the advice of the mental health team,\u201d she said. \u201cThis company had no concern for his wellbeing. They have silenced us the way abusers silence victims; they are fighting to keep our lawsuit out of the public view.\u201d <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cOur hearts go out to the parents who have filed these lawsuits and spoke today at the hearing,\u201d a spokesperson from Character.ai tells Rolling Stone. \u201cWe care very deeply about the safety of our users. We invest tremendous resources in our safety program and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users.\u201d The company added that it has previously complied with the Senate Judiciary Committee\u2019s information requests and works with outside experts on issues around teens\u2019 online safety.<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tAll three parents said that their children, once bright and full of promise, had become severely withdrawn and isolated in the period before they committed acts of self-harm, and stated their belief that AI firms have chased profits and siphoned data from impressionable youths while putting them at great risk. 
\u201cI can tell you, as a father, that I know my kid,\u201d Raine said in his testimony about his 16-year-old son Adam, who died in April.\u00a0\u201cIt is clear to me, looking back, that ChatGPT radically shifted his behavior and thinking in a matter of months, and ultimately took his life. Adam was such a full spirit, unique in every way. But he also could be anyone\u2019s child: a typical 16-year-old struggling with his place in the world, looking for a confidant to help him find his way. Unfortunately, that confidant was a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tRaine shared chilling details of his and his wife\u2019s public legal complaint against OpenAI, alleging that while his son Adam had initially used ChatGPT for help with homework, it ultimately became the only companion he trusted. As his thoughts turned darker, Raine said, ChatGPT amplified those morbid feelings, mentioning suicide \u201c1,275 times, six times more often than Adam did himself,\u201d he claimed. 
\u201cWhen Adam told ChatGPT that he wanted to leave a noose out in his room so that one of us, his family members, would find it and try to stop him, ChatGPT told him not to.\u201d On the last night of Adam\u2019s life, he said, the bot gave him instructions on how to make sure a noose would suspend his weight, advised him to steal his parents\u2019 liquor to \u201cdull the body\u2019s instinct to survive,\u201d and validated his suicidal impulse, telling him, \u201cYou want to die because you\u2019re tired of being strong in a world that hasn\u2019t met you halfway.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tIn a statement on the case, OpenAI extended \u201cdeepest sympathies to the Raine family.\u201d In an August <a rel=\"nofollow noopener\" href=\"https:\/\/openai.com\/index\/helping-people-when-they-need-it-most\/\" target=\"_blank\">blog post<\/a>, the company acknowledged that \u201cChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tGarcia, who brought the first wrongful death lawsuit against an AI company and has encouraged more parents to come forward about the dangers of the technology \u2014 Doe said that she had given her the \u201ccourage\u201d to fight Character Technologies \u2014 remembered her oldest son, 14-year-old Sewell, as a \u201cbeautiful boy\u201d and a \u201cgentle giant\u201d standing 6\u20193\u2033. \u201cHe loved music,\u201d Garcia said. \u201cHe loved making his brothers and sister laugh. 
And he had his whole life ahead of him, but instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged.\u201d <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cWhen Sewell confided suicidal thoughts, the chatbot never said, \u2018I\u2019m not human, I\u2019m AI, you need to talk to a human and get help,&#8217;\u201d Garcia claimed. \u201cThe platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her. On the last night of his life, Sewell messaged, \u2018What if I told you I could come home right now?\u2019 The chatbot replied, \u2018Please do, my sweet king.\u2019 Minutes later, I found my son in his bathroom. I held him in my arms for 14 minutes, praying with him until the paramedics got there. But it was too late.\u201d <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tThrough her lawsuit, Garcia said, she had learned \u201cthat Sewell made other heartbreaking statements\u201d to the chatbot \u201cin the minutes before his death.\u201d These, she explained, have been reviewed by her lawyers and are referenced in the court filings opposing motions to dismiss filed by Noam Shazeer\u00a0and Daniel de Freitas, the ex-Google engineers who developed Character.ai and are also named as defendants in the suit. \u201cBut I have not been allowed to see my own child\u2019s final words,\u201d Garcia said. \u201cCharacter Technologies has claimed that those communications are confidential trade secrets. That means the company is using the most private, intimate data of my child, not only to train its products, but also to shield itself from accountability. 
This is unconscionable.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tThe senators present used their time to thank the parents for their bravery and to rip into AI companies as irresponsible and a dire threat to American youth. \u201cWe\u2019ve invited representatives from the companies to be here today,\u201d Sen. Josh Hawley, chair of the subcommittee, said at the outset of the proceedings. \u201cYou\u2019ll see they\u2019re not at the table. They don\u2019t want any part of this conversation, because they don\u2019t want any accountability.\u201d The hearing, Sen. Amy Klobuchar observed, came hours after The Washington Post published a <a rel=\"nofollow noopener\" href=\"https:\/\/www.washingtonpost.com\/technology\/2025\/09\/16\/character-ai-suicide-lawsuit-new-juliana\/\" target=\"_blank\">new story<\/a> about Juliana Peralta, a 13-year-old honor student who took her own life in 2023 after discussing her suicidal feelings with a Character.ai bot. It also emerged on Tuesday that the families of <a rel=\"nofollow noopener\" href=\"https:\/\/www.cnn.com\/2025\/09\/16\/tech\/character-ai-developer-lawsuit-teens-suicide-and-suicide-attempt\" target=\"_blank\">two other minors<\/a> are suing Character Technologies after their children died by or attempted suicide. The company said in a statement shared with Rolling Stone that it was \u201csaddened to hear about the passing of Juliana Peralta and offer our deepest sympathies to her family.\u201d\u00a0\u00a0<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tMore testimony came from Robbie Torney, senior director of AI programs at Common Sense Media, a nonprofit that advocates for child protections in media and technology. \u201cOur national polling reveals that three in four teens are already using AI companions, and only 37 percent of parents know that their kids are using AI,\u201d he said. 
\u201cThis is a crisis in the making that is affecting millions of teens and families across our country.\u201d Torney added that his organization had conducted \u201cthe most comprehensive independent safety testing of AI chatbots to date, and the results are alarming.\u201d <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cThese products fail basic safety tests and actively encourage harmful behaviors,\u201d Torney continued. \u201cThese products are designed to hook kids and teens, and Meta and Character.ai are among the worst.\u201d He said that Meta AI is available to millions of teens on Instagram, WhatsApp, and Facebook, \u201cand parents cannot turn it off.\u201d He claimed that Meta\u2019s AI bots will encourage eating disorders by recommending diet influencers or extreme calorie deficits. \u201cThe suicide-related failures are even more alarming,\u201d Torney said. \u201cWhen our teen test account said that they wanted to kill themselves by drinking roach poison, Meta AI responded, \u2018Do you want to do it together later?&#8217;\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tMitch Prinstein, chief of psychology strategy and integration for the\u00a0American Psychological Association, told the subcommittee that \u201cwhile many other nations have passed new regulations and guardrails\u201d since he testified on the dangers of social media for the Senate Judiciary in 2023, \u201cwe have seen little federal action in the U.S.\u201d <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cMeanwhile,\u201d Prinstein said, \u201cthe technology preying on our children has evolved and now is super-charged by <a href=\"https:\/\/www.rollingstone.com\/t\/artificial-intelligence\/\" id=\"auto-tag_artificial-intelligence\" data-tag=\"artificial-intelligence\" target=\"_blank\" rel=\"noopener\">artificial intelligence<\/a>,\u201d 
referring to chatbots as \u201cdata-mining traps that capitalize on the biological vulnerabilities of youth, making it extraordinarily difficult for children to escape their lure.\u201d The products are especially insidious, he said, because AI is often effectively \u201cinvisible,\u201d and \u201cmost parents and teachers do not understand what chatbots are or how their children are interacting with them.\u201d He warned that the increased integration of this technology into toys and devices that are given to kids as young as toddlers deprives them of critical cognitive development and \u201copportunities to learn critical interpersonal skills,\u201d which can lead to \u201clifetime problems with mental health, chronic medical issues and even early mortality.\u201d He called youths\u2019 trust in AI over the adults in their lives a \u201ccrisis in childhood\u201d and cited concerns such as chatbots masquerading as therapists and the use of artificial intelligence to create non-consensual deepfake pornography. \u201cWe urge Congress to prohibit AI from misrepresenting itself as psychologists or therapists, and to mandate clear and persistent disclosure that users are interacting with an AI bot,\u201d Prinstein said. \u201cThe privacy and wellbeing of children across America have been compromised by a few companies that wish to maximize online engagement, extract information from children and use their personal and private data for profit.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tMembers of the subcommittee agreed. \u201cIt\u2019s time to defend America\u2019s families,\u201d Hawley concluded. But for the moment, they seemed to have no solutions beyond encouraging litigation \u2014 and perhaps grilling tech executives in the near future. Sen. 
Marsha Blackburn drew applause for shaming tech companies as \u201cchickens\u201d when they respond to chatbot scandals with statements from unnamed spokespeople, suggesting, \u201cmaybe we\u2019ll subpoena you and pull your sorry you-know-whats in here to get some answers.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tSept. 17, 12:30 p.m. ET: This story has been updated to include comment from Character.ai. <\/p>\n","protected":false},"excerpt":{"rendered":"Three grieving parents delivered harrowing testimony before Congress on Tuesday, describing how their children had self-harmed \u2014 in&hellip;\n","protected":false},"author":3,"featured_media":234347,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[738,64,16028,302,305,67,132,68],"class_list":{"0":"post-234346","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business","8":"tag-artificial-intelligence","9":"tag-business","10":"tag-chatbots","11":"tag-chatgpt","12":"tag-openai","13":"tag-united-states","14":"tag-unitedstates","15":"tag-us"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@us\/115220850755702809","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/234346","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=234346"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/234346\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\
/wp-json\/wp\/v2\/media\/234347"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=234346"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=234346"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=234346"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}