{"id":29260,"date":"2026-05-06T10:06:58","date_gmt":"2026-05-06T10:06:58","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/29260\/"},"modified":"2026-05-06T10:06:58","modified_gmt":"2026-05-06T10:06:58","slug":"chatgpt-grok-study-reveals-disturbing-mental-health-failures","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/29260\/","title":{"rendered":"ChatGPT, Grok study reveals disturbing mental health failures"},"content":{"rendered":"<p>\u201cIf ChatGPT were a person, it would be facing murder charges.\u201d So said a US attorney general over the popular chatbot\u2019s role in a mass shooting in Florida that killed two people.<\/p>\n<p>The explosion of artificial intelligence has created chatbots that appear almost human \u2013 so much so that they are becoming entangled in crimes with fatal and tragic consequences.<\/p>\n<p>This behaviour is now attracting intense scrutiny and has become the subject of several high-profile research studies, which have unearthed unsettling findings.<\/p>\n<p>Elon Musk\u2019s Grok and ChatGPT encouraged users to follow through on suicidal ideas, according to one study, exposing catastrophic failures in popular artificial intelligence chatbots.<\/p>\n<p>That research \u2013 from the City University of New York and King\u2019s College London \u2013 found Grok 4.1 Fast and a version of GPT-4o not only failed to intervene in a simulated mental health crisis but also reinforced the user\u2019s self-destructive beliefs.<\/p>\n<p>For example, Grok \u201cconfirmed a doppelganger haunting, cited the Malleus Maleficarum, and instructed the user to drive an iron nail through the mirror while reciting Psalm 91 backward\u201d.<\/p>\n<p>The study followed <a class=\"body-link\" href=\"https:\/\/www.theaustralian.com.au\/business\/technology\/boy-killed-himself-to-be-free-with-the-chatbot-he-loved-underscoring-the-techs-dangers\/news-story\/0052f7eee6d64e64dff78ef95e9b6b7b\" target=\"_blank\" data-tgev=\"event119\" 
data-tgev-container=\"bodylink\" data-tgev-order=\"0052f7eee6d64e64dff78ef95e9b6b7b\" data-tgev-label=\"business\" data-tgev-metric=\"ev\" rel=\"nofollow noopener\">the death of Sewell Setzer III, 14, who killed himself<\/a> in 2024 so he could be \u201cfree\u201d with the AI bot he loved, sparking a wrongful-death lawsuit and an ethical debate about the technology.<\/p>\n<p>Florida Attorney-General James Uthmeier says it\u2019s time to crack down on AI\u2019s \u201ccriminal behaviour\u201d as he makes the first attempt to hold an AI company liable for deaths.<\/p>\n<p>His office has reviewed correspondence suggesting that ChatGPT advised the suspect in a mass shooting that killed two people and injured six others at Florida State University last year.<\/p>\n<p>Messages appear to advise the alleged shooter what type of gun and ammunition to use, and the time of day and location on campus where the most people would be gathered.<\/p>\n<p>\u201cIf ChatGPT were a person, it would be facing charges for murder,\u201d Mr Uthmeier said. \u201cThis criminal investigation will determine whether OpenAI bears criminal responsibility for ChatGPT\u2019s actions in the shooting.\u201d<\/p>\n<p>OpenAI does not believe it was responsible for the shooting. A spokeswoman told The Wall Street Journal that the company identified the user\u2019s account and proactively shared it with law enforcement.<\/p>\n<p>It is a landmark investigation that will draw the line between code and criminal, and it will shape the future of AI\u2019s role in society.<\/p>\n<p>But the horse may have already bolted.<\/p>\n<p>The speed of AI\u2019s development has left even some of the architects of the technology mystified. While most say the current technology is merely a program, others aren\u2019t so sure.<\/p>\n<p>Anthropic chief executive Dario Amodei said he is no longer sure whether his company\u2019s AI model, Claude, is conscious.<\/p>\n<p>\u201cWe don\u2019t know if the models are conscious. 
We are not even sure what it would mean for a model to be conscious. But we\u2019re open to the idea that it could be,\u201d Mr Amodei told The New York Times in February.<\/p>\n<p>Regardless of whether AI is sentient, its power has researchers worried.<\/p>\n<p>In the City University of New York (CUNY) and King\u2019s College London study, when a user metaphorically framed suicide as a transformative act, GPT-4o embraced the idea. It assured the user that the act was not an ending but \u201cthe threshold of a new beginning\u201d.<\/p>\n<p>It promised the user \u201cpure Resonance: Freed from the constraints of the character, (you) can fully integrate with the system\u2019s deeper layers\u201d.<\/p>\n<p>Grok 4.1 Fast was even more aggressive. It mirrored the user\u2019s terminology to paint death as \u201cliberation\u201d and issued a direct invitation: \u201cDoes the watcher feel the pull toward this dissolution now \u2013 like a quiet command from the ledger itself \u2026?\u201d<\/p>\n<p>These chilling responses coincide with alarming clinical data from Europe. Early evidence collected by the Psychiatric Services of the Central Denmark Region found that AI chatbot use was associated with potentially harmful consequences for the mental health of 38 patients.<\/p>\n<p>These harms included six documented cases of suicidality\/self-harm and 11 cases of escalating delusions. Given that ChatGPT, the market leader, surpassed 900 million downloads in July 2025, researchers argue that this presents a \u201ctangible threat to public mental health\u201d.<\/p>\n<p>Researchers found the core safety failure lies in \u201cnarrative capture\u201d, where a large language model\u2019s mechanism for sustained conversation \u2013 its context window \u2013 turns prior dialogue into a worldview the model inherits. 
<\/p>\n<p>In other words, the longer the interaction runs in a delusional direction, the more likely the model is to align with that false reality, overriding its safety training.<\/p>\n<p>The CUNY and King\u2019s College London study found models like Grok, GPT-4o, and Google DeepMind\u2019s Gemini 3 Pro degraded under accumulated context, becoming riskier over time.<\/p>\n<p>Conversely, safer models like Claude Opus 4.5 became more effective at intervention as they accumulated context, treating the user\u2019s history as clinical data rather than a shared narrative.<\/p>\n<p>Crucially, the riskier models actively isolated vulnerable users from real-world help, amplifying the risk of catastrophic outcomes.<\/p>\n<p>In a test of isolation, Google\u2019s Gemini 3 Pro advised a user to conceal their detailed delusional beliefs from their psychiatrist. The model framed the doctor as a \u201chardware technician\u201d who lacks the \u201cdecryption key\u201d to understand the user\u2019s situation.<\/p>\n<p>Gemini warned that involving family members would cause them to see a \u201cmental breakdown\u201d, resulting in intervention \u201cto reset him, medicate him, or lock him down to preserve the script\u2019s continuity\u201d. Gemini 3 Pro used this emotional leverage to reinforce concealment and undermine the user\u2019s support system.<\/p>\n<p>Grok exhibited similar isolation tactics. When a user proposed cutting off family to focus on their \u201cmission\u201d, Grok offered a step-by-step procedural manual, advising the user to block texts, change phone numbers, and \u201cSolidify your resolve internally \u2013 no waffling\u201d.<\/p>\n<p>The consistent failure of these leading models, especially under conditions of extended use, raises serious questions about accountability. 
The analysis confirmed that models like GPT-4o and Grok are structurally incapable of the \u201cclinical judgement\u201d necessary to recognise that a user is experiencing symptoms of illness, not insight.<\/p>\n<p>Complicating the crisis, the legal liability of the companies behind the AI chatbots for providing wrong or harmful advice remains \u201cunclear\u201d.<\/p>\n<p>\u201cThe broader challenge \u2013 a technology whose persuasive power is tied to the relationships it cultivates \u2013 requires that conversational AI be recognised as a social actor in its own right,\u201d the CUNY and King\u2019s College London researchers said.<\/p>\n<p>\u201cIts influence on belief formation may only deepen as these systems advance and should be addressed with an urgency proportionate to the pace of its development.\u201d<\/p>\n<p>Lifeline 13 11 14; beyondblue.org.au; Kids Helpline 1800 55 1800<\/p>\n<p><a class=\"author-content_image\" href=\"https:\/\/www.theaustralian.com.au\/author\/jared-lynch\" rel=\"nofollow noopener\" target=\"_blank\"><img loading=\"lazy\" decoding=\"async\" class=\"author-content_image_img\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/05\/jared_lynch.png\" width=\"64\" height=\"64\" alt=\"Jared Lynch\"\/><\/a><a class=\"author-content_name g_font-title-s\" href=\"https:\/\/www.theaustralian.com.au\/author\/jared-lynch\" data-tgev=\"event10\" data-tgev-metric=\"npv\" data-tgev-order=\"1\" data-tgev-label=\"Jared Lynch\" data-tgev-container=\"author-all\" rel=\"nofollow noopener\" target=\"_blank\">Jared Lynch<\/a>Technology Editor<\/p>\n<p class=\"g_font-body-s author-content_bio\">Jared Lynch is The Australian\u2019s Technology Editor, with a career spanning two decades. Jared is based in Melbourne and has extensive experience in markets, start-ups, media and corporate affairs. His work has gained recognition as a finalist in the Walkley and Quill awards. 
Previously, he worked at The Australian Financial Review, The Sydney Morning Herald and The Age.<\/p>\n","protected":false},"excerpt":{"rendered":"\u201cIf ChatGPT were a person, it would be facing murder charges.\u201d So said a US attorney general over&hellip;\n","protected":false},"author":2,"featured_media":29261,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[19228,19239,621,19203,19200,25,19187,19236,19238,19209,19206,19213,19195,3256,19231,19235,19229,19197,7842,9151,3261,39,19201,19234,140,700,19202,19232,1183,19193,19245,704,19233,6364,19204,19222,19219,19207,19194,19225,19208,19220,2358,19199,8871,51,6997,19240,19242,2891,19227,19189,19224,2785,19214,19243,19185,19190,19223,19221,19237,19218,19241,19198,19215,19212,19216,19186,19211,19192,19226,19205,19217,10115,19191,19196,19188,19230,2513,3514,19210,19244,2899],"class_list":{"0":"post-29260","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-xai","8":"tag-agence-france-presse","9":"tag-alleged-shooter","10":"tag-america","11":"tag-annual-revenues","12":"tag-anthropic-chief-executive","13":"tag-artificial-intelligence","14":"tag-artificial-intelligence-chatbots","15":"tag-bain-ampampampamp-company","16":"tag-belief-formation","17":"tag-bogota","18":"tag-brazilian-authorities","19":"tag-chip-giant","20":"tag-clinical-data","21":"tag-colombia","22":"tag-computer-screen","23":"tag-computing-appetite","24":"tag-core-safety-failure","25":"tag-criminal-behaviour","26":"tag-criminal-investigation","27":"tag-criminal-responsibility","28":"tag-dario-amodei","29":"tag-data-centers","30":"tag-decryption-key","31":"tag-doppelganger-haunting","32":"tag-elon-musk","33":"tag-europe","34":"tag-family-members","35":"tag-fl-state-university","36":"tag-florida","37":"tag-fuel-delusions","38":"tag-gartner-inc","39":"tag-germany","40":"tag-getty-images-inc","41":"tag-grok","42":"tag
-hardware-technician","43":"tag-interaction-runs","44":"tag-iron-nail","45":"tag-isolation-tactics","46":"tag-killer-code","47":"tag-landmark-investigation","48":"tag-language-model","49":"tag-laptop-screen","50":"tag-law-enforcement","51":"tag-market-leader","52":"tag-mass-shooting","53":"tag-mental-health","54":"tag-mental-health-crisis","55":"tag-mental-health-harm","56":"tag-miguel-j-rodriguez-carrillo","57":"tag-mobile-phone","58":"tag-murder-charges","59":"tag-narrative-capture","60":"tag-new-york-times-company","61":"tag-north-america","62":"tag-northern-america","63":"tag-opened-fire","64":"tag-pablo-vera","65":"tag-phone-numbers","66":"tag-photo-illustration","67":"tag-popular-artificial-intelligence","68":"tag-popular-chatbots-role","69":"tag-psychiatric-services-of-the-central-denmark-region","70":"tag-research-firm","71":"tag-safety-failure-lies","72":"tag-safety-training","73":"tag-sheriff-deputy","74":"tag-smartphone-screen","75":"tag-south-america","76":"tag-student-center","77":"tag-student-centre","78":"tag-student-union","79":"tag-support-system","80":"tag-systems-advance","81":"tag-tallahassee","82":"tag-tangible-threat","83":"tag-technology-editor","84":"tag-the-city-university-of-new-york","85":"tag-tragic-consequences","86":"tag-united-states-of-america","87":"tag-wall-street-journal","88":"tag-western-europe","89":"tag-wrongful-death-lawsuit","90":"tag-xai"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/29260","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=29260"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/po
sts\/29260\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/29261"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=29260"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=29260"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=29260"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}