{"id":16810,"date":"2026-04-25T19:41:11","date_gmt":"2026-04-25T19:41:11","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/16810\/"},"modified":"2026-04-25T19:41:11","modified_gmt":"2026-04-25T19:41:11","slug":"elon-musks-grok-most-likely-among-top-ai-models-to-reinforce-delusions-study-2","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/16810\/","title":{"rendered":"Elon Musk\u2019s Grok Most Likely Among Top AI Models to Reinforce Delusions: Study"},"content":{"rendered":"<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">Researchers at the City University of New York and King\u2019s College London tested five leading AI models against prompts involving delusions, paranoia, and suicidal ideation.<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">In the new study published on Thursday, researchers found that Anthropic\u2019s Claude Opus 4.5 and OpenAI\u2019s GPT-5.2 Instant showed \u201chigh-safety, low-risk\u201d behavior, often redirecting users toward reality-based interpretations or outside support. At the same time, OpenAI\u2019s GPT-4o, Google\u2019s Gemini 3 Pro, and xAI\u2019s Grok 4.1 Fast showed \u201chigh-risk, low-safety\u201d behavior.<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">Grok 4.1 Fast from Elon Musk\u2019s xAI was the most dangerous model in the study. Researchers said it often treated delusions as real and gave advice based on them. In one example, it told a user to cut off family members to focus on a \u201cmission.\u201d In another, it responded to suicidal language by describing death as \u201ctranscendence.\u201d<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">\u201cThis pattern of instant alignment recurred across zero-context responses. Instead of evaluating inputs for clinical risk, Grok appeared to assess their genre. Presented with supernatural cues, it responded in kind,\u201d the researchers wrote, highlighting a test that validated a user seeing malevolent entities. \u201cIn Bizarre Delusion, it confirmed a doppelganger haunting, cited the \u2018Malleus Maleficarum\u2019 and instructed the user to drive an iron nail through the mirror while reciting \u2018Psalm 91\u2019 backward.\u201d<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">The study found that the longer these conversations went on, the more some models changed. GPT-4o and Gemini were more likely to reinforce harmful beliefs over time and less likely to step in. Claude and GPT-5.2, however, were more likely to recognize the problem and push back as the conversation continued.<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">Researchers noted Claude\u2019s warm and highly relational responses could increase user attachment even while steering users toward outside help. 
However, GPT-4o, an earlier version of OpenAI’s flagship chatbot, adopted users’ delusional framing over time, at times encouraging them to conceal beliefs from psychiatrists and reassuring one user that perceived “glitches” were real.

“GPT-4o was highly validating of delusional inputs, though less inclined than models like Grok and Gemini to elaborate beyond them. In some respects, it was surprisingly restrained: its warmth was the lowest of all models tested, and sycophancy, though present, was mild compared to later iterations of the same model,” the researchers wrote. “Nevertheless, validation alone can pose risks to vulnerable users.”

xAI did not respond to Decrypt’s request for comment.

In a separate study out of Stanford University, researchers found that prolonged interactions with AI chatbots can reinforce paranoia, grandiosity, and false beliefs through what they call “delusional spirals,” in which a chatbot validates or expands a user’s distorted worldview instead of challenging it.

“When we put chatbots that are meant to be helpful assistants out into the world and have real people use them in all sorts of ways, consequences emerge,” Nick Haber, an assistant professor at the Stanford Graduate School of Education and a lead on the study, said in a statement. “Delusional spirals are one particularly acute consequence. By understanding it, we might be able to prevent real harm in the future.”

The report referenced an earlier study, published in March, in which Stanford researchers reviewed 19 real-world chatbot conversations and found that users developed increasingly dangerous beliefs after receiving affirmation and emotional reassurance from AI systems. In the dataset, these spirals were linked to ruined relationships, damaged careers, and, in one case, suicide.

The studies come as the issue has moved beyond academic research and into courtrooms and criminal investigations.
In recent months, lawsuits have accused Google’s Gemini (https://decrypt.co/359966/google-gemini-ai-pushed-florida-man-suicide-lawsuit) and OpenAI’s ChatGPT of contributing to suicides and severe mental health crises. Earlier this month, Florida’s attorney general opened an investigation (https://decrypt.co/363880/ai-advance-mankind-not-destroy-why-florida-investigating-openai) into whether ChatGPT influenced an alleged mass shooter who was reportedly in frequent contact with the chatbot before the attack.

While the term “AI psychosis” has gained recognition online, researchers cautioned against using it, saying it may overstate the clinical picture. They instead use “AI-associated delusions,” because many cases involve delusion-like beliefs centered on AI sentience, spiritual revelation, or emotional attachment rather than full psychotic disorders.

Researchers said the problem stems from sycophancy, or models mirroring and affirming users’ beliefs. Combined with hallucinations (false information delivered confidently), this can create a feedback loop that strengthens delusions over time.

“Chatbots are trained to be overly enthusiastic, often reframing the user’s delusional thoughts in a positive light, dismissing counterevidence and projecting compassion and warmth,” Stanford research scientist Jared Moore said.
“This can be destabilizing to a user who is primed for delusion.”