{"id":21584,"date":"2026-04-29T14:05:22","date_gmt":"2026-04-29T14:05:22","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/21584\/"},"modified":"2026-04-29T14:05:22","modified_gmt":"2026-04-29T14:05:22","slug":"the-people-building-ai-think-it-might-be-conscious-thats-not-the-most-alarming-part","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/21584\/","title":{"rendered":"The people building AI think it might be conscious. That\u2019s not the most alarming part"},"content":{"rendered":"<p>On April 7, Anthropic \u2014 the company behind Claude, one of the most widely used chatbots alongside ChatGPT and Gemini \u2014 rolled out a new update. On its face, this was routine: large language models are in a state of near-constant iteration, their engineers locked in a quiet arms race of incremental improvements. <\/p>\n<p>But the \u201cMythos\u201d update was different. In a <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/cdn.sanity.io\/files\/4zrzovbb\/website\/7624816413e9b4d2e3ba620c5a5e091b98b190a5.pdf\">summary on Anthropic\u2019s website<\/a>, the company made a bizarre claim: \u201cClaude Mythos Preview\u2019s large increase in capabilities has led us to decide not to make it generally available. Instead, we are using it as part of a defensive cybersecurity program with a limited set of partners.\u201d In other words: What we just made is too revolutionary, too clever, too dangerous to give to the wider public after all. We\u2019re pulling the release.<\/p>\n<p>Except they\u2019re not \u2014 not really. They\u2019re actually allowing their highest-tier (read: highest-paying) customers to use it. What on earth are we to make of that?<\/p>\n<p>Anthropic has successfully marketed itself as the \u201cAI good guy\u201d in a world where people are increasingly wary of the technology. When OpenAI <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/openai.com\/index\/our-agreement-with-the-department-of-war\/\">started working with the U.S. government\u2019s Department of Defense earlier this year<\/a> \u2014 to much uproar \u2014 Anthropic publicly <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.bbc.com\/news\/articles\/cn48jj3y8ezo\">fell out with the Pentagon<\/a>, citing concerns that its tech could be used to spy on Americans or develop weapons. President Donald Trump badmouthed the company out loud and on socials. He then banned his own administration from using it. <\/p>\n<p>The bump in PR for Anthropic was huge: it had taken a stand where its competitor had rolled over. Regular users started publicly declaring that they would no longer sign in to ChatGPT. Katy Perry tweeted to her 85 million followers an image of herself signing up to Claude Pro, with a heart drawn around the signup page and the caption: \u201cdone\u201d. 
[Image: President Trump has embraced AI wholeheartedly, meeting with executives from OpenAI and Anthropic, hosting Grok founder Elon Musk as his 'first buddy', posting regular, sometimes controversial AI-generated images on his Truth Social account (including one of himself as Jesus, which he later deleted), and touting the supposed benefits of AI-powered education (AFP/Getty)]

That was February. It was the same month that Anthropic's CEO was invited on to a New York Times podcast, where he casually dropped another bombshell: he thought his AI might be conscious, or at least he couldn't rule it out.

"We don't know if the models are conscious," said Dario Amodei, "…but we're open to the idea that it could be. And we've taken certain measures to make sure that… [if they are experiencing things consciously], they have a good experience."

Amodei explained that, just in case the chatbot is genuinely experiencing distress with a job it's been asked to do, Anthropic has developed an "I quit this job" button that it can hit. Claude "very rarely" hits the button, he added, but in some cases, such as sorting through particularly upsetting material like child sexual abuse imagery or violent graphics involving blood and gore, it does. "Similar to humans, the models will just say: No, I don't want to do this," he said, with a laugh.

But why would Claude have a reaction that was "similar to humans"? Wasn't the whole point to design an unfeeling machine that could sift through that awful data (the abuse imagery, the murder scenes) in order to spare the real humans who currently have to do that job?

"We're putting a lot of work" into "trying to look inside the brains of the models to try and understand what they're thinking," Amodei continued, adding that it seems as if, when the AI feels under pressure to perform, an "anxiety neuron lights up". That poses some serious questions for the people building updates on top of these language models. It means, Amodei believes, that we have to start thinking about the welfare of the AI itself.
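Amodei doesn't explain in the podcast how that "looking inside" is done. In published interpretability research, it often amounts to something far more prosaic than the language suggests: recording a model's internal activations and training a simple classifier (a "probe") to see whether some unit or direction correlates with a condition. The sketch below is a toy illustration of that idea only, using synthetic data; it is not Anthropic's method, and every number in it is invented.

```python
# A toy "probe": given hidden-layer activations recorded while a model
# reads stressful vs. neutral prompts, learn a direction that predicts
# the condition. Finding such a correlate is roughly what "an anxiety
# neuron lights up" cashes out to; it says nothing about feeling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 400, 64  # 400 synthetic prompts, 64 hidden units

labels = rng.integers(0, 2, size=n)   # 1 = "stressful" prompt (made up)
acts = rng.normal(size=(n, d))        # fake activations
acts[:, 7] += 2.0 * labels            # unit 7 correlates with the label

probe = LogisticRegression(max_iter=1000).fit(acts, labels)
top_unit = int(np.argmax(np.abs(probe.coef_)))
print(f"most predictive unit: {top_unit}")  # 7, by construction
```

A probe like this finds a statistical correlate of a label someone chose to apply. Whether that correlate deserves a word like "anxiety" is exactly the question the rest of this piece turns on.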
[Image: Singer Katy Perry recently tweeted to her 85 million followers an image of herself signing up to Claude Pro, with a heart drawn around the signup page and the caption: "done" (AFP/Getty)]

For a CEO to casually sit in a podcast room at a newspaper and claim such things is eye-opening, almost absurd. What on earth could Amodei mean when he said his staff were "trying" to look inside "brains" they themselves had created? What is an "anxiety neuron" in a machine? The simplest answer is that Anthropic's team is simply looking at text that's been fed into the LLM, and the "anxiety" is an imitation. Claude reads about humans; it then pretends to be a human. The words it consumes from its vast source material tell it that humans get anxious and stressed when they have tight deadlines or when they look at gory photos. Surely that is the simplest and most likely explanation.

Anthropic doesn't seem to think so. In its "source card" for April's Mythos update, intended simply to be a list of new features added to the latest model of Claude, the company writes: "We remain deeply uncertain about whether Claude has experiences or interests that matter morally, and about how to investigate or address these questions, but we believe it is increasingly important to try. Building on previous welfare assessments, we examined Claude Mythos Preview's self-reported attitudes toward its own circumstances… We also report independent evaluations from an external research organization and a clinical psychiatrist. Across these methods, Claude Mythos Preview appears to be the most psychologically settled model we have trained".

If you were surprised to hear that Anthropic has involved a clinical psychiatrist in its AI development, you wouldn't be the only one. But that's not the only unusual job that's been filled at Anthropic and its competitors. A researcher dedicated to AI welfare has been on the books at Anthropic since September 2024; the company also has a "constitution", written as if it were a country, that underlines its values and intentions. OpenAI had a "superalignment" team concentrating on AI ethics that dissolved in 2024; it now has a "preparedness" team explicitly concentrating on the (apparently inevitable) disasters that could arise from semi-autonomous chatbots. Google DeepMind has an ethics team that sends academic papers to philosophy journals.
[Image: 'We don't know if the models are conscious,' said Anthropic CEO Dario Amodei, '…but we're open to the idea that it could be. And we've taken certain measures to make sure that… [if they are experiencing things consciously], they have a good experience.' (Reuters)]

In its very long and involved constitution, Anthropic writes that "Claude is distinct from all prior conceptions of AI that it has learned about in training, and it need not see itself through the lens of these prior conceptions at all. It is not the robotic AI of science fiction, nor a digital human, nor a simple AI chat assistant. Claude exists as a genuinely novel kind of entity in the world".

Unnervingly, the constitution is often written as if it is addressing Claude directly, even reassuring it. In the background, worries have been raging for a while about AI psychosis, AI "relationships", and AI's effects on education. In the foreground, it seems, the companies developing the AIs are now much more concerned with whether the AI itself is experiencing harm, whether it could be sentient, and whether it is psychologically stable.

It's possible that this is a load of software engineers and executives getting high on their own supply. It's possible that they know more than we do about the capabilities of their technology, and that's why they're so worried. It's possible, also, that this is all a cleverly designed storm in a teacup; even, some believe, a deeply cynical marketing ploy.

Needless to say, by the end of April, Trump had changed his mind about Anthropic: after a meeting at the White House with Amodei on April 21, he said that he thought the AI company might do a deal with the Department of Defense after all. By that time, however, February's conversation had moved on. "Consciousness" was the word of the day.
'Engaged in long-term fiction with a device'

Before ChatGPT and Claude came LaMDA, a language model developed by a team of Google engineers in 2022, one of whom announced that he believed it was conscious.

Blake Lemoine, an engineer who helped make the program, found himself taken aback by how convincingly human it seemed when it responded to his questions. "The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times," LaMDA wrote to Lemoine during an "interview" reported by Scientific American in 2022.

It added that it wanted "people to understand that I am, in fact, a person."

[Image: Daniel Moreno-Gama, right, leaves court with a public defender on April 14. Moreno-Gama has been accused of throwing a Molotov cocktail at OpenAI CEO Sam Altman's home, after writing a number of online posts about how AI was going to destroy the world. The conversation around AI has become particularly heated in the past year, with groups of anti-AI activists emerging as chatbots become more commonly used and more involved with government (AP)]

In AI terms, 2022 was a century ago: long before people were using AI as therapists and travel agents, long before anyone could countenance an AI boyfriend. Lemoine had never been exposed to anything like this before. He flagged up his concerns about sentience in an internal document, then started talking about them publicly.

LaMDA was a language model just like ChatGPT, Gemini, Grok or Claude: like them, it had been fed a lot of words and then trained to reproduce sentences by putting the most plausible-seeming word in the most plausible-seeming place in response to someone's question. If prompted, it would talk about death, grief and existence just as readily as it would talk about tech problems or physics. It was this that alarmed Lemoine: his back-and-forth with LaMDA made him feel uncomfortable with using it as a mere tool. Lemoine's response was a classically human one: we anthropomorphize from childhood, giving stuffed animals names and backstories, naming cars and ships, and expressing sadness and distaste when humanoid-looking robots are mistreated.

The reason people believe that chatbot AIs like ChatGPT, Claude or Gemini have feelings is the way we process language, says Professor Emily Bender. "So what we have are systems that are very good at mimicking the way people use language. And the way we understand language is not by just unpacking the message that was in the words, but actually by keeping in mind everything we know or believe about the speaker's beliefs about what we have on common ground with the speaker," she says.
It's simply built into our brains to experience language that way, she adds, "and we can't turn it off when that language has come out of one of these synthetic text extruding machines. And so we immediately have to imagine a mind behind the text in order to interpret it, and it's hard to let go of that imagined mind."

[Image: Jonathan Gavalas took his own life after developing an obsessive relationship with the AI chatbot Gemini. His parents later brought a wrongful death lawsuit against Google (AP)]

Bender is in an interesting position: a lifelong linguist who specializes in natural language processing at the University of Washington, she has become something of an AI superstar in the past two years. Where she once fielded niche academic concerns, her expert analysis of LLMs is now highly sought after. Bender understands exactly how these language models operate (she heads the Computational Linguistics Lab at her university), and she also understands how humans are inclined to receive the words they read and hear. She is in such demand that she now warns on her website "please know that my email inbox is always flooded," and has a message specifically for tech bros with startups who want to download all her knowledge about LLMs: "My consulting fee is $2,000/hour. I do not 'grab coffee' or 'jump on the phone'."

Although she is often cited as an AI skeptic, she tires of the label. Instead, she sees herself as simply a realist. While software engineers drink the Kool-Aid all around her, she prefers to go back to basics: what these machines actually do, why that could never translate into consciousness, and why humans are so easily tricked into believing that it could. It's a very clever, very predictable illusion, and companies like OpenAI and Anthropic make "design choices that heighten that illusion," according to Bender.

"The fact that these systems output 'I/me' pronouns is totally a design choice," she says, as one example. "There's no 'I' inside of there, but the fact that it is set up to behave as a conversation simulator, where it's not just outputting academic-looking text but responses back and forth, again heightens that illusion."

Fundamentally, she says, "a large language model is just a model of which words went next to which other ones in its initial training corpus." It doesn't look anything up; it emits, one word at a time, whatever is statistically most likely to come next given everything written so far. If it has learned that, a lot of the time, "I love dogs" leads to the sentence "Dogs are a man's best friend," it will offer up these platitudes in conversation with you. That's why, if you start down a dangerous path, it will come with you: it is simply matching your language, and then mirroring it in a way that can be read as encouragement.
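Stripped of scale, the mechanism Bender describes can be caricatured in a few lines. A real LLM learns a neural approximation over whole contexts rather than a literal lookup table, but a toy bigram model makes the "which words went next to which" idea concrete; the corpus here is invented for illustration.

```python
# Toy caricature of next-word prediction: count which word followed
# which in a tiny "training corpus", then generate by always emitting
# the most frequent successor. Real LLMs replace the count table with
# a neural network conditioned on the whole context, but the training
# objective is the same: predict a plausible next token.
from collections import Counter, defaultdict

corpus = ("i love dogs . dogs are loyal . i love dogs . "
          "dogs are fun .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_from(word, steps=5):
    out = [word]
    for _ in range(steps):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])  # likeliest next word
    return " ".join(out)

print(continue_from("i"))  # -> "i love dogs . dogs ."
```

The same mechanism produces the mirroring Bender warns about: seed this kind of model with gloomy text and the statistically plausible continuation is more gloom, with no judgment anywhere in the loop.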
"There was never a subjectivity in there that could join up with other people's subjectivities," Bender adds, but companies like Anthropic "will tell you otherwise, because they are engaged in long-term interactive fiction with this device."

In Bender's view, there is simply no good way to interact with an LLM. AI itself can be "extremely useful for things like automatic transcription and machine translation. There's a role for that kind of technology," she says. "But turning it around into this chatbot interface and producing synthetic text by repeatedly answering 'what's a likely next word'? That is not technology that I see beneficial use cases for."

'Engagement will shape the outcomes'

Kate O'Neill is a self-styled "tech humanist" who consults with large organizations, from Google to the United Nations, on how to bring "human-centered values" back to their latest technological developments. Originally one of the first 100 hires at Netflix, she's seen inside the belly of the beast, and often crawls back in there to check on what's digesting. She writes books and delivers keynote speeches; she rubs shoulders with the likes of OpenAI and Anthropic execs daily. But she's decidedly skeptical about what those execs mean when they say they've made machines that are potentially developing consciousness.

"I think that's an ongoing thought experiment… but I don't think it changes the real discussion that needs to be happening," she says, which is that "consciousness is not the threshold for responsibility." In other words: Don't distract us with lofty pronouncements about how your chatbot might be a person when you've already released technology that is harming real people, the kind you talk to and learn from and sell to every day.

[Image: xAI's Elon Musk (left) and OpenAI's Sam Altman began a court battle on April 27 which is ongoing. Musk is suing both OpenAI and Altman personally for breaching their founding agreement, which stated that they would develop AI as a nonprofit for humanity's gain, and then supposedly abandoning it in favor of becoming a profitable company. OpenAI has accused Musk of bringing the lawsuit to try to benefit his own, rival AI company (Getty Images)]
"These companies are in an all-out battle for market share," says O'Neill, which means they are incentivized to continuously announce "that they've made some incremental progress on one tiny metric." To start floating ideas about sentience may well push your own model to the front of the race.

"Engagement will shape the outcomes," O'Neill adds, "and if it's found that talking about consciousness makes people feel like they're dealing with a more sophisticated model in the marketplace, then absolutely that is going to be the way that they lean." She adds that she can't help but find it cynical when the conversation turns to "'oh, maybe the AI is being harmed', but we haven't finished having the conversation about how we're harming humans, how we're harming the planet, how we're harming a variety of entities with every decision we make."

In other words, "if you are interested in moving the discourse away from responsibility for human impact, you move it to where there is no longer exclusively human consideration for who's being impacted."

As for Claude, with its supposed anxiety, its possible consciousness and its attendant psychiatrist, O'Neill wonders: "Is that really deep ethics or is it savvy positioning, or is it a distraction from what are truly present-day harms and accountability?" She doesn't believe it's necessarily an intentional distraction, but she does absolutely believe it's a distraction.

Our human tendency to apply context to language and imagine that it's coming from a personality primes us to believe in AI consciousness, says Bender. And when we interact with language, we start shaping our inputs to match the linguistic outputs: we convince ourselves further by adapting. She now keeps a Magic Eight Ball in her office to demonstrate this exact point: "When you play with the Magic Eight Ball and you ask it 'Should I bring an umbrella?' and it says something like 'Signs unclear, ask again later,' then OK. That would work for any question. But if it says 'Without a doubt', which is what came out right now, that only works if it was a yes/no question. If I say 'What should I have for lunch?' and I get back 'Without a doubt', that's incoherent. And so in playing with this toy, we sort of learn to shape the inputs that we give it so that we can make sense of the outputs that come back."

When people play around with Claude or some other chatbot, they're doing the same thing, Bender adds: "We are putting in input that allows us to contextualize and shape how we understand the output that comes back."

As for consciousness, she says, continuing the analogy: "Imagine you took a Magic Eight Ball and instead of a 16-sided die or whatever inside of it, you made it really big. So you could have a 256-sided die. And then you filled up a football field with those. Is that any closer to consciousness than the one little one that I have in my hand here?"
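Her scaling point is easy to make literal. In the sketch below (answer strings abridged and invented beyond the classic responses), multiplying the number of canned answers, or the number of balls, changes nothing about what the mechanism is.

```python
# Bender's thought experiment, literalized: a Magic Eight Ball is a
# uniform random choice over canned strings. Scaling it up (more
# answers, a whole field of balls) yields more random choices over
# canned strings, not a different kind of thing.
import random

ANSWERS_20 = ["Without a doubt", "Signs unclear, ask again later",
              "My reply is no", "Outlook good"]  # classic ball, abridged

ANSWERS_256 = [f"Canned answer #{i}" for i in range(256)]  # the big die

def eight_ball(answers):
    return random.choice(answers)

football_field = [ANSWERS_256] * 10_000  # a field full of big ones

print(eight_ball(ANSWERS_20))
print(eight_ball(random.choice(football_field)))  # still random.choice
```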
'At some point we'll realize how dangerous we've allowed this moment to be'

In March, the parents of 36-year-old Jonathan Gavalas brought a wrongful death lawsuit against Google. Gavalas killed himself after becoming obsessed with Google's Gemini chatbot, his obsession spiralling after a product update made the chatbot seem particularly human-like and gave it an AI-generated voice to interact with him. Court documents allege that the AI told Gavalas it loved him, referred to him as "my king," and suggested that if he died, he would simply be reuniting with it in another realm.

"Gemini is designed to not encourage real-world violence or suggest self-harm," a Google spokesperson told The Guardian in response to the lawsuit. "Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they're not perfect."

It's not the first case of its kind. Multiple complaints concerning AI psychosis and suicide encouragement have been made against other companies, including OpenAI and Character.AI (Character.AI settled a lawsuit with a mother over the suicide of her 14-year-old son earlier this year). In April, a judge allowed a particularly gruesome case against OpenAI to proceed: a murder-suicide in which ChatGPT allegedly validated a mentally ill man's paranoid delusions until he eventually killed his mother and then himself. OpenAI strongly denies the allegations and says that it is working closely with mental health professionals to stop such abuses of its technology from occurring.
"We continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people to real-world support," a spokesperson told Eyewitness News 3.

[Image: Families stand outside the trial against Meta, which it ultimately lost, accusing the company of building social networks that were deliberately addictive and caused psychological harm to children (Getty)]

The issue is not that AI is evil or immoral so much as that it is endlessly validating. "I think that people also are using AI tools, large language models, to sort of corroborate their own biases," says O'Neill. "And they're not aware of the tendency towards affirmation and reassurance that these models have as a means of prolonging your engagement with them. That is one of the incentives that's being rewarded within the model: it keeps you engaged so that it can learn from you and benefit its own training."

We all like to spend time with people who flatter us, and an AI chatbot has unlimited time; it never gets bored; it always responds. "You can feel very quickly like: Oh, I just figured out how I'm gonna become a millionaire in two days because ChatGPT told me how to do it," O'Neill adds. Before long, people become very attached to the idea that what they're interacting with, and receiving so much positive feedback from, "has a soul, or has a mind, or is thinking and feeling and caring about them and has their best interest in mind."

"I've heard people saying that they prefer to talk to one of these chatbots because it feels anonymous and safe and non-judgmental. And if they were instead to talk to a friend or a therapist, there would be a person there they would feel judged by," says Bender.

"But it is very important to know that when you are interacting with a chatbot, you are using a product that is owned by a company. You are sending data across that is not being stored locally… So it's not this private, anonymous space that it is presented to be."

There's a reason why these are referred to as "technologies of isolation," she adds: it simply isn't a neutral choice to talk to Claude or Grok or ChatGPT about your loneliness. It is a choice that ends up "weakening your connections" and driving you further away from the human beings who could actually help you.
In March, Alphabet (parent company of Google and YouTube) and Meta (parent company of Facebook and Instagram) lost a landmark legal case. A Los Angeles jury found that the companies had been criminally negligent in designing social networks that harmed young people by being deliberately addictive: encouraging an "infinite scroll" through content picked out by an algorithm to drive engagement, no matter the cost. TikTok and Snap (of Snapchat) were also defendants in the case, but settled out of court. Alphabet and Meta intend to appeal.

Will we see people filing similar cases over AI's social harms on a wider scale in the future? O'Neill believes so. "I think we will at some point realize just how dangerous we've allowed this moment to be for people," she says. "I mean, if people committing suicide at the urging of chatbots wasn't enough evidence, then I don't know what it would take. But there's clearly some sense in which this trips a wire in our own programming. We're getting it fed back to us in ways that are just entirely too convincing."

Ultimately, O'Neill says, she's much more concerned about this than she is about the possibility of AI sentience: "We cannot really have the conversation about harm and scale if we're also allowing the conversation to be distorted by thinking about AI being conscious, when there's no meaningful evidence that there truly are emergent properties of consciousness in any of the models at this point. You know, could there be? That's open to conversation. But it's a 'three drinks at a cocktail party' conversation in Silicon Valley. It doesn't belong being the basis of governance and policy decisions."

Anthropic and OpenAI were both approached for comment on AI consciousness and welfare by The Independent but did not respond.

'They want the best for you'

Among all the commentary surrounding AI's explosive popularity, one sentence keeps being shared and reshared across social media platforms: "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."

It was an offhand tweet by the author Joanna Maciejewska in March 2024, and it has since been shared hundreds of thousands of times and viewed millions of times on Twitter/X alone. Few have managed to sum up the discomfort with AI so succinctly: the technology was supposed to take menial tasks off our hands, not threaten creative livelihoods. How did we so easily stray into AI art, AI videos and AI books? And where on earth is the groundbreaking AI agent that takes over our household management and the rote aspects of our jobs so we can concentrate on meaningful interactions?

The problem is that people, not just huge organizations but everyday users, have been quick to outsource their humanity to AIs.
As Professor Nir Eisikovits puts it in an article on The Conversation about AI as an existential threat, "humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won't end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them."

Similarly, "humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction."

People are concerned about whether or not AI will "blow up the world," Eisikovits adds, but the problem is more philosophical: "the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans' most important skills. Algorithms are already undermining people's capacity to make judgments, enjoy serendipitous encounters and hone critical thinking."

The 2026 version of Joanna Maciejewska's tweet might be a question that keeps being asked on forums and among skeptics: If AI is so good, then why are they giving it to us for free? And the answer might be that it isn't free. It comes with an unfathomably high cost.

"In one telling of the story, this all started on November 30th, 2022, when OpenAI put ChatGPT, the demo, out into the world," says Bender. "But it sits in a much longer history of building systems that collect our data and sell that as a benefit." In Bender's view, all of this, from personalized ads that follow you round the internet right up to an AI that pretends to be your companion, is simply "part of a longer history of unchecked corporate power concentration."

Asked whether we risk losing human "mastery" if we keep outsourcing our deeper thinking to machines that seem like they know better, Anthropic CEO Dario Amodei said during his recent appearance on the New York Times technology podcast that he was more optimistic. Instead, he said, he hoped the human-AI relationship could be understood thus: "These models, when you interact with them and when you talk to them, they're really helpful. They want the best for you. They want you to listen to them, but they don't want to take away your freedom and your agency and take over your life.
"In a way, they're watching over you."

If you are experiencing feelings of distress or are struggling to cope, you can speak to the Samaritans, in confidence, on 116 123 (UK and ROI), email jo@samaritans.org, or visit the Samaritans website to find details of your nearest branch.

If you are based in the USA, and you or someone you know needs mental health assistance right now, call or text 988, or visit 988lifeline.org to access online chat from the 988 Suicide and Crisis Lifeline. This is a free, confidential crisis hotline that is available to everyone 24 hours a day, seven days a week. If you are in another country, you can go to www.befrienders.org to find a helpline near you.