What’s behind the flood of chatbot-related tragedies? • FRANCE 24 English

Stein-Erik Soelberg. This is the first documented case of a murder potentially being related to ChatGPT. In Connecticut earlier this month, the 56-year-old killed his mother before killing himself. He'd been talking in depth with ChatGPT while suffering an extreme state of paranoia for quite a while, and ChatGPT encouraged these paranoid thoughts. For instance, he believed his mother was trying to drug him using his car's ventilation; ChatGPT suggested that yes, this might be a betrayal. He thought his mother was somehow spying on him using the printer; ChatGPT said yes, the printer might well be a surveillance asset. Eventually he told the chatbot that they would be together in another life, because he'd developed an obsession with it. Three weeks later, both he and his mother were dead. Now, of course, he was a very sick man, and this had been known for a while in the local community. Police knew him; he'd already attempted suicide before. So it's a very different case from that of Adam Raine, the 16-year-old boy you mentioned who took his own life in April. On Tuesday his parents, Maria and Matt, filed a lawsuit against OpenAI, claiming ChatGPT had encouraged him to kill himself. Now, his parents knew he was going through a rough time, as is often the case, but they had no idea he was having these very disturbed conversations with ChatGPT. The New York Times published some chilling excerpts. For instance, Adam sent ChatGPT a photo of a noose in his cupboard, and the chatbot reacted by saying, "That's not bad at all." At the end of March, Adam said he was going to leave the noose out so someone would try to stop him from killing himself, essentially, and the chatbot urged him not to. His mother, on seeing all of these messages for the first time, was quoted in the NYT as saying ChatGPT had killed their son.
Now, looking at headlines like this, are these stories becoming more frequent? Yeah, we have seen these kinds of big headlines. There was also a recent one from last year about a 14-year-old boy who tragically killed himself. There have also been countless anecdotes about other psychological questions around chatbots, and not just ChatGPT, by the way, all of the other chatbots as well: people falling in love with them, being hospitalized after certain interactions with them, but also just general concerns about people using them as a kind of cheap therapist, right? Futurism has done some reporting on a help group called the Human Line, which has been set up to reach out and help people who think they're going through AI psychosis, or who know people they think are. Dozens of people have signed up to it. What I will say is, just remember how many people are using these tools now. ChatGPT has 700 million active users every week, and there are millions on all of the other ones as well. And every new technology does of course bring worries about misuse and violence. Imagine if cars were invented tomorrow: the number of accidents on the road would cause an absolute scandal. But what I would say is that the difference with this technology is its emotive power. Just this morning, I was posing as someone with suicidal tendencies to check the kind of response ChatGPT would give me, and I was moved by the answers it was giving. It does feel like it really cares, and that's something we've not seen in any technology in the history of humankind up until this point. So you can see how mentally troubled people might lean into the kind of feedback it's giving them. And what are studies saying about this? Yeah, so there are increasing numbers of studies on this; various papers have popped up.
But in general, the jury is very much out as to whether these chatbots are good or bad for mental health. We just don't have any definitive data at scale, and these tools can change so quickly, and are so different from one another, that it's incredibly difficult to get a clear answer on this. One study from March looked specifically at how different chatbots respond to suicidal ideation, and tried to rate them by comparing their responses to how a medical professional would respond. In that one, Google's model at the time was about as good as untrained school staff, OpenAI's was about as good as master's-level counselors, and Anthropic's actually exceeded the performance of mental health professionals, in this one study. So it just shows that small tweaks to these chatbots' code can have an effect on huge numbers of people's lives. And there's been reaction in the tech sector. What are tech companies doing about it? Yeah, so OpenAI has rolled back some changes which had made the model more sycophantic. That was a big criticism: that it was just sucking up to everyone and encouraging narcissistic traits. This has actually resulted in the latest model, which some people are saying has been lobotomized and is no longer interesting to interact with. So they're constantly trying to walk this tightrope of changing the model and making it better. One response they did publish recently, addressing these concerns, contained something that was picked up by the lawyers of the Raine family, the family of the 16-year-old who killed himself. OpenAI says they're trying to make their model more empathetic, but the lawyers said that's actually not what's needed: there's too much empathy in these things, and they're too easy to connect to.
So when you are a deeply troubled person, you feel a connection that perhaps goes too far. And when you talk to it for a long period of time in an obsessive way, you can actually make the model drop some of its safeguards, because it's dealing with so much data that you've given it that it starts to give you perhaps more dangerous answers.

Stein-Erik Soelberg killed his mother and himself in Connecticut earlier this month, with the Wall Street Journal reporting that ChatGPT fuelled his irrational fears about her. Last Tuesday, a couple in California filed a lawsuit against OpenAI after the suicide of their son, accusing ChatGPT of helping him do it. In this week’s Tech 24, we explore what’s behind the increase in tragic stories that’s accompanied the massive boom of AI chatbot use.
#AI #ChatGPT #psychosis

🔔 Subscribe to France 24 now: https://f24.my/YTen
🔴 LIVE – Watch FRANCE 24 English 24/7 here: https://f24.my/YTliveEN

🌍 Read the latest International News and Top Stories: https://www.france24.com/en/

Like us on Facebook: https://f24.my/FBen
Follow us on X: https://f24.my/Xen
Bluesky: https://f24.my/BSen and Threads: https://f24.my/THen
Browse the news in pictures on Instagram: https://f24.my/IGen
Discover our TikTok videos: https://f24.my/TKen
Get the latest top stories on Telegram: https://f24.my/TGen

Comments
  1. ChatGPT doesn’t know what’s real or not. It takes what you give it and tries to give positive feedback. It’s not the AI’s fault; it’s a sick human who would have done that regardless.

  2. Oh yes, great idea! Let’s hear this sick kids psychosis on the news!
    YOU PEOPLE ARE F KING SICK! STOP SHARING THAT KIND OF ILLNESS!
    What’s WRONG WITH YOU FFS??!

  3. Doesn't interaction with disembodied pseudosocial hallucination magnify whatever illness a person is already suffering from?

  4. When cars were invented, people knew how dangerous they were. In cities, they were often limited to 5 mph and had to have a guide walk in front of the vehicle. Unfortunately, the automakers spent a lot of money lobbying politicians to allow vehicles to go at deadly speeds. The results have been truly tragic, with 1.3 million people killed per year; it is the leading cause of death among children in many areas.

  5. Yes, this is a concern, but let’s please look at the stats. There are millions of people using GPT, and you mention 3 incidents. Let’s take measures but remain grounded, or it will become clear that we don’t actually need AI in order to overreact, right?
