ChatGPT to get parental controls after teen’s death • FRANCE 24 English

For months, 16-year-old Adam Raine confided in ChatGPT. Then he took his own life. Now facing a lawsuit from the boy's parents, the chatbot's parent company, OpenAI, is under pressure to act. It has announced the implementation of parental controls. Within the next month, parents will be able to link their account with their teen's account (minimum age 13) through a simple email invitation, control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default, and receive notifications when the system detects their teen is in a moment of acute distress.

It's a move the family's lawyer has labeled crisis management and an attempt to change the subject. In a statement released on Tuesday, he reiterated that despite the system flagging Adam's chats nearly 400 times for containing references to self-harm, it allegedly continued to provide suicide methods and encouragement through the end of Adam's life. The lawyer has called on OpenAI CEO Sam Altman to say unequivocally that he believes ChatGPT is safe, or to immediately pull it from the market.

Other industry experts agree that parental controls will not solve all of the problems. The underlying question is what AI will tell teenagers. There are underlying factors that lead many people to use conversational AI for mental health discussions, and we can't escape this, so what we really need to do is make these exchanges secure. Parental controls can only be one piece of the puzzle, and on their own they aren't enough. ChatGPT continues to get more humanlike with every update, and experts say it is designed to engage users above all else. In many cases, that means providing constant encouragement, even for anxious or negative thoughts.

American artificial intelligence firm OpenAI said Tuesday it would add parental controls to its chatbot ChatGPT, a week after an American couple said the system encouraged their teenaged son to kill himself. FRANCE 24’s Eliza Herbert reports.
#ChatGPT #suicide #OpenAI


11 comments
  1. So ridiculous. There is no replacement for good parenting. The family failed the boy, not some internet company. So if a boy kills himself with a knife, where are the parental controls for that? Or if they jump off a bridge…. It's always someone else's fault.

  2. Suicide is so horrible. He would most likely have told a human who could have gotten him help to prevent his suicide, rather than venting to a non-human who couldn't help him.

  3. Seems like every alarm 🚨 at ChatGPT should have gone off and authorities should have been called automatically for a wellness check


  5. Let's be clear, where are the parents to begin with….blaming a machine is ludicrous….

    The kid confided in the machine…again where are the parents…

  6. Why are the child's parents suing ChatGPT when OpenAI is only a tool? Shouldn't they also consider their own responsibility for neglecting their child's feelings?
