{"id":99233,"date":"2025-10-02T14:33:09","date_gmt":"2025-10-02T14:33:09","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/99233\/"},"modified":"2025-10-02T14:33:09","modified_gmt":"2025-10-02T14:33:09","slug":"chatgpt-parental-controls-dont-mean-kids-need-ai-companions","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/99233\/","title":{"rendered":"ChatGPT parental controls don\u2019t mean kids need AI companions"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The number of kids getting hurt by AI-powered chatbots is hard to know, but it\u2019s not zero. Yet, for nearly three years, ChatGPT has been free for all ages to access without any guardrails. That sort of changed on Monday, when <a href=\"https:\/\/openai.com\/index\/introducing-parental-controls\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI introduced a suite of parental controls<\/a>, some of which are designed to prevent teen suicides \u2014 like that of Adam Raine, a 16-year-old Californian who died by suicide <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" rel=\"nofollow noopener\" target=\"_blank\">after talking to ChatGPT at length<\/a> about how to do it. Then, on Tuesday, OpenAI launched a social network with a <a href=\"https:\/\/openai.com\/sora\/\" rel=\"nofollow noopener\" target=\"_blank\">new app called Sora<\/a> that <a href=\"https:\/\/www.platformer.news\/sora-2-hands-on-openai-social-network\/\" rel=\"nofollow noopener\" target=\"_blank\">looks a lot like TikTok<\/a>, except it\u2019s powered by \u201chyperreal\u201d AI-generated videos.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">It was surely no accident that OpenAI announced these parental controls alongside an ambitious move to compete with Instagram and YouTube. 
In a sense, the company was releasing a new app designed to get people even more hooked on AI-generated content but softening the blow by giving parents slightly more control. The new settings apply primarily to ChatGPT, although parents have the option to impose limits on what their kids see in Sora.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">And the new <a href=\"https:\/\/www.axios.com\/2025\/09\/29\/chatgpt-openai-parental-controls\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT controls<\/a> aren\u2019t exactly straightforward. Among other things, parents can now connect their children\u2019s accounts to theirs and add protections against sensitive content. If at any point OpenAI\u2019s tools determine there\u2019s a serious safety risk, a human moderator will review it and send a notification to the parents if necessary. Parents cannot, however, read transcripts of their child\u2019s conversations with ChatGPT, and the teen can disconnect their account from their parents at any time (OpenAI says the parent will get a notification).<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">We don\u2019t yet know how all this will play out in practice, and something is bound to be better than nothing. But is OpenAI doing everything it can to keep kids safe?<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup _1iohv3z2 xkp0cg9\">Even adults have problems regulating themselves when AI chatbots offer a cheerful, sycophantic friend available to chat every hour of the day.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Several experts I spoke to said no. In fact, OpenAI is ignoring the biggest problem of all: Chatbots that are programmed to act as companions, providing emotional support and advice to kids. 
Presumably, the new ChatGPT safety features could help avert future tragedies, but it\u2019s unclear how OpenAI will be able to identify when AI companions take a dark turn with young users, <a href=\"https:\/\/counterhate.com\/research\/fake-friend-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">as they tend to do<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cWe\u2019ve seen in a lot of cases for both teens and adults that falling into dependency on AI can be accidental,\u201d <a href=\"https:\/\/www.commonsensemedia.org\/bio\/robbie-torney\" rel=\"nofollow noopener\" target=\"_blank\">Robbie Torney<\/a>, Common Sense Media\u2019s senior director of AI programs, told me. \u201cA lot of people who have become dependent on AI didn\u2019t set out to be dependent on AI. They started using AI for homework help or for work, and slowly slipped into using it for other purposes.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Again, even adults have problems regulating themselves when AI chatbots offer a cheerful, sycophantic friend available to chat every hour of the day. You may have read recent reports of adults who developed increasingly intense relationships with AI chatbots <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" rel=\"nofollow noopener\" target=\"_blank\">before suffering psychotic breaks<\/a>. 
This kind of synthetic relationship represents a new frontier for technology as well as the human brain.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">It\u2019s frightening to think what could happen to kids, whose prefrontal cortices <a href=\"https:\/\/pursuit.unimelb.edu.au\/articles\/Teen-brains-are-wired-to-take-risks-but-that-can-be-a-good-thing\" rel=\"nofollow noopener\" target=\"_blank\">have yet to fully develop<\/a>, making them especially vulnerable. More than 70 percent of teens are using <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" rel=\"nofollow noopener\" target=\"_blank\">AI chatbots for companionship<\/a>, which presents dangers to them that are \u201creal, serious, and well documented,\u201d according to a recent Common Sense Media survey. That\u2019s why AI companion apps, like <a href=\"http:\/\/character.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Character.ai<\/a>, already have some restrictions by default for young users.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">There\u2019s also the broader problem that parental controls put the onus of protecting kids on parents, rather than on the tech companies themselves. It\u2019s usually up to parents to dig into their settings and flip the switches. And then it\u2019s still up to parents to keep track of how their kids are using these products, and in the case of ChatGPT, how dependent they\u2019re getting on the chatbot. 
The situation is confusing or laborious enough that most parents simply <a href=\"https:\/\/fosi.org\/research\/connected-and-protected-insights-from-fosis-2025-online-safety-survey\/\" rel=\"nofollow noopener\" target=\"_blank\">don\u2019t use parental controls<\/a>.<\/p>\n<p><strong>The real goal of the parental controls<\/strong><\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">It\u2019s worth pointing out that OpenAI rolled out these controls and the new app as a major AI safety bill sat on California Gov. Gavin Newsom\u2019s desk, awaiting his signature. <a href=\"https:\/\/www.nytimes.com\/2025\/09\/29\/technology\/california-ai-safety-law.html\" rel=\"nofollow noopener\" target=\"_blank\">Newsom signed the bill into law<\/a> the same day as the parental control announcement. The OpenAI news was also on the heels of Senate hearings on the negative impacts of AI chatbots, during which parents <a href=\"https:\/\/futurism.com\/parents-testifying-us-senate-ai-children\" rel=\"nofollow noopener\" target=\"_blank\">urged lawmakers to impose stronger regulations<\/a> on companies like OpenAI.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cThe real goal of these parental tools, whether it\u2019s ChatGPT or Instagram, is not actually to keep kids safe,\u201d said <a href=\"https:\/\/fairplayforkids.org\/about-us\/staff\" rel=\"nofollow noopener\" target=\"_blank\">Josh Golin<\/a>, the executive director of Fairplay, a nonprofit children\u2019s advocacy group. \u201cIt is to say that self-regulation is fine, please. 
You know, \u2018Don\u2019t regulate us, don\u2019t pass any laws.\u2019\u201d Golin went on to describe OpenAI\u2019s failure to do anything about the trend of children developing emotional relationships with ChatGPT as \u201cdisturbing.\u201d (I reached out to OpenAI for comment but didn\u2019t get a response.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">One way around tasking parents with managing all of these settings would be for OpenAI to have safety guardrails on by default. And the company says it\u2019s working on something that does a version of that. In the future, it says, after a certain amount of input, ChatGPT will be able to determine the age of a user and add safety features. For now, kids can access ChatGPT by typing in their birthday \u2014 or making one up \u2014 whenever they create an account.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">You can try to interpret OpenAI\u2019s strategy here. Whether it\u2019s trying to push back against regulation or not, parental controls introduce some friction into teens\u2019 use of ChatGPT. They\u2019re a form of content moderation, one that also impacts teen users\u2019 privacy. The company would also, presumably, like these teens to keep using ChatGPT and Sora when they become adults, so it doesn\u2019t want to degrade the experience too much. Allowing teens to do more on these apps rather than less is good for business, to a point.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup _1iohv3z2 xkp0cg9\">\u201cThere isn\u2019t a parental control that\u2019s going to make something completely safe.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">This all leaves parents with a difficult situation. 
They need to know their kid is using ChatGPT, for starters, and then figure out which settings will be enough to keep their kids safer but not so strict that the kid just creates a burner account pretending to be an adult. There\u2019s seemingly no way to stop kids from developing an emotional attachment to these chatbots, so parents will just have to talk to their kids and hope for the best. Then there\u2019s whatever awaits with the Sora app, which looks designed to churn out high-quality AI slop and get kids addicted to yet another endless feed.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cThere isn\u2019t a parental control that\u2019s going to make something completely safe,\u201d said Leslie Tyler, director of parent safety at Pinwheel, a company that makes parental control software. \u201cParents can\u2019t outsource it. Parents still have to be involved.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In a way, this moment represents a second chance for the tech industry and for policymakers. <a href=\"https:\/\/www.hhs.gov\/surgeongeneral\/reports-and-publications\/youth-mental-health\/social-media\/index.html\" rel=\"nofollow noopener\" target=\"_blank\">Two decades of unregulated social media apps<\/a> have cooked all of our brains, and there\u2019s growing evidence that social media contributed to a mental health crisis in young people. 
Companies like <a href=\"https:\/\/www.wsj.com\/tech\/personal-tech\/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739?gaa_at=eafs&amp;gaa_n=ASWzDAgaU2ZTrKSSRF2MOsWMO-cimKgJ3N7SBh6iYw8Nu3cZ2Zc_BbHAj9su3I2h_a8%3D&amp;gaa_ts=68dd6c2e&amp;gaa_sig=y7wfL_roimQc0vlu4sdgbkZJPeS_dQxG8Cn6Z4dK4rNCp3KQe43SW1wUy5VoxkMZU81A35yqKC6l-zMpcFsBEw%3D%3D\" rel=\"nofollow noopener\" target=\"_blank\">Meta<\/a> and <a href=\"https:\/\/www.npr.org\/2024\/10\/11\/g-s1-27676\/tiktok-redacted-documents-in-teen-safety-lawsuit-revealed\" rel=\"nofollow noopener\" target=\"_blank\">TikTok<\/a> knew their products were harming kids and, for years, did nothing about it. Meta now has Teen Accounts for Instagram, but recent research <a href=\"https:\/\/counterhate.com\/research\/fake-friend-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">suggests the safety features just don\u2019t work<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Whether too little or too late, OpenAI is taking its turn at keeping kids safe. Again, doing something is better than nothing.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">A version of this story was also published in the User Friendly newsletter. <a href=\"https:\/\/www.vox.com\/pages\/user-friendly-tech-newsletter-signup\" rel=\"nofollow noopener\" target=\"_blank\"><strong>Sign up here<\/strong><\/a> so you don\u2019t miss the next one!<\/p>\n","protected":false},"excerpt":{"rendered":"The number of kids getting hurt by AI-powered chatbots is hard to know, but it\u2019s not zero. 
Yet,&hellip;\n","protected":false},"author":2,"featured_media":99234,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[261],"tags":[291,289,290,1056,18,29602,9274,19,3536,17,3255,63229,82,63230],"class_list":{"0":"post-99233","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-big-tech","12":"tag-eire","13":"tag-emerging-tech","14":"tag-even-better","15":"tag-ie","16":"tag-innovation","17":"tag-ireland","18":"tag-life","19":"tag-tech-policy","20":"tag-technology","21":"tag-technology-media"},"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/99233","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=99233"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/99233\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/99234"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=99233"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=99233"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=99233"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}