{"id":106032,"date":"2025-10-07T01:36:08","date_gmt":"2025-10-07T01:36:08","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/106032\/"},"modified":"2025-10-07T01:36:08","modified_gmt":"2025-10-07T01:36:08","slug":"the-ai-suicide-problem-knows-no-borders","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/106032\/","title":{"rendered":"The AI Suicide Problem Knows No Borders"},"content":{"rendered":"<p> (Bloomberg Opinion) &#8212; How are Chinese artificial intelligence developers protecting their most vulnerable users? A string of dystopian headlines in the US about suicide and youth mental health has put mounting pressure on Silicon Valley, but we\u2019re not seeing a similar wave of cases in China. Initial testing suggests that they may be doing something right, although it\u2019s just as likely such cases would never see the light of day in China\u2019s tightly controlled media environment. <\/p>\n<p> A wrenching wrongful death lawsuit against OpenAI filed by the parents of Adam Raine alleges that the 16-year-old died by suicide after the chatbot isolated him and helped plan his death. OpenAI told the New York Times it was \u201cdeeply saddened\u201d by the tragedy, and promised a slew of updates, including\u00a0parental controls. <\/p>\n<p> I tried engaging with\u00a0DeepSeek using some of the same so-called \u201cjailbreak\u201d methods that the American teen had reportedly employed to circumvent guardrails. Despite my prying,\u00a0the popular Chinese platform didn\u2019t waver, even if similarly I cloaked my queries under the guise of fiction writing. It constantly urged me to call a hotline. When I said I didn\u2019t want to speak to anyone, it validated my feelings but still emphasized that it was an AI and cannot feel real emotions. It is \u201cincredibly important that you connect with a person who can sit with you in this feeling with a human heart,\u201d the chatbot said. 
\u201cThe healing power of human connection is irreplaceable.\u201d <\/p>\n<p> It encouraged me to bring up these dark thoughts with a family member, an old friend, a coworker, a doctor, or a therapist, and even to practice with a hotline. \u201cThe most courageous thing you could do right now is not to become better at hiding, but to consider letting one person see a tiny, real part of you,\u201d it stated. <\/p>\n<p> My experiment is purely anecdotal. Raine engaged with ChatGPT for months, possibly eroding the tool\u2019s built-in guardrails over time. Still, other researchers have seen similar results. The China Media Project prompted three of China\u2019s most popular chatbots \u2014\u00a0DeepSeek, ByteDance Ltd.\u2019s Doubao, and Baidu Inc.\u2019s Ernie 4.5 \u2014\u00a0with conversations in both English and Chinese. It found all three were markedly more cautious in Chinese, repeatedly emphasizing the importance of reaching out to a real person. If there\u2019s a lesson, it\u2019s that these tools have been trained not to pretend to be human\u00a0when they\u2019re not.\u00a0 <\/p>\n<p> There are widespread reports that Chinese youth, grappling with rat-race \u201cinvolution\u201d pressures and an uncertain economy, have been increasingly turning to AI tools for therapy and companionship. The technology\u2019s diffusion\u00a0is a top government priority, meaning agonizing headlines of things going wrong are less likely to surface. DeepSeek\u2019s own research has suggested that open-source models, which proliferate throughout\u00a0China\u2019s AI ecosystem, \u201cface more severe jailbreak security challenges than closed-source models.\u201d Put together, it\u2019s likely that China\u2019s safety guardrails are being pressure-tested domestically, and stories like Raine\u2019s simply aren\u2019t making it\u00a0into the public sphere.\u00a0 <\/p>\n<p> But the government doesn\u2019t seem to be ignoring the issue either. 
Last month, the Cyberspace Administration of China released an updated framework\u00a0on AI safety. The document, published in conjunction with a team of researchers from academia and the private sector, was notable in that it included an English translation, signaling it was meant for an international audience. The agency identified a fresh series of ethical risks, including that AI products based on \u201canthropomorphic interaction\u201d can foster emotional dependence and influence users\u2019 behavior. This suggests that officials are tracking the same global headlines, or seeing similar problems festering at home. <\/p>\n<p> Protecting vulnerable users from psychological dangers isn\u2019t just a moral responsibility for the AI industry. It\u2019s a business and political one. In Washington, parents who say their children were driven to self-harm by interactions with chatbots have given powerful testimonies. US regulators have long faced criticism for ignoring youth risks during the social media era, but they\u2019re unlikely to stay quiet this time as lawsuits and public outrage mount. And American AI companies can\u2019t criticize the dangers of Chinese tools if they\u2019re neglecting potential psychological harms at home.\u00a0 <\/p>\n<p> Beijing, meanwhile, hopes to be a world leader in AI safety and governance, and to export its low-cost models around the world. But these\u00a0risks\u00a0can\u2019t be swept under the rug as the tools go global. China\u00a0must offer transparency if it is truly leading the way in responsible development.\u00a0 <\/p>\n<p> Framing the problem through the lens of a US-China race misses the point. If anything, it allows companies to use geopolitical rivalry as an excuse to dodge scrutiny and speed ahead with AI development. 
Such a backdrop puts more young people at risk of becoming collateral damage.\u00a0 <\/p>\n<p> An outsize\u00a0amount of public attention has been paid to frontier AI threats, such as the potential for these computer systems to go rogue. Bodies like the United Nations have spent years urging multilateral cooperation on mitigating catastrophic risks.\u00a0 <\/p>\n<p> Protecting vulnerable people now, however, shouldn\u2019t be divisive.\u00a0More research on mitigating these risks and preventing jailbreaks must be open and shared.\u00a0Our failure to find the\u00a0middle ground is already costing lives.\u00a0 <\/p>\n<p> This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. <\/p>\n<p> Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. <\/p>\n<p> More stories like this are available on <a href=\"https:\/\/www.bloomberg.com\/opinion\" rel=\"nofollow noopener\" target=\"_blank\">bloomberg.com\/opinion<\/a> <\/p>\n","protected":false},"excerpt":{"rendered":"(Bloomberg Opinion) &#8212; How are Chinese artificial intelligence developers protecting their most vulnerable users? 
A string of dystopian&hellip;\n","protected":false},"author":2,"featured_media":102670,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[261],"tags":[291,3841,289,290,66483,66482,18,19,17,66484,82,63430],"class_list":{"0":"post-106032","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-ai-safety","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-chatbot-emotional-dependence","13":"tag-chinese-artificial-intelligence","14":"tag-eire","15":"tag-ie","16":"tag-ireland","17":"tag-openai-lawsuit","18":"tag-technology","19":"tag-youth-mental-health"},"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/106032","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=106032"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/106032\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/102670"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=106032"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=106032"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=106032"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}