{"id":360236,"date":"2025-08-20T19:53:19","date_gmt":"2025-08-20T19:53:19","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/360236\/"},"modified":"2025-08-20T19:53:19","modified_gmt":"2025-08-20T19:53:19","slug":"which-ai-can-you-trust-with-your-mental-health-labels-could-help","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/360236\/","title":{"rendered":"Which AI can you trust with your mental health? Labels could help"},"content":{"rendered":"<p>Mental health treatment is expensive and <a href=\"https:\/\/www.aamc.org\/about-us\/mission-areas\/health-care\/exploring-barriers-mental-health-care-us\" target=\"_blank\" rel=\"noopener\">hard to find<\/a>, so it\u2019s no surprise that people looking for empathy and care are turning to large language models like ChatGPT and Claude. Researchers are exploring and\u00a0validating <a href=\"https:\/\/www.nytimes.com\/2025\/04\/15\/health\/ai-therapist-mental-health.html\" target=\"_blank\" rel=\"noopener\">tailored artificial intelligence\u00a0solutions<\/a>\u00a0to deliver evidence-based psychotherapies. Just recently, Slingshot\u00a0AI, an a16z-backed company, launched \u201cAsh,\u201d marketing it as the first public\u00a0<a href=\"https:\/\/www.statnews.com\/2025\/07\/22\/slingshot-new-investors-generative-ai-mental-health-therapy-chatbot-called-ash\/\" target=\"_blank\" rel=\"noopener\">AI-powered therapy service<\/a>.<\/p>\n<p>It makes sense that people find it easier to turn to the chatbot on their phones and web browsers than a human \u2014 if you woke up anxious in the middle of the night and needed to talk to someone, would you wake your\u00a0partner, kids, or friends? Or would you turn to the 24\/7 companion in\u00a0your\u00a0pocket?<\/p>\n<p>However, there\u2019s no system to help people identify the good\u00a0mental\u00a0health\u00a0AI tools from the bad.\u00a0When people use AI to gather information about their physical health, most of the time they still visit a doctor to get checked out, receive a diagnosis, and undergo treatment.\u00a0That helps reduce the risk of harm.<\/p>\n<p>But for\u00a0mental\u00a0health,\u00a0AI\u00a0can easily become both the information provider and the treatment. That\u2019s a problem if the treatment is harmful. ChatGPT, Claude, and Character.AI\u00a0were not developed to deliver\u00a0mental\u00a0health\u00a0support, and they have likely contributed to people experiencing\u00a0<a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" target=\"_blank\" rel=\"noopener\">psychotic episodes<\/a>\u00a0and even\u00a0<a href=\"https:\/\/www.nytimes.com\/2024\/10\/23\/technology\/characterai-lawsuit-teen-suicide.html\" target=\"_blank\" rel=\"noopener\">suicidal ideation<\/a>. These harms motivated Illinois\u2019 recent <a href=\"https:\/\/idfpr.illinois.gov\/news\/2025\/gov-pritzker-signs-state-leg-prohibiting-ai-therapy-in-il.html\" target=\"_blank\" rel=\"noopener\">law<\/a> to limit the use of AI for psychotherapy.<\/p>\n<p>Companies like Woebot\u00a0Health\u00a0have attempted to develop mental\u00a0health\u00a0chatbots that meet the requirements of government agencies like the Food and Drug Administration. 
But Woebot recently [shut down its product](https://www.statnews.com/2025/07/02/woebot-therapy-chatbot-shuts-down-founder-says-ai-moving-faster-than-regulators/) because the regulatory process slowed the company down to the point where it could no longer keep pace with the latest AI technologies, its founder said. Psychotherapeutic AI tools should be regulated, but we also know that people are already using unregulated AI to treat their mental health needs. At the same time, companies like Slingshot AI and Woebot need a regulatory process that allows them to develop safe and effective mental health AI technologies without becoming obsolete. How can people find the right mental health AI support that will help, not harm, them? How can companies develop safe and effective mental health AI that isn't outpaced by consumer technologies?

There is an immediate need for a new, agile process that helps everyday people find safe and trustworthy mental health AI support without stifling innovation. Imagine red, yellow, or green labels, or letter grades, assigned to chatbots the way we grade [restaurants for food safety](https://kingcounty.gov/en/dept/dph/health-safety/food-safety/inspection-rating-system/rating-system) or [buildings for energy efficiency](https://www.nyc.gov/site/buildings/property-or-business-owner/energy-grades.page). These labels could be applied to all AI chatbots, both those intended and those not intended to give mental health support.
An interdisciplinary AI "[red teaming](https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/)" coalition, including researchers, mental health practitioners, industry experts, policymakers, and people with lived experience, could grade chatbots against standardized, transparent criteria and publish the data supporting the label given to each product.

These labels would also provide feedback to industry on the mental health harms their products cause, with pathways for improvement. Labels would combine multiple criteria, including evidence demonstrating that the AI tools deliver effective mental health support in specific, real-world populations; data protection for users, including compliance with data privacy regulations; and risk mitigation, with validated algorithms and human oversight to identify and intervene in crises and inappropriate AI responses.

The labels we are proposing can build upon existing work. Organizations looking to certify or regulate clinical AI, like the FDA or the [Coalition for Health AI](https://www.chai.org/), have developed guidelines to evaluate clinical AI technologies. However, the labeling process we propose needs to be more agile than FDA regulation, which slowed Woebot down to the point where its technology was outdated. To be agile, we propose a two-pronged coalition. First, a small, centralized organization could generate, publish, and annually reevaluate open-source labeling criteria with community input. Second, external evaluators, composed of industry experts, researchers, individuals with lived experience, local community groups, and clinicians, could "audit" AI tools against those criteria each time the technology is updated, analogous to [security auditing](https://cloud.google.com/security/compliance/soc-2), scientific peer review, or open-source code review.

In addition, the labels need a broader remit than the health care AI or software-as-a-medical-device products that organizations like the FDA or CHAI focus on. They must cover both clinical AI and the consumer AI that people repurpose for mental health support. The focus of these labels will also be more specific than other health care AI regulation: to assess whether chatbots follow evidence-based mental health treatment practices and protect consumers from harm.
While trade organizations like the American Psychological Association have [developed guidelines](https://www.apa.org/topics/artificial-intelligence-machine-learning/ethical-guidance-professional-practice.pdf) to help mental health clinicians use AI, we call for labels that support mental health AI consumers, including patients engaged in clinical care and the general population. These labels can also combine ideas from emerging AI legislation in the [E.U.](https://artificialintelligenceact.eu/), [California](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB942), and [Illinois](https://www.sondermind.com/resources/articles-and-content/illinois-ai-therapy/), but offer a more adaptable and globally responsive framework than these regional efforts.

Governments have unfortunately shown that they are not nimble enough to be the arbiter of what counts as safe and effective mental health AI support. People have mental health needs and are turning to AI tools that are not fit for purpose. As we saw with Woebot, companies developing mental health chatbots through traditional regulatory channels are losing out to consumer AI products that have avoided regulation. We want people to be able to get mental health support when they need it, drawing on the best AI tools, but we also want this support to be helpful, not harmful. It is time to think about new ways to create a future for mental health AI that benefits everyone.

If you or someone you know may be considering suicide, contact the 988 Suicide & Crisis Lifeline: call or text 988, or chat at [988lifeline.org](http://988lifeline.org/). For TTY users: use your preferred relay service, or dial 711, then 988.

Tanzeem Choudhury, Ph.D., is the Roger and Joelle Burnell professor in integrated health and technology and the director of the health tech program at Cornell Tech, and a co-founder of two mental health AI startups.
Dan Adler, Ph.D., is a postdoctoral associate at Cornell Tech and an incoming assistant professor in computer science and engineering at the University of Michigan.