OpenAI launches Trusted Contact feature amid lawsuits over ChatGPT self-harm conversations

OpenAI on Thursday announced a new feature called "Trusted Contact." It is an optional safety feature designed to help in situations where a user discusses self-harm during conversations with ChatGPT. If the system detects signs of possible self-harm, it can alert a trusted person chosen by the user, such as a family member or friend, so they can check in and offer support. The feature arrives as OpenAI faces several lawsuits from families who claim their loved ones were influenced by conversations with ChatGPT before dying by suicide.

According to these lawsuits, the chatbot sometimes responded in ways that appeared to encourage harmful thoughts or failed to stop dangerous conversations. In some cases, the families claim, the chatbot even discussed or helped plan self-harm methods. The courts have not yet determined whether OpenAI is legally responsible.

How the Trusted Contact feature works

If ChatGPT's monitoring systems detect that a user may be discussing self-harm in a serious or dangerous way, the user is first informed that their chosen Trusted Contact could be notified. A specially trained human review team then checks the conversation to decide whether the situation appears genuinely concerning.

If the reviewers believe there is a serious safety risk, ChatGPT can send a short alert to the user's trusted contact via email, text message, or the ChatGPT app. OpenAI said the alert will state only that the user may be going through a mental health crisis or discussing self-harm in a concerning way; it will not share the user's private chats or conversation details.

The notification will also include guidance on how the trusted person can reach out safely and sensitively. OpenAI said it aims to complete these reviews and send any needed notifications within one hour. "While these serious safety situations are rare, when they do arise, our systems are designed to support timely review and response," OpenAI said.

Expansion of earlier safety systems

The Trusted Contact feature builds on an earlier safety system that already allowed parents or guardians to receive alerts if a linked teenage user showed signs of serious emotional distress. OpenAI is now expanding that idea so adult users over 18 can also choose a trusted person, such as a friend, family member, or caregiver, to receive safety alerts if they appear to be in crisis.

"We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress," OpenAI said.

Published on: May 8, 2026, 10:49 IST