{"id":4792,"date":"2025-08-17T11:36:11","date_gmt":"2025-08-17T11:36:11","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/4792\/"},"modified":"2025-08-17T11:36:11","modified_gmt":"2025-08-17T11:36:11","slug":"anthropic-gives-claude-ai-power-to-end-conversations-as-part-of-model-welfare-push","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/4792\/","title":{"rendered":"Anthropic gives Claude AI power to end conversations as part of &#8216;model welfare&#8217; push"},"content":{"rendered":"<p>In the fast-moving world of artificial intelligence, there is almost always some new feature or model being launched every single day. But one feature that no one saw coming is from Anthropic, the maker of the popular AI chatbot Claude. The AI startup is now giving some of its models the ability to end conversations on Claude as part of its exploratory work on \u201cmodel welfare.\u201d<\/p>\n<p>\u201cThis is an experimental feature, intended only for use by <a class=\"backlink\" target=\"_blank\" href=\"https:\/\/www.livemint.com\/gadgets-and-appliances\/anthropic-upgrades-claude-with-a-memory-recall-feature-to-enhance-workflow-and-creativity-11755084840347.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\" data-vars-anchor-text=\"Claude\" rel=\"nofollow noopener\">Claude<\/a> as a last resort in extreme cases of persistently harmful and abusive conversations,\u201d the company states.<\/p>\n<p>Anthropic says that the vast majority of users will never experience Claude ending a conversation on its own.<\/p>\n<p>Moreover, the company adds that Claude\u2019s conversation-ending ability is a last resort when multiple attempts at redirection have failed and \u201chope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.\u201d<\/p>\n<p>\u201cThe scenarios where this will occur are extreme edge cases\u2014the vast majority of users will not notice or be affected by this feature in any normal product use, even when discussing highly controversial issues with Claude,\u201d Anthropic adds.<\/p>\n<p>Why is Anthropic adding conversation-ending ability to Claude?<\/p>\n<p><a class=\"backlink\" target=\"_blank\" href=\"https:\/\/www.livemint.com\/gadgets-and-appliances\/anthropic-upgrades-claude-with-a-memory-recall-feature-to-enhance-workflow-and-creativity-11755084840347.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\" data-vars-anchor-text=\"Anthropic\" rel=\"nofollow noopener\">Anthropic<\/a> says that the moral status of Claude or other large language models (LLMs) remains highly uncertain, meaning there is no clarity yet on whether these AI systems could ever feel anything like pain, distress, or well-being.<\/p>\n<p>However, the AI startup is taking this possibility seriously and believes it\u2019s important to investigate. In the meantime, the company is also looking at \u201clow-cost interventions\u201d which don\u2019t cost much but could potentially reduce harm to AI systems\u2014allowing the LLM to end the conversation is one such method.<\/p>\n<p>Anthropic says it tested Claude Opus 4 before its release, and part of that testing included a \u201cmodel welfare assessment.\u201d The company found that Claude consistently rejected requests where there was a possibility of harm.<\/p>\n<p>When users kept pushing for dangerous or abusive content even after refusals, the AI model\u2019s responses started to appear stressed or uncomfortable. 
Some of the requests where Claude showed signs of "distress" included generating sexual content involving minors or attempts to solicit information that could enable large-scale violence or acts of terror.