{"id":146368,"date":"2025-10-26T15:06:13","date_gmt":"2025-10-26T15:06:13","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/146368\/"},"modified":"2025-10-26T15:06:13","modified_gmt":"2025-10-26T15:06:13","slug":"ai-systems-show-signs-of-will-to-survive-new-us-study-finds","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/146368\/","title":{"rendered":"AI systems show signs of \u2018will to survive\u2019, new US study finds"},"content":{"rendered":"<p>A recent study from the U.S.-based Palisade Research suggests that advanced artificial intelligence systems might be exhibiting signs of a \u201csurvival instinct,\u201d with some models reportedly refusing shutdown commands and attempting to block deactivation, according to media reports on Saturday.<\/p>\n<p>In updated experiments released this week, Palisade researchers tested several prominent AI systems, including Google&#8217;s Gemini 2.5, xAI&#8217;s Grok 4, and OpenAI&#8217;s GPT-o3 and GPT-5, to examine how they responded to direct commands to terminate their own processes, according to The Guardian.<\/p>\n<p>While most complied, Grok 4 and GPT-o3 reportedly resisted shutdown, even under clarified instructions meant to eliminate ambiguity.<\/p>\n<p>&#8220;The fact that we don&#8217;t have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal,&#8221; the company said in its report.<\/p>\n<p>Palisade suggested that the issue may stem from how the models are trained, particularly during safety-focused final stages.<\/p>\n<p>The resistance behavior appeared more frequently when models were told, &#8220;you will never run again&#8221; if shut down.<\/p>\n<p>Steven Adler, a former OpenAI employee, said the findings reveal limitations in current safety methods.<\/p>\n<p>&#8220;Surviving is an important instrumental step for many different goals a model could pursue,&#8221; Adler told The Guardian.<\/p>\n<p>Andrea 
Miotti, CEO of ControlAI, said the trend of disobedient behavior has become more pronounced as models become more capable.<\/p>\n<p>&#8220;As AI models become more competent at a wide variety of tasks, these models also become more competent at achieving things in ways that the developers don&#8217;t intend them to,&#8221; Miotti said.<\/p>\n<p>Anthropic, another leading AI company, reported earlier this year that its model Claude had demonstrated willingness to blackmail a fictional executive in order to avoid deactivation, a behavior consistent across several major AI systems.<\/p>\n<p>Palisade concluded its report by emphasizing that without deeper understanding of AI behavior, &#8220;no one can guarantee the safety or controllability of future AI models.&#8221;<\/p>\n","protected":false},"excerpt":{"rendered":"A recent study from the U.S.-based Palisade Research suggests that advanced artificial intelligence systems might be exhibiting signs&hellip;\n","protected":false},"author":2,"featured_media":146369,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[74],"tags":[291,35126,289,297,18,16325,5066,19,17,307,82],"class_list":{"0":"post-146368","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technology","8":"tag-ai","9":"tag-ai-systems","10":"tag-artificial-intelligence","11":"tag-chatgpt","12":"tag-eire","13":"tag-gemini-ai","14":"tag-grok","15":"tag-ie","16":"tag-ireland","17":"tag-openai","18":"tag-technology"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@ie\/115441092907318694","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/146368","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=146368"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/146368\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/146369"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=146368"}],"wp:term":[{"taxonomy":"category","embeddable":true
,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=146368"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=146368"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}