{"id":43946,"date":"2025-09-04T21:13:12","date_gmt":"2025-09-04T21:13:12","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/43946\/"},"modified":"2025-09-04T21:13:12","modified_gmt":"2025-09-04T21:13:12","slug":"should-ai-get-legal-rights","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/43946\/","title":{"rendered":"Should AI Get Legal Rights?"},"content":{"rendered":"<p class=\"paywall\">In <a data-offer-url=\"https:\/\/eleosai.org\/papers\/20250127_Key_Concepts_and_Current_Views_on_AI_Welfare.pdf\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/eleosai.org\/papers\/20250127_Key_Concepts_and_Current_Views_on_AI_Welfare.pdf&quot;}\" href=\"https:\/\/eleosai.org\/papers\/20250127_Key_Concepts_and_Current_Views_on_AI_Welfare.pdf\" rel=\"nofollow noopener\" target=\"_blank\">one paper<\/a> Eleos AI published, the nonprofit argues for evaluating AI consciousness using a \u201ccomputational functionalism\u201d approach. A similar idea was once championed by none other than Putnam, though he <a data-offer-url=\"https:\/\/www.nytimes.com\/2016\/03\/18\/arts\/hilary-putnam-giant-of-modern-philosophy-dies-at-89.html#:~:text=Hilary%20Putnam%2C%20a%20Harvard%20philosopher,willingness%20to%20change%20his%20mind.\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.nytimes.com\/2016\/03\/18\/arts\/hilary-putnam-giant-of-modern-philosophy-dies-at-89.html#:~:text=Hilary%20Putnam%2C%20a%20Harvard%20philosopher,willingness%20to%20change%20his%20mind.&quot;}\" href=\"https:\/\/www.nytimes.com\/2016\/03\/18\/arts\/hilary-putnam-giant-of-modern-philosophy-dies-at-89.html#:~:text=Hilary%20Putnam%2C%20a%20Harvard%20philosopher,willingness%20to%20change%20his%20mind.\" rel=\"nofollow noopener\" target=\"_blank\">criticized<\/a> it later in his career. 
The <a href=\"https:\/\/plato.stanford.edu\/entries\/computational-mind\/\" rel=\"nofollow noopener\" target=\"_blank\">theory suggests<\/a> that human minds can be thought of as specific kinds of computational systems. From there, you can then figure out if other computational systems, such as a chatbot, have indicators of sentience similar to those of a human.<\/p>\n<p class=\"paywall\">Eleos AI said in the paper that \u201ca major challenge in applying\u201d this approach \u201cis that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems.\u201d<\/p>\n<p class=\"paywall\">Model welfare is, of course, a nascent and still evolving field. It\u2019s got plenty of critics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently <a data-offer-url=\"https:\/\/mustafa-suleyman.ai\/seemingly-conscious-ai-is-coming\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/mustafa-suleyman.ai\/seemingly-conscious-ai-is-coming&quot;}\" href=\"https:\/\/mustafa-suleyman.ai\/seemingly-conscious-ai-is-coming\" rel=\"nofollow noopener\" target=\"_blank\">published a blog post<\/a> about \u201cseemingly conscious AI.\u201d<\/p>\n<p class=\"paywall\">\u201cThis is both premature, and frankly dangerous,\u201d Suleyman wrote, referring generally to the field of model welfare research. \u201cAll of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.\u201d<\/p>\n<p class=\"paywall\">Suleyman wrote that \u201cthere is zero evidence\u201d today that conscious AI exists. 
He included a link to a <a data-offer-url=\"https:\/\/arxiv.org\/pdf\/2308.08708\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/arxiv.org\/pdf\/2308.08708&quot;}\" href=\"https:\/\/arxiv.org\/pdf\/2308.08708\" rel=\"nofollow noopener\" target=\"_blank\">paper<\/a> that Long coauthored in 2023 that proposed a new framework for evaluating whether an AI system has \u201cindicator properties\u201d of consciousness. (Suleyman did not respond to a request for comment from WIRED.)<\/p>\n<p class=\"paywall\">I chatted with Long and Campbell shortly after Suleyman published his blog. They told me that, while they agreed with much of what he said, they don\u2019t believe model welfare research should cease to exist. Rather, they argue that the harms Suleyman referenced are the exact reasons why they want to study the topic in the first place.<\/p>\n<p class=\"paywall\">\u201cWhen you have a big, confusing problem or question, the one way to guarantee you&#8217;re not going to solve it is to throw your hands up and be like \u2018Oh wow, this is too complicated,\u2019\u201d Campbell says. \u201cI think we should at least try.\u201d<\/p>\n<p>Testing Consciousness<\/p>\n<p class=\"paywall\">Model welfare researchers primarily concern themselves with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. To be clear, neither Long nor Campbell think that AI is conscious today, and they also aren\u2019t sure it ever will be. 
But they want to develop tests that would allow us to prove it.<\/p>\n<p class=\"paywall\">\u201cThe delusions are from people who are concerned with the actual question, \u2018Is this AI conscious?\u2019 and having a scientific framework for thinking about that, I think, is just robustly good,\u201d Long says.<\/p>\n<p class=\"paywall\">But in a world where AI research can be packaged into sensational headlines and social media videos, heady philosophical questions and mind-bending experiments can easily be misconstrued. Take what happened when Anthropic published a <a data-offer-url=\"https:\/\/www-cdn.anthropic.com\/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www-cdn.anthropic.com\/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf&quot;}\" href=\"https:\/\/www-cdn.anthropic.com\/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf\" rel=\"nofollow noopener\" target=\"_blank\">safety report<\/a> that showed Claude Opus 4 may take \u201charmful actions\u201d in extreme circumstances, like blackmailing a fictional engineer to prevent it from being shut off.<\/p>\n
approach.&hellip;\n","protected":false},"author":2,"featured_media":43947,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[261],"tags":[291,6006,289,290,33096,18,19,17,33095,307,4670,82],"class_list":{"0":"post-43946","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-anthropic","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-consciousness","13":"tag-eire","14":"tag-ie","15":"tag-ireland","16":"tag-model-behavior","17":"tag-openai","18":"tag-silicon-valley","19":"tag-technology"},"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/43946","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=43946"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/43946\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/43947"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=43946"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=43946"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=43946"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}