{"id":20248,"date":"2026-04-28T15:42:09","date_gmt":"2026-04-28T15:42:09","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/20248\/"},"modified":"2026-04-28T15:42:09","modified_gmt":"2026-04-28T15:42:09","slug":"google-deepmind-paper-argues-llms-will-never-gain-consciousness-2","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/20248\/","title":{"rendered":"Google DeepMind Paper Argues LLMs Will Never Gain Consciousness"},"content":{"rendered":"<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">Corporate AI promises clash with internal research when <a href=\"https:\/\/deepmind.google\/research\/publications\/231971\/\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:Google DeepMind;elm:context_link;itc:0;sec:content-canvas\" data-yga=\"{&quot;yLinkElement&quot;:&quot;context_link&quot;,&quot;yModuleName&quot;:&quot;content-canvas&quot;,&quot;yLinkText&quot;:&quot;Google DeepMind&quot;}\" class=\"link \">Google DeepMind<\/a> publishes a paper arguing consciousness is impossible for LLMs\u2014directly contradicting CEO Demis Hassabis\u2019s claims about imminent artificial general intelligence.<\/p>\n<p>The Abstraction Fallacy<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">DeepMind scientist Alexander Lerchner argues <a href=\"https:\/\/www.gadgetreview.com\/ai-powered-websites-you-didnt-know-can-supercharge-your-productivity\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:AI systems;elm:context_link;itc:0;sec:content-canvas\" data-yga=\"{&quot;yLinkElement&quot;:&quot;context_link&quot;,&quot;yModuleName&quot;:&quot;content-canvas&quot;,&quot;yLinkText&quot;:&quot;AI systems&quot;}\" class=\"link \">AI systems<\/a> can only simulate consciousness, never achieve it.<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">Lerchner\u2019s March 2026 paper challenges the 
tech industry\u2019s core assumption: that sufficiently complex computation equals consciousness. His \u201cabstraction fallacy\u201d concept cuts through the hype\u2014just because AI systems manipulate language and symbols convincingly doesn\u2019t mean they experience anything internally. Think of it like a perfect celebrity impersonator versus the actual celebrity. The performance might fool you, but there\u2019s no real celebrity behind the act.<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">The key mechanism he identifies is \u201cmapmaker dependency.\u201d Every AI system requires humans to organize messy reality into categories the machine can process. Those armies of workers labeling training images? They\u2019re creating the meaning that LLMs appear to generate independently.<\/p>\n<p>The Body Problem<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">Consciousness requires physical motivation that digital systems fundamentally lack.<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">According to Lerchner, consciousness demands embodiment with intrinsic drives rooted in biological necessity. 
As evolutionary systems biologist <a href=\"https:\/\/www.404media.co\/google-deepmind-paper-argues-llms-will-never-be-conscious\/\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:Johannes J\u00e4ger;elm:context_link;itc:0;sec:content-canvas\" data-yga=\"{&quot;yLinkElement&quot;:&quot;context_link&quot;,&quot;yModuleName&quot;:&quot;content-canvas&quot;,&quot;yLinkText&quot;:&quot;Johannes J\u00e4ger&quot;}\" class=\"link \">Johannes J\u00e4ger<\/a> puts it: \u201cYou have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that.\u201d<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\"><a href=\"https:\/\/tech.yahoo.com\/articles\/man-uses-chatgpt-design-cancer-142858350.html\" data-ylk=\"slk:LLMs;elm:context_link;itc:0;sec:content-canvas;outcm:mb_qualified_link;_E:mb_qualified_link;ct:story;\" data-yga=\"{&quot;yLinkElement&quot;:&quot;context_link&quot;,&quot;yModuleName&quot;:&quot;content-canvas&quot;,&quot;yLinkText&quot;:&quot;LLMs&quot;}\" class=\"link  yahoo-link\" rel=\"nofollow noopener\" target=\"_blank\">LLMs<\/a> exist as \u201cpatterns on a hard drive\u201d that activate only when prompted, lacking any internal motivation or meaning beyond human-defined tasks. 
This distinction between simulation and instantiation\u2014like an <a href=\"https:\/\/manlius.substack.com\/p\/can-ai-simulate-consciousness-a-study\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:artificial heart;elm:context_link;itc:0;sec:content-canvas\" data-yga=\"{&quot;yLinkElement&quot;:&quot;context_link&quot;,&quot;yModuleName&quot;:&quot;content-canvas&quot;,&quot;yLinkText&quot;:&quot;artificial heart&quot;}\" class=\"link \">artificial heart<\/a> pumping blood versus performing actual metabolic functions\u2014suggests AGI without consciousness remains merely a sophisticated tool.<\/p>\n<p>Academic D\u00e9j\u00e0 Vu<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">Philosophy professors note these arguments aren\u2019t exactly breaking new ground.<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">Leading consciousness researchers acknowledge Lerchner\u2019s rigor while emphasizing the familiar territory. <a href=\"https:\/\/www.404media.co\/google-deepmind-paper-argues-llms-will-never-be-conscious\/\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:Mark Bishop;elm:context_link;itc:0;sec:content-canvas\" data-yga=\"{&quot;yLinkElement&quot;:&quot;context_link&quot;,&quot;yModuleName&quot;:&quot;content-canvas&quot;,&quot;yLinkText&quot;:&quot;Mark Bishop&quot;}\" class=\"link \">Mark Bishop<\/a> from Goldsmiths, University of London supports \u201c99 percent\u201d of the arguments but notes \u201call these arguments have been presented years and years ago.\u201d The surprise isn\u2019t the conclusion\u2014it\u2019s that Google permitted publication contradicting its own AGI marketing narrative.<\/p>\n<p class=\"col-body mb-4 leading-7 text-[18px] md:leading-8 break-words min-w-0 charcoal-color\">This creates a credibility paradox for anyone evaluating AI company claims. 
When internal researchers publish conclusions undermining corporate AGI promises, it reveals the gap between <a href=\"https:\/\/www.gadgetreview.com\/chatgpts-mysterious-name-block\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:marketing narratives;elm:context_link;itc:0;sec:content-canvas\" data-yga=\"{&quot;yLinkElement&quot;:&quot;context_link&quot;,&quot;yModuleName&quot;:&quot;content-canvas&quot;,&quot;yLinkText&quot;:&quot;marketing narratives&quot;}\" class=\"link \">marketing narratives<\/a> and scientific findings. You\u2019re left questioning whether these mixed signals reflect genuine uncertainty or strategic positioning in the consciousness debate.<\/p>\n","protected":false},"excerpt":{"rendered":"Corporate AI promises clash with internal research when Google DeepMind publishes a paper arguing consciousness is impossible 
for&hellip;\n","protected":false},"author":2,"featured_media":20249,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[6744,8449,3013,14302,14300,7543,14301,2225,14303],"class_list":{"0":"post-20248","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agi","8":"tag-agi","9":"tag-alexander-lerchner","10":"tag-artificial-general-intelligence","11":"tag-celebrity-impersonator","12":"tag-consciousness","13":"tag-google-deepmind","14":"tag-internal-research","15":"tag-llms","16":"tag-motivation"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/20248","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=20248"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/20248\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/20249"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=20248"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=20248"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=20248"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}