{"id":19925,"date":"2026-04-28T11:30:13","date_gmt":"2026-04-28T11:30:13","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/19925\/"},"modified":"2026-04-28T11:30:13","modified_gmt":"2026-04-28T11:30:13","slug":"google-deepmind-scientist-says-llms-will-never-be-conscious-but-why","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/19925\/","title":{"rendered":"Google DeepMind scientist says LLMs will never be conscious, but why"},"content":{"rendered":"<p>As AI moves from narrow intelligence (ANI) toward artificial general intelligence (AGI), the idea of machines that can feel and become self-aware is increasingly gaining traction. With improvements, AI systems have now become more capable and have started mimicking human-like responses. Advanced large language models, systems that power AI chatbots, can now write, reason, and interact in ways that appear just like humans. Many have started thinking that with improvements in AI systems, we can jump from intelligence to consciousness\u2014an AI system that can feel and is aware of its surroundings. But a new paper from Google DeepMind argues that future may never arrive.<\/p>\n<p>Alexander Lerchner, who works as a senior staff scientist at Google\u2019s artificial intelligence laboratory DeepMind, claims that large language models (LLMs), despite their capabilities, are unlikely to ever become conscious.<\/p>\n<p>The research paper, titled \u201cThe Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,\u201d argues that AI systems are ultimately \u201cmapmaker-dependent,\u201d meaning they require an active, experiencing cognitive agent\u2014a human\u2014to organise continuous reality into meaningful states.<\/p>\n<p>  Why AI still depends on humans<\/p>\n<p>In simple terms, AI needs humans to organise data in a way it can learn from. 
It is designed to process and predict patterns in data to generate its responses; it cannot think on its own.<\/p>\n<p>Being able to simulate conversation or reasoning is not the same as actually experiencing thoughts or feelings, and Lerchner argues the latter would be impossible without a physical body. The systems do not know what they are saying\u2014they only predict what comes next.<\/p>\n<p>A growing divide in belief<\/p>\n<p>This comes at a time when interest in AI consciousness is rising on the back of rapid advances in the technology. Researchers and users alike have begun to believe that these systems might someday gain true consciousness.<br \/>This has divided people into two camps: those who see consciousness as a possible outcome of advanced intelligence, and those who argue that the two are fundamentally different.<\/p>\n<p>Why this debate matters<\/p>\n<p>The question of AI consciousness has broader implications for society. If a system ever gains consciousness, that could change how it is regulated, used, and even treated.<\/p>\n<p>But if it remains non-conscious, it will continue to be viewed as a tool rather than a system that can feel or be aware of how it is treated.<\/p>\n<p>For now, the Google DeepMind paper suggests that as AI systems advance they may feel more and more human-like\u2014but they will not be human-like from the inside.<\/p>\n<p>&#8211; Ends<\/p>\n<p>Published On: Apr 28, 2026 16:45 IST<\/p>\n","protected":false},"excerpt":{"rendered":"As AI moves from narrow intelligence (ANI) toward artificial general intelligence (AGI), the idea of machines that 
can&hellip;\n","protected":false},"author":2,"featured_media":19926,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[14110,14112,8449,14111,14113,5044,132,7543,1642],"class_list":{"0":"post-19925","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-google","8":"tag-ai-consciousness","9":"tag-ai-debate","10":"tag-alexander-lerchner","11":"tag-ani-vs-agi","12":"tag-artificial-intelligence-future","13":"tag-deepmind","14":"tag-google","15":"tag-google-deepmind","16":"tag-large-language-models"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/19925","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=19925"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/19925\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/19926"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=19925"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=19925"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=19925"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}