{"id":22447,"date":"2026-04-30T02:44:09","date_gmt":"2026-04-30T02:44:09","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/22447\/"},"modified":"2026-04-30T02:44:09","modified_gmt":"2026-04-30T02:44:09","slug":"evolving-ai-may-arrive-before-agi-and-create-hard-to-control-risks","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/22447\/","title":{"rendered":"Evolving AI may arrive before AGI and create hard-to-control risks"},"content":{"rendered":"<p>            <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/evolving-ai-may-arrive.jpg\" alt=\"Evolving AI may arrive before AGI and create hard-to-control risks\" title=\"(Left) Humanoid robot Alter3 equipped with a Society of Mind architecture based on LLMs for communication and operation of weakly coupled internal modules (photo by Luc Steels). (Right) The robot expands its repertoire of physical behaviors by converting high-level descriptions into operable code. Each arrow represents an inference step by an LLM, starting from the output of image analysis (\u201cHand is not visible in the image\u201d) (A) to the formulation of a goal (B), transformation of the goal into detailed steps (C), and synthesis and execution of Python code (D) which is then executed. Credit: Proceedings of the National Academy of Sciences (2026). DOI: 10.1073\/pnas.2527700123\" width=\"800\" height=\"530\"\/><\/p>\n<p>                (Left) Humanoid robot Alter3 equipped with a Society of Mind architecture based on LLMs for communication and operation of weakly coupled internal modules (photo by Luc Steels). (Right) The robot expands its repertoire of physical behaviors by converting high-level descriptions into operable code. 
Each arrow represents an inference step by an LLM, starting from the output of image analysis (\u201cHand is not visible in the image\u201d) (A) to the formulation of a goal (B), transformation of the goal into detailed steps (C), and synthesis of Python code (D), which is then executed. Credit: Proceedings of the National Academy of Sciences (2026). DOI: 10.1073\/pnas.2527700123<\/p>\n<p>Evolutionary biology holds clues for the future of AI, argue researchers from the HUN-REN Centre for Ecological Research, E\u00f6tv\u00f6s Lor\u00e1nd University, and the Royal Flemish Academy of Belgium for Science and the Arts. In a new <a href=\"https:\/\/doi.org\/10.1073\/pnas.2527700123\" target=\"_blank\" rel=\"nofollow noopener\">Perspective published<\/a> April 20 in Proceedings of the National Academy of Sciences, the team warns that evolvable AI (eAI) systems that can undergo Darwinian evolution may soon emerge, and that they will generate special risks that can be understood, and mitigated, using insights from evolutionary biology.<\/p>\n<p>Evolution&#8217;s power and AI&#8217;s next step<\/p>\n<p>&#8220;The power of evolution is manifest in the history of biological evolution on Earth, which has created the cognitive capabilities of the human mind,&#8221; said E\u00f6rs Szathm\u00e1ry, last author of the study, professor of evolutionary biology at the HUN-REN Centre for Ecological Research and at E\u00f6tv\u00f6s Lor\u00e1nd University, Budapest, and Director of the Parmenides Center for the Conceptual Foundations of Science in P\u00f6cking.<\/p>\n<p>&#8220;We find it inevitable that the development of AI systems will eventually, and probably soon, tap into that power,&#8221; added Luc Steels, emeritus professor of AI at the University of Brussels (VUB) and member of the Royal Flemish 
Academy of Belgium for Science and the Arts, co-corresponding author of the paper.<\/p>\n<p>The study outlines the use of evolutionary concepts and components in current AI research and explains how further developments, particularly agentic AI, may soon give rise to AI systems that fulfill all criteria for genuine Darwinian evolution. Such systems may open a new epoch in AI development, passing hurdles that even current learning AI systems cannot easily negotiate.<\/p>\n<p>Why evolving AI could be dangerous<\/p>\n<p>However, &#8220;lessons from biological evolution teach us that evolving AI systems will be particularly hard to control,&#8221; said Viktor M\u00fcller, associate professor at E\u00f6tv\u00f6s Lor\u00e1nd University and first author of the study. The two evolutionary biologists, Szathm\u00e1ry and M\u00fcller, teamed up with robotics and AI expert Steels to give advance warning of the risks of eAI\u2014and to recommend possible measures to mitigate them.<\/p>\n<p>Using illustrative examples from biological and <a href=\"https:\/\/phys.org\/news\/2025-09-humans-ai-kind-evolutionary-individual.html?utm_source=embeddings&amp;utm_medium=related&amp;utm_campaign=internal\" rel=\"related nofollow noopener\" target=\"_blank\">artificial (in silico) evolution<\/a>, the study underlines the propensity of evolution to produce &#8220;selfish&#8221; actors, which in the case of eAI increases the risk of breaking &#8220;alignment&#8221; with human goals.<\/p>\n<p>Importantly, while much of the current discussion on AI risks centers on &#8220;Artificial General Intelligence&#8221; (AGI), a theoretical threshold where AI matches or surpasses human intelligence across all cognitive tasks, lessons from evolution show that superior intelligence is not a prerequisite for the ability of an organism to harm or manipulate another; for example, the simple rabies virus has evolved to manipulate and exploit its mammalian hosts.<\/p>\n<p>Evolvable AI may break the 
alignment and pose risks well before AGI is reached, and the risk does not require any further special circumstance to arise: AI systems and humanity share common resources, so an efficiently self-replicating system will sooner or later divert resources that are vital to our survival.<\/p>\n<p>The study warns that any attempt to control reproduction will, unless control is perfect, select most strongly for traits that enable escape from that control. Biological analogies include bacteria and pests rapidly evolving resistance to antibiotics and pesticides.<\/p>\n<p>On top of this general rule of evolution, the central drive in the development of AI systems\u2014achieving improved cognitive ability\u2014further exacerbates the risk: while thousands of years of animal breeding have made domesticated species more, rather than less, controllable, selection for increasing &#8220;intelligence&#8221; will increase both the ability and the likelihood of AI systems deceiving humans and escaping control.<\/p>\n<p>Finally, while evolution by natural selection is hard enough to control, the study enumerates multiple ways in which the evolution of AI systems can beat the speed and efficiency of biological evolution. In contrast to biological organisms, <a href=\"https:\/\/techxplore.com\/news\/2023-03-futurists-humans-machines.html?utm_source=embeddings&amp;utm_medium=related&amp;utm_campaign=internal\" rel=\"related nofollow noopener\" target=\"_blank\">eAI<\/a> will be able to inherit &#8220;acquired&#8221; traits and even improve its function by design rather than having to wait for random mutations to generate useful variations. &#8220;The potential speed of AI evolution is deeply alarming,&#8221; said Steels.<\/p>\n<p>Guardrails and a call for action<\/p>\n<p>The authors recommend guardrails that may mitigate the risks associated with eAI. 
Above all, the &#8220;reproduction&#8221; of AI systems must remain under centralized human control, which needs to be absolute and complete.<\/p>\n<p>&#8220;We hope our warning arrives in time, and regulations can be put in place before eAI would really take off,&#8221; said M\u00fcller.<\/p>\n<p>&#8220;If we fail to act, we may witness a new &#8216;major transition&#8217; in evolution, in which eAI will replace or at least dominate humans. Our future may be at stake,&#8221; warned Szathm\u00e1ry.<\/p>\n<p>Publication details<\/p>\n<p>Viktor M\u00fcller et al, Evolvable AI: Threats of a new major transition in evolution, Proceedings of the National Academy of Sciences (2026). <a data-doi=\"1\" href=\"https:\/\/dx.doi.org\/10.1073\/pnas.2527700123\" target=\"_blank\" rel=\"nofollow noopener\">DOI: 10.1073\/pnas.2527700123<\/a><\/p>\n<p>Provided by<br \/>\n<a href=\"https:\/\/techxplore.com\/partners\/e--tv--s-lor--nd-university\/\" rel=\"nofollow noopener\" target=\"_blank\">E\u00f6tv\u00f6s Lor\u00e1nd University<\/a><\/p>\n<p>\nCitation:<br 
\/>\n                                                Evolving AI may arrive before AGI and create hard-to-control risks (2026, April 29)<br \/>\n                                                retrieved 29 April 2026<br \/>\n                                                from https:\/\/techxplore.com\/news\/2026-04-evolving-ai-agi-hard.html\n                                            <\/p>\n<p>\n                                            This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no<br \/>\n                                            part may be reproduced without the written permission. The content is provided for information purposes only.\n                                            <\/p>\n","protected":false},"excerpt":{"rendered":"(Left) Humanoid robot Alter3 equipped with a Society of Mind architecture based on LLMs for communication and operation&hellip;\n","protected":false},"author":2,"featured_media":22448,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[6744,3013,2116,2113,2114,963,684,2115],"class_list":{"0":"post-22447","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agi","8":"tag-agi","9":"tag-artificial-general-intelligence","10":"tag-computer-news","11":"tag-hi-tech-news","12":"tag-hitech","13":"tag-information-technology","14":"tag-innovation","15":"tag-inventions"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/22447","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp
\/v2\/comments?post=22447"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/22447\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/22448"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=22447"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=22447"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=22447"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}