{"id":24683,"date":"2025-06-29T14:21:09","date_gmt":"2025-06-29T14:21:09","guid":{"rendered":"https:\/\/www.europesays.com\/us\/24683\/"},"modified":"2025-06-29T14:21:09","modified_gmt":"2025-06-29T14:21:09","slug":"do-ai-systems-need-bodies-for-true-intelligence","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/24683\/","title":{"rendered":"Do AI systems need bodies for true intelligence"},"content":{"rendered":"<p>The first robot I remember is Rosie from The Jetsons, soon followed by the urbane C-3PO and his faithful sidekick R2-D2 in The Empire Strikes Back. But my first disembodied AI was Joshua, the computer in WarGames who tried to start a nuclear war \u2013 until it learned about mutually assured destruction and chose to play chess instead.<\/p>\n<p>Seeing this at age seven changed me. Could a machine understand ethics? Emotion? Humanity? Did artificial intelligence need a body? These fascinations deepened as on-screen non-human intelligences grew more complex, with characters like the android Bishop in Aliens, Data in Star Trek: TNG, and more recently Samantha in Her and Ava in Ex Machina.<\/p>\n<p>But these aren\u2019t just speculative questions anymore. Roboticists today are wrestling with the question of whether artificial intelligence needs a body. And if so, what kind?<\/p>\n<p>And then there\u2019s the \u201chow\u201d of it all: if embodied intelligence is the way forward to true artificial general intelligence (AGI), could soft robots be the key to that next step?<\/p>\n<p>The limits of disembodied AI<\/p>\n<p>Recent papers are beginning to show the cracks in today\u2019s most advanced (and notably disembodied) AI systems. 
A <a class=\"Link\" href=\"https:\/\/machinelearning.apple.com\/research\/illusion-of-thinking?utm_source=chatgpt.com\" target=\"_blank\" data-cms-ai=\"0\" rel=\"nofollow noopener\">new study from Apple<\/a> examined so-called \u201cLarge Reasoning Models\u201d (LRMs) \u2013 language models that generate reasoning steps before answering. These systems, the paper notes, perform better than standard LLMs on many tasks, but fall apart when problems get too complex. Strikingly, they don\u2019t just plateau \u2013 they collapse, even when given more than enough computing power.<\/p>\n<p>Worse, they fail to reason consistently or algorithmically. Their \u201creasoning traces\u201d \u2013 how they work through problems \u2013 lack internal logic. And the more complex the challenge, the less effort the models seem to expend. These systems, the authors conclude, don\u2019t really \u201cthink\u201d the way humans do.<\/p>\n<p>\u201cWhat we are building now are things that take in words and predict the next most likely word &#8230; That\u2019s very different from what you and I do,\u201d Nick Frosst, a former Google researcher and co-founder of Cohere, told <a class=\"Link\" href=\"https:\/\/www.nytimes.com\/2025\/05\/16\/technology\/what-is-agi.html\" target=\"_blank\" data-cms-ai=\"0\" rel=\"nofollow noopener\">The New York Times<\/a>.<\/p>\n<p>Cognition is more than just computation<\/p>\n<p>How did we get here? For much of the 20th century, artificial intelligence followed a model called GOFAI \u2013 \u201cGood Old-Fashioned Artificial Intelligence\u201d \u2013 which treated cognition as symbolic logic. Early AI researchers believed intelligence could be built by processing symbols, much like a computer executes code. Abstract, symbol-based thinking certainly doesn\u2019t need a body to advance.<\/p>\n<p>This idea began to fray when early robot AI failed to handle messy, real-world conditions. 
Researchers in psychology, neuroscience, and philosophy began to ask a different question, informed by studies of animal and plant intelligence. These organisms all adapt, learn, and respond to complex environmental conditions, and they do so through physical interaction, not symbolic reasoning.<\/p>\n<p>In humans, the enteric nervous system that governs our gut is often called the \u201csecond brain\u201d because it uses the same types of cells and chemicals as the brain to help us digest. These are, incidentally, the same components an octopus tentacle uses to sense and react locally, within the limb.<\/p>\n<p>This all raises a question: what if the foundation of adaptable intelligence is that it\u2019s distributed throughout an organism and doesn\u2019t live only in the brain, disconnected from the physical world?<\/p>\n<p>This is the central idea of embodied cognition. Acting, sensing, and thinking are not separate \u2013 they are one process. As Rolf Pfeifer, the Director of the University of Zurich\u2019s Artificial Intelligence Laboratory, <a class=\"Link\" href=\"https:\/\/www.embopress.org\/doi\/full\/10.1038\/embor.2012.170\" target=\"_blank\" data-cms-ai=\"0\" rel=\"nofollow noopener\">told EMBO Reports<\/a>: \u201cBrains have always developed in the context of a body that interacts with the world to survive. There is no algorithmic ether in which brains arise.\u201d<\/p>\n<p>Embodied intelligence: A different kind of thinking<\/p>\n<p>So we may need smarter bodies to go along with the AI, and Cecilia Laschi, a pioneer in <a class=\"Link\" href=\"https:\/\/newatlas.com\/tag\/soft-robotics\/\" data-cms-ai=\"0\" rel=\"nofollow noopener\" target=\"_blank\">soft robotics<\/a>, thinks smarter equals softer. 
After years working with rigid humanoid robots in Japan, she shifted her research to soft-bodied machines, inspired by the octopus \u2013 an animal that has no skeleton and whose limbs think for themselves.<\/p>\n<p>\u201cIf you have a human robot walking you control all different movements,\u201d she says in an interview with New Atlas. \u201cIf there is something different on the terrain, you have to reprogram a little bit.\u201d<\/p>\n<p>But animals don\u2019t need to re-think and plan out their walking motions. \u201cOur knee is compliant,\u201d she explains. \u201cWe compensate for uneven ground mechanically, without using the brain.\u201d This is embodied intelligence \u2013 the idea that some elements of cognition can be outsourced to the body.<\/p>\n<p>Embodied intelligence has clear advantages from an engineering perspective: offloading perception, control, and decision-making to a robot\u2019s physical structure means reduced computational demands in the main robot brain, leading to machines that can function more effectively in unpredictable environments.<\/p>\n<p>In a May <a class=\"Link\" href=\"https:\/\/www.science.org\/doi\/10.1126\/scirobotics.adx2731\" target=\"_blank\" data-cms-ai=\"0\" rel=\"nofollow noopener\">special issue of Science Robotics<\/a>, Laschi defines it like this: \u201cMotor control is not entirely managed by the computing system \u2026 motor behavior is partially shaped mechanically by external forces acting on the body.\u201d Behavior is shaped by environment, and intelligence is learned through experience, not pre-programmed into software.<\/p>\n<p>Seen this way, intelligence isn\u2019t just about faster chips or bigger models \u2013 it\u2019s about interaction. Key to advancing this intelligence is the field of soft robotics, which uses materials like silicone or special fabrics to enable more flexible robot bodies. These bodies are adaptive, fluid, and capable of real-time learning. 
A soft robotic arm, like an octopus tentacle, can grasp, explore, and respond without needing to calculate every move.<\/p>\n<p>Flesh and feedback: How to make materials think for themselves<\/p>\n<p>To make soft robotics work as well as a tentacle, though, roboticists have to move away from programming for every possibility and instead design new ways for machines to sense and react. To build machines with lifelike autonomy, researchers are turning to a new concept: autonomous physical intelligence (API).<\/p>\n<p><a class=\"Link\" href=\"https:\/\/samueli.ucla.edu\/people\/ximin-he\/\" target=\"_blank\" data-cms-ai=\"0\" rel=\"nofollow noopener\">Ximin He<\/a>, Associate Professor of Materials Science and Engineering at UCLA, has pioneered work in this space by designing soft materials \u2013 like responsive gels and polymers \u2013 that don\u2019t just react to stimuli but also regulate their own movement using built-in feedback.<\/p>\n<p>\u201cWe\u2019ve been working on creating more decision-making ability at the material level,\u201d He tells New Atlas in an interview. \u201cMaterials that change shape in response to a stimulus can also \u2018decide\u2019 how to modulate that stimulus based on how they deform \u2013 correcting or adjusting their next motion.\u201d<\/p>\n<p>In 2018, He\u2019s lab demonstrated this with a gel that could self-regulate its movement. Since then, they\u2019ve shown that the same principle applies to a range of soft materials, including liquid crystal elastomers that work efficiently in air.<\/p>\n<p>The key to API is nonlinear time-lag feedback. In traditional robots, a control system analyzes sensory data and tells the machine what to do. He\u2019s approach embeds this logic directly in the materials themselves.<\/p>\n<p>\u201cIn robotics, you need sensing and actuation \u2013 but also decision-making between them,\u201d He explains. 
\u201cThat\u2019s what we\u2019re embedding physically, using feedback loops.\u201d<\/p>\n<p>He compares this to biological systems. Negative feedback, like our glucose regulation or a thermostat, works to correct overshoots. Positive feedback amplifies change. Nonlinear feedback combines both, allowing for controlled, rhythmic behaviors \u2013 like a pendulum or walking gait.<\/p>\n<p>\u201cA lot of natural motion \u2013 like walking or swimming \u2013 relies on periodic, steady movement,\u201d He says. \u201cWith nonlinear, time-lagged feedback, we can design soft robots to move forward, backward, forward again \u2013 without needing external control at each step.\u201d<\/p>\n<p>This represents a major step forward from soft robots that rely on external stimuli to function, as He and her colleagues shared in <a class=\"Link\" href=\"https:\/\/www.science.org\/doi\/10.1126\/scirobotics.ads1292\" target=\"_blank\" data-cms-ai=\"0\" rel=\"nofollow noopener\">a recent review paper<\/a>. By integrating sensing, control, and actuation into the material itself, researchers like He are paving the way toward machines that don\u2019t just react, but decide, adapt, and act on their own.<\/p>\n<p>The future is soft (and smart)<\/p>\n<p>Soft robotics is a nascent field, but it holds profound promise. Laschi points to surgical tools like endoscopes that could examine and react to sensitive human tissue at the same time, or rehab devices that could flex or adapt to the patient\u2019s needs, as early and obvious uses.<\/p>\n<p>So, to move from AI to AGI, machines may need bodies \u2013 specifically, soft and adaptable ones. Most life on Earth, including humans, learns by moving, touching, failing, and adjusting. We know how to deal with an unpredictable, chaotic world \u2013 something today\u2019s AIs still struggle with. 
We know what an apple is not because we read a definition, but because we\u2019ve held one, tasted one, dropped it, bruised it, cut it, squeezed it, watched it rot.<\/p>\n<p>That kind of knowledge \u2013 tacit, sensory, contextual \u2013 is hard to teach a model that\u2019s only ever seen text or pixels. Direct connection to, and feedback with, the real world gets around the <a class=\"Link\" href=\"https:\/\/newatlas.com\/ai-humanoids\/ai-rl-human-thinking\/\" target=\"_blank\" data-cms-ai=\"0\" rel=\"nofollow noopener\">limitations of language that LLMs currently deal with<\/a>, and offers the potential for AI to build a different understanding of the world: a conception of the world from its own point of view, not a human one, but something different. If a soft robot were provided with different kinds of sensory inputs (think infrared vision, low-frequency hearing, or the ability to <a class=\"Link\" href=\"https:\/\/www.myovariancancerteam.com\/resources\/does-ovarian-cancer-have-a-smell\" target=\"_blank\" data-cms-ai=\"0\" rel=\"nofollow noopener\">smell cancer<\/a> or other diseases), it could even develop an alternative (and possibly super-useful) understanding of life on Earth.<\/p>\n<p>\u201cIf you want to develop something like human intelligence in a machine, the machine has to be able to acquire its own experiences,\u201d Giulio Sandini, a Professor of Bioengineering at the University of Genoa, Italy, told EMBO Reports. You must let it learn from experience, as children do. 
And that likely requires a body.<\/p>\n","protected":false},"excerpt":{"rendered":"The first robot I remember is Rosie from The Jetsons, soon followed by the urbane C-3PO and his&hellip;\n","protected":false},"author":3,"featured_media":24684,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[22151,691,22153,20446,22152,22146,738,22145,2454,22143,22147,8523,22150,22148,22144,22149,158,67,132,68],"class_list":{"0":"post-24683","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-adaptive-intelligence","9":"tag-ai","10":"tag-ai-and-robotics","11":"tag-ai-ethics","12":"tag-ai-evolution","13":"tag-ai-systems","14":"tag-artificial-intelligence","15":"tag-autonomous-physical-intelligence","16":"tag-cognition","17":"tag-embodied-cognition","18":"tag-general-intelligence","19":"tag-machine-learning","20":"tag-nonlinear-feedback","21":"tag-roboticists","22":"tag-soft-robotics","23":"tag-soft-robots","24":"tag-technology","25":"tag-united-states","26":"tag-unitedstates","27":"tag-us"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@us\/114767100906516702","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/24683","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=24683"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/24683\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/24684"}],"wp:attach
ment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=24683"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=24683"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=24683"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}