<h1>Boston Dynamics and Google DeepMind Unveil a Smarter Spot</h1>

<p>The amazing and frustrating thing about robots is that they can do almost anything you want them to do, as long as you know how to ask properly. In the not-so-distant past, asking properly meant writing code, and while we've thankfully moved beyond that brittle constraint, there's still an irritatingly inverse correlation between ease of use and complexity of task.</p>

<p>AI has promised to change that. The idea is that when AI is embodied within robots, giving AI software a physical presence in the world, those robots will be imbued with reasoning and understanding. This is cutting-edge stuff, though, and while we've seen plenty of examples of embodied AI in a research context, finding applications where reasoning robots can provide reliable commercial value has not been easy. <a href="https://bostondynamics.com/" target="_blank" rel="nofollow noopener">Boston Dynamics</a> is one of the few companies to commercially deploy legged robots at any appreciable scale; there are now several thousand hard at work. Today the company is <a href="https://bostondynamics.com/blog/tools-for-your-to-do-list-with-spot-and-gemini-robotics/" target="_blank" rel="nofollow noopener">announcing</a> that its quadruped robot Spot is now equipped with <a href="https://deepmind.google/blog/gemini-robotics-er-1-6/" target="_blank" rel="nofollow noopener">Google DeepMind's Gemini Robotics-ER 1.6</a>, a <a href="https://spectrum.ieee.org/gemini-robotics" target="_blank" rel="nofollow noopener">high-level embodied reasoning model</a> that brings usability and intelligence to complex tasks.</p>

<p>[Embedded YouTube video]</p>

<p>Although this video shows Spot in a home context, the focus of this partnership is on one of the very few applications where legged robots have proven themselves to be commercially viable: inspection. That is, wandering around industrial facilities, checking to make sure that nothing is imminently exploding. With the new AI onboard, Spot can now autonomously look for dangerous debris or spills, read complex gauges and sight glasses, and call on tools like vision-language-action models when it needs help understanding the environment around it.</p>

<p>"Advances like Gemini Robotics-ER 1.6 mark an important step toward robots that can better understand and operate in the physical world," <a href="https://www.linkedin.com/in/marco-da-silva-447b72/" target="_blank" rel="nofollow noopener">Marco da Silva</a>, vice president and general manager of Spot at Boston Dynamics, says <a href="https://bostondynamics.com/blog/aivi-learning-now-powered-google-gemini-robotics/" target="_blank" rel="nofollow noopener">in a press release</a>. "Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously."</p>

<h2>Understanding Robot Understanding</h2>

<p>The words "reasoning" and "understanding" are increasingly applied to AI and robotics, but as <a href="https://spectrum.ieee.org/humanoid-robots-gill-pratt-darpa" rel="nofollow noopener">Toyota Research Institute's Gill Pratt recently pointed out</a>, what those words actually mean for robots in practice isn't always clear. "The benchmark we measure ourselves against when it comes to understanding is that the system should answer the way a human would," <a href="https://www.linkedin.com/in/carolinaparada/" target="_blank" rel="nofollow noopener">Carolina Parada</a>, head of robotics at Google DeepMind, explained in an interview. For robots to perform tasks reliably and safely, this connection between how robots understand the world and how humans do is critical. Otherwise, there may be a disconnect between the instructions a human gives a robot and how the robot decides to carry them out.</p>

<p>Boston Dynamics' video above is a potentially messy example of this. One of the instructions to Spot was to "recycle any cans in the living room." It has no problem completing the task, as the video shows, but in doing so, it grips the can sideways, which is not going to end well for cans that have leftover liquid in them. We humans would avoid this because we can draw on a lifetime of experience of how cans should be held, but robots don't (yet) have that kind of world knowledge.</p>

<p>Parada says that Gemini Robotics-ER 1.6 approaches situations like this from a safety perspective. "If you ask the robot to bring you a cup of water, it will reason not to place it on the edge of a table where it could fall. We track this using our <a href="https://asimov-benchmark.github.io/v1/" target="_blank" rel="nofollow noopener">ASIMOV benchmark</a>, which includes a whole lot of natural language examples of things the robot should not do." The current version of Spot doesn't use these semantic safety models for manipulation, but the plan is for future versions to reason about holding objects in ways that are safe.</p>

<p>[Embedded YouTube video]</p>

<p>There does still seem to be a disconnect between Gemini Robotics-ER 1.6 as a high-level reasoning model for a robot and the robot itself as an interface with the physical world. One of the new features of 1.6 is success detection, which combines multiple camera angles to tell more reliably when Spot has successfully grasped an object. That's great if you're relying entirely on vision for your object interaction, but robots have all kinds of other well-established ways to detect a successful grasp, including touch sensors and force sensors, that 1.6 does not use. The reason speaks to a fundamental problem that the robotics field is still trying to figure out: how to train models when you need physical data.</p>

<p>"At the moment, these models are strictly vision only," Parada explains. "There is lots of [visual] information on the web about how to pick up a pen. If we had enough data with touch information, we could easily learn it, but there is not a lot of data with touch sensing on the internet." Customers who use these new inspection capabilities with Spot will be required to share their data with Boston Dynamics, which is where some of that data will come from.</p>

<h2>Real-World Robots That Are Useful</h2>

<p>The fact that Boston Dynamics has customers makes it something of an anomaly when it comes to legged robots that rely on AI in commercial deployments. And those customers will have to be able to trust the robot, <a href="https://spectrum.ieee.org/ai-hallucination" rel="nofollow noopener">always a problem when AI is involved</a>. "We take this very seriously," da Silva said in an interview. "We roll out new DeepMind capabilities through beta programs to a smaller set of customers to understand what to anticipate, and we only actively advertise features we are confident will work." There's a threshold of usefulness that robots like Spot need to reach, and fortunately, the real world doesn't demand perfection. "Most critical infrastructure in a facility will be instrumented to tell you whether something is wrong," da Silva says. "But there is a lot of stuff that is not instrumented that can still cause a problem if you aren't paying attention to it. We've found that somewhere north of 80 percent is the threshold where it's not annoying. Below that, basically the robot is crying wolf, and the operators will start ignoring it."</p>

<p>Both da Silva and Parada agree that there's still plenty of room for improvement in robotic inspection. As Parada points out, Spot's rarefied status as a scalable commercial platform provides a valuable opportunity to learn how models like Gemini Robotics-ER 1.6 can be most useful, and then apply that knowledge to other embodied AI platforms, including <a href="https://spectrum.ieee.org/boston-dynamics-atlas-scott-kuindersma" rel="nofollow noopener">Boston Dynamics' Atlas</a>. Does that mean Atlas is going to be the next industrial inspection robot? Probably not. But if this real-world experience can get us closer to safe and reliable robots that can pick up laundry, take a dog for a walk, and clear away soda cans without making a mess, that's something we can all get excited about.</p>