Google’s Gemini AI Now Powers Robots Without Internet Access
(www.europesays.com/us/13021/, June 25, 2025)

New Delhi: In a major leap for edge robotics, Google DeepMind has introduced Gemini Robotics On-Device, a new AI model that lets robots operate without an internet connection. The development brings greater autonomy, speed, and data privacy to real-world robotics, especially in locations where connectivity is limited or restricted.

Carolina Parada, head of robotics at Google DeepMind, described the release as a practical shift toward making robots more independent. “It’s small and efficient enough to run directly on a robot,” she told The Verge. “I would think about it as a starter model or as a model for applications that just have poor connectivity.”

Despite being a more compact version of its cloud-based predecessor, the on-device variant is surprisingly robust. “We’re actually quite surprised at how strong this on-device model is,” Parada added, pointing to its effectiveness even with minimal training.

The model can perform tasks almost immediately after deployment and needs only 50 to 100 demonstrations to learn new ones. Initially developed on Google’s ALOHA robot, it has since been adapted to other robotic systems, including Apptronik’s Apollo humanoid and the dual-armed Franka FR3.

Tasks such as folding laundry or unzipping bags can now be executed entirely on-device, with no latency from cloud round trips. This sets it apart from other advanced systems, such as Tesla’s Optimus, which still rely on cloud connectivity for processing.

Local processing is a highlight for sectors that prioritize data security, such as healthcare and sensitive industrial settings. “When we play with the robots, we see that they’re surprisingly capable of understanding a new situation,” Parada noted, emphasizing the model’s flexibility and adaptability.

Google acknowledges some trade-offs, however. Unlike the cloud-based Gemini Robotics suite, the on-device model lacks built-in semantic safety tools. Developers are encouraged to implement safety mechanisms themselves, using APIs such as Gemini Live and integrating with low-level robotic safety systems. “With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,” said Parada.

The announcement follows Google’s recent launch of the AI Edge Gallery, an Android app that lets users run generative AI models offline using the compact Gemma 3 1B model. Like Gemini Robotics On-Device, the app focuses on privacy-first, low-latency experiences built on frameworks such as TensorFlow Lite and open-source models from Hugging Face.

Together, these launches signal Google’s broader move to decentralize AI, bringing high-performance intelligence directly to user devices, whether phones or robots.