OpenMind’s new BrainPack platform brings several core functions of robot autonomy into one modular unit. Instead of separate systems for sensing, mapping, and control, BrainPack packs advanced mapping, object labeling, privacy-protected vision, remote operation, and self-charging into a single backpack-sized device.

The platform runs on a high-performance Nvidia processor, giving robots the ability to handle perception and decision-making directly on board rather than relying heavily on external computing.

BrainPack is the same system used in OpenMind’s recent demonstrations of self-charging quadrupeds. Now, the company is making it available to developers, research labs, and early adopters internationally. CEO Jan Liphardt said that the goal is to narrow the gap between robotics and intelligence so that robots can not only move, but also interpret and learn from their surroundings.

Blending research reliability with consumer-level usability 

OpenMind’s CTO Boyuan Chen explains that BrainPack is designed to combine the reliability of a research-grade system with the simplicity and usability of a consumer device. The goal is to make autonomous robotics accessible without needing specialized labs or complex setups. Users can see through the robot’s sensors, guide its learning, and trust that the information it collects is accurate and private.

The platform also acts as a central nervous system for robots, compactly combining high-performance computing, sensors, and software into a machine that can learn from its surroundings. With it, a robot doesn’t just move: it observes, records, and makes sense of the world around it, building detailed 3D maps of its environment in real time as it goes.
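OpenMind has not published how BrainPack represents its maps, but real-time mapping of this kind is commonly built on an occupancy grid: the robot traces each range reading through the grid, marking traversed cells free and the endpoint occupied. The sketch below illustrates that general idea with invented names and axis-aligned rays for brevity; it is not OpenMind’s implementation.

```python
# Minimal occupancy-grid sketch: simulated range readings mark cells
# along the ray as free and the hit cell as occupied. Generic
# illustration only, not BrainPack's actual mapping code.

FREE, UNKNOWN, OCCUPIED = 0, -1, 1

def make_grid(width, height):
    """Start with an entirely unexplored grid."""
    return [[UNKNOWN] * width for _ in range(height)]

def integrate_reading(grid, x0, y0, x1, y1):
    """Mark cells between the sensor at (x0, y0) and the hit point
    (x1, y1) as free, and the hit cell as occupied.
    Axis-aligned rays only, to keep the sketch short."""
    if x0 == x1:  # vertical ray
        step = 1 if y1 > y0 else -1
        for y in range(y0, y1, step):
            grid[y][x0] = FREE
        grid[y1][x1] = OCCUPIED
    elif y0 == y1:  # horizontal ray
        step = 1 if x1 > x0 else -1
        for x in range(x0, x1, step):
            grid[y0][x] = FREE
        grid[y1][x1] = OCCUPIED

grid = make_grid(5, 5)
integrate_reading(grid, 0, 2, 4, 2)  # sensor at (0,2) sees a wall at (4,2)
print(grid[2])  # [0, 0, 0, 0, 1]
```

A real system would fuse many noisy readings probabilistically (e.g. log-odds updates) and trace rays at arbitrary angles, but the free/occupied bookkeeping is the same idea.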

The system can recognize and label objects on its own, generating datasets and building a memory of everything it encounters. Privacy is built in: automatic face detection and blurring keeps people visible in the video feed while anonymizing them. Users can also remotely control the robot and stream video securely from any device, while self-docking charging lets the robot operate continuously without interruption.
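The anonymization step described above can be pictured as a two-stage pipeline: a detector proposes a face bounding box, and the box is then blurred in place. The sketch below shows only the second stage, with detection assumed to come from an upstream model; the image layout and box coordinates are hypothetical, and a real system would use a proper blur kernel rather than this flat averaging.

```python
# Sketch of the blur step in privacy-protected vision: replace a
# detected face region with its mean intensity. The bounding box is
# assumed to come from a separate face detector (not shown).

def blur_region(image, x, y, w, h):
    """Anonymize a rectangular region of a grayscale image
    (a list of rows) by flattening it to its average intensity."""
    pixels = [image[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    mean = sum(pixels) // len(pixels)
    for r in range(y, y + h):
        for c in range(x, x + w):
            image[r][c] = mean
    return image

frame = [[10, 200, 100, 10],
         [10, 200, 100, 10],
         [10,  10,  10, 10]]
blur_region(frame, 1, 0, 2, 2)  # hypothetical face box: top-left (1,0), 2x2
print(frame[0])  # [10, 150, 150, 10]
```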

Robots map, patrol, and self-charge 

BrainPack allows robots to gain advanced capabilities through simple plug-and-play integration, making them able to operate in homes, research labs, and public spaces. Early testing by OpenMind has shown that BrainPack-equipped robots can perform self-guided patrols, map multi-room environments, recognize and label objects, and return to charging stations automatically, all without direct supervision. 
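The patrol-and-recharge behavior described above amounts to a simple policy loop: visit waypoints while the battery holds, return to the dock when it runs low. The toy state machine below illustrates that loop; the threshold, dock position, and waypoints are invented for illustration and say nothing about BrainPack's actual control logic.

```python
# Toy decision function for a self-charging patrol robot: battery-first,
# then waypoints, then idle. All values are hypothetical.

DOCK = (0, 0)        # invented dock coordinates
LOW_BATTERY = 20     # invented threshold, percent

def next_action(battery_pct, waypoints_left):
    """Return the next (action, target) pair for the patrol loop."""
    if battery_pct <= LOW_BATTERY:
        return ("return_to_dock", DOCK)
    if waypoints_left:
        return ("patrol", waypoints_left[0])
    return ("idle", None)

print(next_action(80, [(3, 4), (5, 1)]))  # ('patrol', (3, 4))
print(next_action(15, [(3, 4)]))          # ('return_to_dock', (0, 0))
print(next_action(80, []))                # ('idle', None)
```

Checking the battery before anything else is what lets the robot "operate continuously without interruption": recharging always preempts the patrol route.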

The platform demonstrates a practical way to give robots useful autonomy while keeping setup and operation straightforward. Earlier this year, the startup introduced FABRIC, a protocol that lets robots verify identity and share context with each other. The system enables machines to learn from one another in real time, adapt to new environments, and communicate more effectively, making collaboration between humans and robots smoother and more intuitive.
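OpenMind has not published FABRIC's wire format, so the details of how robots "verify identity" are not known. As a generic illustration only, the sketch below shows one standard way two machines sharing a secret can authenticate each other's messages, using Python's stdlib HMAC; the key and message names are invented.

```python
# Generic message-authentication sketch (NOT the FABRIC protocol):
# two robots sharing a secret key can verify that a context update
# really came from a peer holding the same key.
import hmac
import hashlib

SHARED_KEY = b"demo-key"  # hypothetical pre-shared secret

def sign(message: bytes) -> str:
    """Produce an authentication tag for a message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check a tag in constant time to avoid timing leaks."""
    return hmac.compare_digest(sign(message), tag)

msg = b"map_update:room_3"        # invented context message
tag = sign(msg)
print(verify(msg, tag))                   # True
print(verify(b"map_update:room_4", tag))  # False: tampered message rejected
```

A production protocol would more likely use per-robot asymmetric key pairs and certificates rather than one shared secret, but the verify-before-trust flow is the same.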

Furthermore, OpenMind has developed OM1, a hardware-agnostic operating system for humanoid robots. OM1 is built from the ground up as an AI-native platform, capable of running on a wide range of robotic hardware and acting as a universal brain for machines.