Businesses may be struggling to find meaningful ways to use artificial intelligence software, but space scientists at least have a few ideas about how to deploy AI models.
Researchers affiliated with Caltech, NASA’s Jet Propulsion Laboratory, UCLA, and engineering consultancy Okean Solutions are busy exploring how AI models can be used to run autonomous operations during space missions.
They call their project AI Space Cortex. It’s an attempt to design an AI-assisted autonomy framework capable of handling the various tasks required to take samples from the surface of alien worlds, from selecting a site to collecting samples and depositing them in scientific instruments.
The project was conducted under NASA’s Robust, Explainable Autonomy for Scientific Icy Moon Operations (REASIMO) effort, with funding from NASA’s Concepts for Ocean Worlds Life Detection Technology (COLDTech) program. The work is intended to support a future mission like Europa Lander, a proposal to look for life on Jupiter’s icy moon Europa.
“Unlike previous approaches that largely rely on deterministic task execution, the AI Space Cortex integrates hierarchical control, active vision-based scene analysis, real-time telemetry assessment, self-diagnosing, and recalibration capabilities,” explains the preprint paper that describes the project.
AI Space Cortex’s designers envision a robotic system that autonomously runs science missions in real time while monitoring its own health. To do so, it relies on computer vision and large language models.
For this project, the model was OpenAI’s GPT-4o, accessed via API rather than running locally. Future implementations, the paper explains, will explore local model deployment. Doing so will require optimizing model size and balancing inference speed against energy consumption, given that planetary landers operate under strict power constraints.
“Using a combination of foundational computer vision models and large language models (LLM), the Cortex can interpret and reason about its surroundings, making informed decisions without direct human intervention,” the paper explains. “In addition to environmental analysis, the AI Space Cortex detects faults in real-time, self-diagnoses the source of anomalies, and executes recovery procedures as necessary.”
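The article doesn’t reproduce the Cortex’s code, but the basic pattern of handing a camera frame and a question to GPT-4o over the API looks roughly like the minimal sketch below, written with the OpenAI Python SDK. The file path and prompt wording are illustrative assumptions, not the project’s actual implementation.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a frame from the lander's camera (path and prompt are illustrative).
with open("sampling_site.jpg", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Assess this candidate sampling site. Is the terrain safe "
                     "to approach with the arm, and does any region look too deep?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```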
Thomas Touma, a robotics research scientist at Caltech, CEO of Stealth Labs, and one of the paper’s dozen authors, described the project in an interview with The Register.
“So imagine you have a system with a robotic arm, and an alien comes by, bends your arm, you crash it into a rock, something happens, right?” he said. “So now you’ve lost this pretty expensive piece of hardware.
“In this case, we were trying to devise a system where it could not only detect faults, but also recalibrate itself in the event of a problem. So it would save you a couple hundred million, maybe a billion dollars, if something goes wrong.”
Vision models, he explained, allow for image segmentation, a technique for identifying and separating depicted objects from background pixels. This lets a model pick out specific points on a robot arm to recalibrate, for example. It may demand more compute than other approaches, but it spares engineers a lot of difficult, low-level work.
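To illustrate the underlying idea (separating arm pixels from background and extracting a reference point), here is a deliberately simplified sketch using OpenCV color thresholding. The project itself uses foundation vision models for segmentation; the marker color, HSV range, and image path here are assumptions for illustration only.

```python
import cv2

# Load a frame from a workspace camera (path is illustrative).
frame = cv2.imread("workspace_frame.png")

# Segment a brightly colored fiducial on the arm from the background.
# The HSV range below is an assumed value for a red marker, not a project constant.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))

# Keep the largest connected blob and take its centroid as the keypoint
# a recalibration routine could track across frames.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print(f"Arm keypoint at pixel ({cx}, {cy})")
```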
These models, said Touma, “are actually saving us a lot of time in many ways.”
AI Space Cortex, Touma explained, is a system concept that was field tested at JPL in the Ocean Worlds Lander Autonomy Testbed (OWLAT), a facility that evaluates autonomous sampling technology.
Designed to run on a system with an 8-core CPU, 8GB of RAM, and a 6GB NVIDIA laptop GPU, AI Space Cortex relied on three-phase wall power of around 400 watts and wasn’t an attempt to address power and thermal issues in space.
The focus, he said, was on functionality and system capabilities. Scaling is a problem for another day, one that may benefit from expected advances in model quantization and inference hardware over time.
No space gremlins
While AI models are known for hallucinating – making errors – Touma said that wasn’t a problem due to system boundaries.
“The system we tested is a robotic arm,” he explained. “So there are a finite amount of things it can do. It can crash out in the space that it has, but it can’t run through a wall. And there are so many low-level hard stops, safety guards inside the system.”
These include hardcoded torque and current limits.
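A guard of that kind is conceptually simple. The hedged sketch below shows one way such a hard stop might sit beneath whatever the AI layer requests; the limit values and function are invented for illustration, not OWLAT’s actual safeguards.

```python
# Assumed limits for illustration only, not values from the testbed.
MAX_TORQUE_NM = 5.0
MAX_CURRENT_A = 2.0

def clamp_command(requested_torque_nm: float, measured_current_a: float) -> float:
    """Clamp a torque command and halt if the motor current limit is exceeded."""
    if measured_current_a > MAX_CURRENT_A:
        raise RuntimeError("Current limit exceeded - halting joint")
    return max(-MAX_TORQUE_NM, min(MAX_TORQUE_NM, requested_torque_nm))
```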
The AI model also explains what it sees, so that mission operators can understand what the camera is looking at. Operators can ask it for an opinion about a scene – whether specific terrain looks safe or deep, for example.
“I actually found through all the experiments that it was quite lucid,” Touma said. “I didn’t see any weird hallucinations. Sometimes it would give you a very wrong opinion, but extremely rarely actually. And sometimes that could also be a result of us feeding it an ambiguous image.”
Touma said he was very impressed by the capabilities of the AI models tested, pointing to the work done on automatic system recalibration. The researchers, he explained, damaged the robot arm so that it no longer operated accurately – it missed objects it had been directed to pick up.
But with the help of the vision models and a Bayesian optimization framework, AI Space Cortex was able to recalibrate the damaged arm back to more than 90 percent accuracy, using only the camera for assessment.
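The article doesn’t detail how that recalibration loop is built, but the general shape of camera-guided Bayesian optimization can be sketched with scikit-optimize. Everything below is an assumption for illustration: the joint-offset parameterization, the search ranges, and the synthetic error function standing in for the camera-only measurement the real system uses.

```python
import numpy as np
from skopt import gp_minimize

# Stand-in for the camera-based error measurement: in the real system this would
# command the arm with candidate joint-offset corrections and measure, via the
# vision model, how far the tip lands from its target. A synthetic "true"
# miscalibration is used here so the sketch runs on its own.
TRUE_OFFSETS = np.array([0.04, -0.02, 0.03])  # assumed, for illustration only

def pixel_error(candidate_offsets):
    return float(np.linalg.norm(np.array(candidate_offsets) - TRUE_OFFSETS))

# Bayesian optimization over small corrections to three joint offsets.
result = gp_minimize(
    func=pixel_error,
    dimensions=[(-0.1, 0.1)] * 3,  # radians of correction per joint (assumed range)
    n_calls=30,
    random_state=0,
)
print("Estimated corrections:", result.x, "residual error:", result.fun)
```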
AI Space Cortex isn’t intended to be part of a planned space mission, but it will inform future Caltech/JPL projects.
“We’re using what we learned to implement it on other projects,” Touma said. “I probably shouldn’t say what they are.” ®