An AI-assisted Vision-based Navigation Approach For Pinpoint Lunar Landing: Development, Validation and Testing / Andolfo, Simone; El Awag, Mohamed; Buonomo, Fabio Valerio; Gargiulo, Anna Maria; Cavalieri, Ludovica; Genova, Antonio. - (2025). (2025 AAS/AIAA Astrodynamics Specialist Conference, Boston (MA), USA).
An AI-assisted Vision-based Navigation Approach For Pinpoint Lunar Landing: Development, Validation and Testing
Simone Andolfo; Mohamed El Awag; Fabio Valerio Buonomo; Anna Maria Gargiulo; Ludovica Cavalieri; Antonio Genova
2025
Abstract
Precisely delivering robotic assets to strategic and scientifically compelling sites is a key capability for upcoming lunar and planetary exploration missions. Building upon the increased maturity of deep-learning models, hybrid approaches are emerging that infuse artificial intelligence models into traditional estimation frameworks, resulting in navigation schemes robust to the illumination and environmental conditions encountered by the landing platform during the descent trajectory. In this work, a tightly-coupled visual-inertial navigation system is presented, which fuses inertial data with star tracker-based attitude estimates and visual measurements derived from both image-to-catalog crater matches and multi-frame feature tracking. Deep neural networks are integrated into the visual front-end to process the onboard imagery, performing crater detection and feature tracking to produce the corresponding image-based observations. Open-loop numerical simulations are carried out by running the navigation pipeline on a commercial off-the-shelf computing platform, providing insights into the estimation accuracy, resource consumption, and portability of the AI-assisted navigation framework in challenging mission scenarios.


