
Image-based Deep Reinforcement Meta-Learning for Autonomous Lunar Landing

Andrea D’Ambrosio; Fabio Curti
2021

Abstract

Future exploration and human missions to large planetary bodies (e.g., the Moon and Mars) will require advanced guidance, navigation, and control algorithms for the powered-descent phase that are capable of unprecedented levels of autonomy. The advent of machine learning, and specifically reinforcement learning, has enabled new possibilities for closed-loop autonomous guidance and navigation. In this paper, image-based reinforcement meta-learning is applied to solve the lunar pinpoint powered-descent and landing task with uncertain dynamic parameters and actuator failure. The agent, a deep neural network, takes real-time images and ranging observations acquired during the descent and maps them directly to thrust commands (i.e., a sensor-to-action policy). Training and validation of the algorithm, together with Monte Carlo simulations, show that the resulting closed-loop guidance policy achieves errors on the order of meters across different scenarios, even when the environment is only partially observed and the state of the spacecraft is not fully known.
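The abstract describes a sensor-to-action policy: a deep network that maps descent imagery and ranging measurements directly to thrust commands. The sketch below shows one way such a policy could be structured in PyTorch. It is a minimal illustration, not the architecture reported in the paper: the layer sizes, the GRU recurrent core, and every name (SensorToActionPolicy, ranging_dim, etc.) are assumptions made here for clarity.

```python
import torch
import torch.nn as nn


class SensorToActionPolicy(nn.Module):
    """Maps a descent camera frame plus ranging measurements to a thrust command.

    Illustrative sketch only: layer sizes, the GRU core, and input dimensions
    are assumptions, not the architecture used in the paper.
    """

    def __init__(self, img_channels=1, ranging_dim=4, hidden_dim=128, thrust_dim=3):
        super().__init__()
        # Convolutional encoder for the raw descent image
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # LazyLinear infers its input size on the first forward pass,
        # so the sketch works for any image resolution
        self.img_fc = nn.LazyLinear(hidden_dim)
        # Recurrent core: the hidden state carries information across time steps,
        # which is how a meta-learned policy can adapt within one descent to
        # uncertain dynamics or a degraded actuator
        self.rnn = nn.GRUCell(hidden_dim + ranging_dim, hidden_dim)
        # Thrust head squashed to [-1, 1]; scale by the maximum thrust outside
        self.head = nn.Sequential(nn.Linear(hidden_dim, thrust_dim), nn.Tanh())

    def forward(self, image, ranging, hidden):
        # image: (B, C, H, W), ranging: (B, ranging_dim), hidden: (B, hidden_dim)
        feat = torch.relu(self.img_fc(self.encoder(image)))
        hidden = self.rnn(torch.cat([feat, ranging], dim=-1), hidden)
        return self.head(hidden), hidden


# Minimal usage example with placeholder inputs
policy = SensorToActionPolicy()
img = torch.zeros(1, 1, 64, 64)   # single grayscale 64x64 frame
rng = torch.zeros(1, 4)           # e.g., altimeter / ranging observations
h = torch.zeros(1, 128)           # recurrent state, reset at episode start
thrust, h = policy(img, rng, h)   # thrust: (1, 3), values in [-1, 1]
```

In a meta-learning setting, the recurrent hidden state is what allows a single trained policy to adapt on the fly to parameter uncertainty or an actuator failure, rather than requiring retraining for each scenario.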
2021
aerospace engineering; space systems; GNC; optical navigation; reinforcement meta-learning; autonomous landing
01 Journal publication::01a Journal article
Image-based Deep Reinforcement Meta-Learning for Autonomous Lunar Landing / Scorsoglio, Andrea; D'Ambrosio, Andrea; Ghilardi, Luca; Gaudet, Brian; Curti, Fabio; Furfaro, Roberto. - In: JOURNAL OF SPACECRAFT AND ROCKETS. - ISSN 0022-4650. - (2021), pp. 1-13. [10.2514/1.A35072]
Files attached to this item
File  Size  Format
Scorsoglio_Image-Based_2021.pdf

Access restricted (archive managers only)

Note: https://arc.aiaa.org/doi/10.2514/1.A35072
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 2.02 MB
Format: Adobe PDF
Scorsoglio_postprint_Image-Based_2021.pdf.pdf

Open access

Note: https://arc.aiaa.org/journal/jsr
Type: Post-print (version after peer review, accepted for publication)
License: Creative Commons
Size: 4.09 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1576277
Citations
  • PMC: ND
  • Scopus: 27
  • Web of Science (ISI): 23