
Learning transferable policies for autonomous planetary landing via deep reinforcement learning / Ciabatti, G.; Daftry, S.; Capobianco, R. - (2021). (Paper presented at the Accelerating Space Commerce, Exploration, and New Discovery conference, ASCEND 2021, held in Las Vegas, Nevada, USA) [10.2514/6.2021-4006].

Learning transferable policies for autonomous planetary landing via deep reinforcement learning

Ciabatti G. (first author); Capobianco R. (last author)
2021

Abstract

The aim of this work is to develop an application for autonomous landing that exploits Deep Reinforcement Learning and Transfer Learning to tackle planetary landing in unknown or barely-known extra-terrestrial environments, by learning well-performing policies that transfer from the training environment to new environments without losing optimality. To this end, we build a realistic physics simulator with the Bullet/PyBullet library, composed of a lander, defined through the standard ROS/URDF framework, and realistic 3D terrain models, for which we adapt official NASA 3D meshes reconstructed from data retrieved during missions. Where such models are not available, we reconstruct the terrain from mission imagery, generally SAR imagery. In this setup, we train a Deep Reinforcement Learning model, using both DDPG and SAC and comparing the outcomes, to autonomously land in the lunar environment. Moreover, we perform transfer learning on Mars and Titan environments. While still preliminary, our results show that DDPG and SAC can learn good landing policies that transfer to other environments. SAC can also learn good policies in the presence of atmospheric disturbances, e.g. wind gusts.
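As a rough illustration of the training setup the abstract describes, and not the authors' actual code, the sketch below shows a minimal gym-style landing environment with the `reset`/`step` interface an off-policy agent such as DDPG or SAC would interact with. The physics is a trivial 1-D point-mass stand-in for the paper's Bullet/PyBullet simulator; all names, constants, and the reward shaping are hypothetical.

```python
# Hypothetical 1-D vertical-landing environment: a stand-in for the
# PyBullet simulator described in the abstract. State = (altitude, velocity).

GRAVITY = -1.62      # lunar surface gravity, m/s^2
MAX_THRUST = 3.0     # maximum upward acceleration from the engine, m/s^2
DT = 0.1             # integration time step, s

class SimpleLanderEnv:
    """Gym-style interface: reset() -> state, step(action) -> (state, reward, done)."""

    def __init__(self, start_altitude=100.0):
        self.start_altitude = start_altitude
        self.reset()

    def reset(self):
        self.altitude = self.start_altitude
        self.velocity = 0.0
        self.done = False
        return (self.altitude, self.velocity)

    def step(self, action):
        """action in [0, 1] scales upward thrust; Euler-integrates one step."""
        thrust = max(0.0, min(1.0, action)) * MAX_THRUST
        self.velocity += (GRAVITY + thrust) * DT
        self.altitude += self.velocity * DT
        reward = -0.01 * thrust * DT  # small fuel penalty per step
        if self.altitude <= 0.0:
            self.done = True
            # soft touchdown (slow descent) rewarded, hard impact penalized
            reward += 10.0 if abs(self.velocity) < 2.0 else -10.0
            self.altitude = 0.0
        return (self.altitude, self.velocity), reward, self.done
```

In the paper's actual setup, the continuous-control agent (DDPG or SAC) would observe the simulator state and output thrust commands through exactly this kind of loop, with PyBullet supplying the dynamics and the URDF model supplying the lander geometry.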
2021
Accelerating Space Commerce, Exploration, and New Discovery conference, ASCEND 2021
reinforcement learning; autonomous landing; artificial intelligence
04 Publication in conference proceedings::04b Conference paper published in a volume
Files attached to this product
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1604061
Warning! The displayed data have not been validated by the university.

Citations
  • Scopus: 2