
Reinforcement learning for robust trajectory design of interplanetary missions / Zavoli, Alessandro; Federici, Lorenzo. - In: JOURNAL OF GUIDANCE CONTROL AND DYNAMICS. - ISSN 0731-5090. - 44:8(2021), pp. 1440-1453. [10.2514/1.G005794]

Reinforcement learning for robust trajectory design of interplanetary missions

Zavoli, Alessandro; Federici, Lorenzo
2021

Abstract

This paper investigates the use of reinforcement learning for the robust design of low-thrust interplanetary trajectories in the presence of severe uncertainties and disturbances, alternately modeled as Gaussian additive process noise, observation noise, and random errors in the actuation of the thrust control, including the occurrence of missed thrust events. The stochastic optimal control problem is recast as a time-discrete Markov decision process to comply with the standard formulation of reinforcement learning. An open-source implementation of the state-of-the-art Proximal Policy Optimization algorithm is adopted to train a deep neural network that maps the spacecraft's (observed) states to the optimal control policy. The resulting guidance and control network provides both a robust nominal trajectory and the associated closed-loop guidance law. Numerical results are presented for a typical Earth–Mars mission. To validate the proposed approach, the solution found in a (deterministic) unperturbed scenario is first compared with the optimal one provided by an indirect technique. The robustness and optimality of the obtained closed-loop guidance laws are then assessed by means of Monte Carlo campaigns performed in the considered uncertain scenarios.
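As a rough illustration of the workflow the abstract describes (time-discrete MDP, noisy observations, random actuation errors and missed thrust events, PPO-trained network policy), the following minimal Python sketch sets up a toy low-thrust transfer environment and trains it with an open-source PPO implementation. Everything here is an assumption for illustration only: the environment class, dynamics, canonical units, noise levels, reward weights, and the choice of stable-baselines3 are not the authors' actual implementation.

import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class LowThrustTransferEnv(gym.Env):
    """Toy time-discrete MDP for a low-thrust transfer in canonical units.

    State: heliocentric position (3), velocity (3), mass (1).
    Action: normalized thrust vector in [-1, 1]^3.
    All dynamics, noise levels, and reward weights are illustrative.
    """

    def __init__(self, n_segments=40, dt=0.1, t_max=0.05, c_ex=0.5,
                 sigma_obs=1e-3, sigma_ctrl=0.05, p_mte=0.02):
        super().__init__()
        self.n_segments = n_segments   # number of MDP time steps
        self.dt = dt                   # segment duration (canonical time units)
        self.t_max = t_max             # maximum thrust acceleration (assumed)
        self.c_ex = c_ex               # exhaust velocity (assumed)
        self.sigma_obs = sigma_obs     # observation-noise standard deviation
        self.sigma_ctrl = sigma_ctrl   # relative thrust actuation error
        self.p_mte = p_mte             # missed-thrust-event probability per step
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(7,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.k = 0
        # Circular Earth-like departure orbit, unit mass (canonical units).
        self.state = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0])
        return self._observe(), {}

    def step(self, action):
        # Multiplicative Gaussian actuation error, plus a random missed
        # thrust event that zeroes the commanded thrust for this segment.
        err = 1.0 + self.np_random.normal(0.0, self.sigma_ctrl)
        thrust = np.zeros(3) if self.np_random.random() < self.p_mte else err * action
        self.state = self._propagate(self.state, thrust)
        self.k += 1
        done = self.k >= self.n_segments
        # Running reward: penalize propellant use; terminal reward: penalize
        # violation of the rendezvous constraint at arrival.
        reward = -np.linalg.norm(thrust) / self.n_segments
        if done:
            reward -= self._terminal_violation(self.state)
        return self._observe(), float(reward), done, False, {}

    def _observe(self):
        # The agent only sees a noisy version of the true state.
        noise = self.np_random.normal(0.0, self.sigma_obs, size=7)
        return (self.state + noise).astype(np.float32)

    def _propagate(self, state, thrust):
        # One explicit-Euler step of the two-body dynamics plus thrust
        # (a placeholder for a proper fixed-step integrator).
        r, v, m = state[:3], state[3:6], state[6]
        a = -r / np.linalg.norm(r) ** 3 + self.t_max * thrust / max(m, 1e-3)
        m_new = m - self.t_max * np.linalg.norm(thrust) * self.dt / self.c_ex
        return np.concatenate([r + v * self.dt, v + a * self.dt, [m_new]])

    def _terminal_violation(self, state):
        # Placeholder target: a fixed point standing in for Mars' position
        # at arrival (a real model would use an ephemeris).
        r_mars = np.array([1.524, 0.0, 0.0])
        return 10.0 * np.linalg.norm(state[:3] - r_mars)


env = LowThrustTransferEnv()
model = PPO("MlpPolicy", env, verbose=0)   # MLP policy/value networks
model.learn(total_timesteps=100_000)       # far fewer steps than a real study

Because the network is trained on noisy observations, the learned policy plays the role of the guidance and control network described in the abstract: evaluated without noise it yields a nominal trajectory, and evaluated online it acts as a closed-loop guidance law.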
reinforcement learning; optimization; spaceflight
01 Journal publication::01a Journal article
Files in this item

Zavoli_Preprint_Reinforcement_2021.pdf
Access: open access
Note: https://arc.aiaa.org/doi/10.2514/1.G005794
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: Creative Commons
Size: 2.13 MB
Format: Adobe PDF

Zavoli_Reinforcement_2021.pdf
Access: archive administrators only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 1.27 MB
Format: Adobe PDF (contact the author for access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1560071
Citations
  • PMC: n/a
  • Scopus: 85
  • Web of Science: 52