Robust interplanetary trajectory design under multiple uncertainties via meta-reinforcement learning / Federici, Lorenzo; Zavoli, Alessandro. - In: ACTA ASTRONAUTICA. - ISSN 0094-5765. - 214:(2024), pp. 147-158. [10.1016/j.actaastro.2023.10.018]
Robust interplanetary trajectory design under multiple uncertainties via meta-reinforcement learning
Federici, Lorenzo; Zavoli, Alessandro
2024
Abstract
This paper focuses on the application of meta-reinforcement learning to the robust design of low-thrust interplanetary trajectories in the presence of multiple uncertainties. A closed-loop control policy is used to optimally steer the spacecraft to a final target state despite the considered perturbations. The control policy is approximated by a deep recurrent neural network, trained by policy-gradient reinforcement learning on a collection of environments featuring mixed sources of uncertainty, namely dynamic uncertainty and control execution errors. The recurrent network is able to build an internal representation of the distribution of environments, thus better adapting the control to the different stochastic scenarios. Results for a fuel-optimal low-thrust transfer between Earth and Mars are compared, in terms of optimality, constraint handling, and robustness, with those obtained via a traditional reinforcement learning approach based on a feed-forward neural network.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.
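The key architectural idea described in the abstract, a recurrent policy whose hidden state summarizes the observation history and thereby "identifies" which stochastic environment realization the spacecraft is in, can be sketched as follows. This is a minimal illustrative example, not the authors' actual network: the GRU cell, the dimensions, and the random (untrained) weights are all assumptions chosen for clarity; a real implementation would train the weights with a policy-gradient method as the paper does.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM = 7    # e.g., position, velocity, and mass (illustrative choice)
HID_DIM = 16   # recurrent hidden-state size (assumed)
ACT_DIM = 3    # thrust-vector components

# Randomly initialized GRU-cell weights. In the paper these would be
# trained by policy-gradient reinforcement learning; training is omitted.
Wz = rng.normal(0.0, 0.1, (HID_DIM, OBS_DIM + HID_DIM))
Wr = rng.normal(0.0, 0.1, (HID_DIM, OBS_DIM + HID_DIM))
Wh = rng.normal(0.0, 0.1, (HID_DIM, OBS_DIM + HID_DIM))
Wo = rng.normal(0.0, 0.1, (ACT_DIM, HID_DIM))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, obs):
    """One GRU update: the hidden state h accumulates the observation
    history, acting as the policy's internal representation of the
    environment realization (the 'meta' part of meta-RL)."""
    xh = np.concatenate([obs, h])
    z = sigmoid(Wz @ xh)                                  # update gate
    r = sigmoid(Wr @ xh)                                  # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([obs, r * h]))  # candidate state
    return (1.0 - z) * h + z * h_tilde

def policy(obs_sequence):
    """Map a history of (noisy) observations to a bounded thrust action."""
    h = np.zeros(HID_DIM)
    for obs in obs_sequence:
        h = gru_step(h, obs)
    return np.tanh(Wo @ h)  # thrust components scaled to [-1, 1]

# Two noise realizations of the same nominal observation history produce
# different hidden states, hence different control actions: the recurrent
# policy reacts to the specific stochastic scenario, which a memoryless
# feed-forward policy acting on the last observation alone cannot do.
nominal = rng.normal(0.0, 1.0, (5, OBS_DIM))
a1 = policy(nominal + rng.normal(0.0, 0.05, nominal.shape))
a2 = policy(nominal + rng.normal(0.0, 0.05, nominal.shape))
```

The contrast with the feed-forward baseline mentioned in the abstract is that `policy` here depends on the whole observation sequence through `h`, not just the current state.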