
A Deep Reinforcement Learning Approach to Autonomous Spacecraft Docking

Federica Angeletti;Paolo Iannelli;Paolo Gasbarri
2021

Abstract

A great deal of research interest is currently devoted to enhancing the autonomy of space systems, with particular attention to future on-orbit servicing and docking operations. In this scenario, modern machine learning algorithms are a key asset for the development of such activities. This paper contributes to this field by implementing a Deep Reinforcement Learning (DRL) Actor/Critic approach as a feedback control law for a three-degrees-of-freedom autonomous docking manoeuvre. In detail, the agent's policy is trained to map a set of generally available observations (i.e. spacecraft attitude, position and the corresponding rates) to a group of actions (the commands exerted on the chaser spacecraft) so as to maximize a given reward signal. The policy is learned to successfully carry out the manoeuvre while both preventing collisions and respecting constraints on the docking conditions, without relying on pre-programmed reference controllers. To this end, the DRL framework is developed in the Matlab/Simulink environment by coupling three Matlab tools, namely Simscape Multibody to simulate the spacecraft dynamics, the Reinforcement Learning Toolbox to set up the learning environment and the Deep Learning Toolbox to design the DRL policy neural networks. Finally, simulations are carried out to verify the efficacy of the proposed solution, offering ground for further developments.
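
The following is a minimal sketch of how an actor/critic agent could be set up with the Matlab toolchain named in the abstract. The model name 'docking_env', the RL Agent block path, the observation/action dimensions and the training thresholds are illustrative assumptions, not taken from the paper; a DDPG agent is used here as one common actor/critic choice, since the abstract does not state the specific algorithm.

% Minimal sketch (not the authors' implementation): illustrative model
% name, block path, signal dimensions and training thresholds.
% Requires Simulink, Simscape Multibody (for the plant model),
% the Reinforcement Learning Toolbox and the Deep Learning Toolbox.

% Observations: chaser attitude, position and the corresponding rates
% (12 signals assumed here; the paper's exact observation vector may differ).
obsInfo = rlNumericSpec([12 1]);

% Actions: normalized force/torque commands on the chaser (6 channels assumed).
actInfo = rlNumericSpec([6 1],'LowerLimit',-1,'UpperLimit',1);

% Hypothetical Simulink model containing the Simscape Multibody dynamics
% and an RL Agent block.
mdl = 'docking_env';
env = rlSimulinkEnv(mdl,[mdl '/RL Agent'],obsInfo,actInfo);

% Continuous actor/critic agent with default deep actor and critic networks
% (DDPG chosen as an example; the abstract does not name the algorithm).
agent = rlDDPGAgent(obsInfo,actInfo);

% Train until an average-reward threshold is reached (placeholder values).
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',2000, ...
    'MaxStepsPerEpisode',1000, ...
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',500);
trainStats = train(agent,env,trainOpts);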
2021
XXVI AIDAA International Congress
deep learning; proximity operations; docking
04 Publication in conference proceedings::04b Conference paper in volume
A Deep Reinforcement Learning Approach to Autonomous Spacecraft Docking / Angeletti, Federica; Iannelli, Paolo; Gasbarri, Paolo. - (2021), pp. 929-935. (Intervento presentato al convegno XXVI AIDAA International Congress tenutosi a PISA).
Files attached to this product

Angeletti_A-deep_2021.pdf
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 634.73 kB
Format: Adobe PDF
Access: restricted (archive administrators only); contact the author

Angeletti_forntespizio-indice_A-deep_2021.pdf
Type: Other attached material
License: All rights reserved
Size: 4.62 MB
Format: Adobe PDF
Access: restricted (archive administrators only); contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1690337