
Enhancing Federated Reinforcement Learning: A Consensus-based Approach for Both Homogeneous and Heterogeneous Agents

Giuseppi, Alessandro; Menegatti, Danilo; Pietrabissa, Antonio
2025

Abstract

Federated reinforcement learning (FedRL) is an emerging paradigm in data-driven control in which a group of decision-making agents cooperates to learn optimal control laws through a distributed reinforcement learning procedure, under the constraint that no process/control data are shared. In the typical FedRL setting, a centralized entity orchestrates the distributed training process. To remove this design limitation, this work proposes a fully decentralized approach that leverages results from consensus theory. The proposed algorithm, named FedRLCon, can deal with: 1) scenarios with homogeneous agents, which can share their actor and, possibly, their critic networks; 2) scenarios with heterogeneous agents, in which agents may share only their critic networks. The proposed algorithm is validated on two scenarios: a resource management problem in a communication network and a smart grid case study. Our tests show that practically no performance is lost due to decentralization.
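The FedRLCon algorithm itself is not reproduced in this record, but the consensus mechanism the abstract refers to can be illustrated. The sketch below is a minimal, hypothetical example of a standard discrete-time consensus step: each agent nudges its network parameters toward those of its neighbours, and repeated steps drive all agents to the network-wide average without any central coordinator. All names (consensus_step, adjacency, epsilon) and the line-graph topology are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def consensus_step(weights, adjacency, epsilon):
    """One discrete-time consensus averaging step over agents' parameters.

    weights:   list of 1-D numpy arrays, one parameter vector per agent
    adjacency: symmetric 0/1 matrix describing which agents communicate
    epsilon:   step size; convergence requires 0 < epsilon < 1/max_degree
    """
    n = len(weights)
    new_weights = []
    for i in range(n):
        update = weights[i].copy()
        for j in range(n):
            if adjacency[i][j]:
                # Move agent i's parameters toward neighbour j's values;
                # iterating converges to the average over a connected graph.
                update += epsilon * (weights[j] - weights[i])
        new_weights.append(update)
    return new_weights

# Hypothetical usage: three homogeneous agents average their actor weights
# over a line graph (1-2-3) after a round of local training.
rng = np.random.default_rng(0)
actor_weights = [rng.standard_normal(4) for _ in range(3)]
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
for _ in range(50):
    actor_weights = consensus_step(actor_weights, A, epsilon=0.25)
print(actor_weights[0])  # all agents end up close to the initial average
```

In the homogeneous case described in the abstract, such a step would presumably be applied to both actor and critic parameters; in the heterogeneous case, only the critic parameters would be averaged, since the agents' actors differ.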
control systems; distributed control; federated learning; intelligent control; reinforcement learning
01 Journal publication::01a Journal article
Enhancing Federated Reinforcement Learning: A Consensus-based Approach for Both Homogeneous and Heterogeneous Agents / Giuseppi, Alessandro; Menegatti, Danilo; Pietrabissa, Antonio. - In: MACHINE INTELLIGENCE RESEARCH. - ISSN 2731-5398. - 22:5(2025), pp. 929-940. [10.1007/s11633-025-1550-8]
Files attached to this record

Giuseppi_Enhancing-Federated_2025.pdf
Access: open access
Note: https://link.springer.com/content/pdf/10.1007/s11633-025-1550-8.pdf
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 1.51 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1752379
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 0