A Discrete-Time Multi-Hop Consensus Protocol for Decentralized Federated Learning

Menegatti D.; Giuseppi A.; Pietrabissa A.
2023

Abstract

This paper presents a Federated Learning (FL) algorithm that enables the decentralization of any FL solution employing a model-averaging procedure. The proposed algorithm attains faster convergence rates and no significant performance loss with respect to the original centralized FL implementation, while reducing the communication overhead compared to existing consensus-based and centralized solutions. To this end, a Multi-Hop consensus protocol, originally developed in the context of dynamical-system consensus theory and analysed via standard Lyapunov stability arguments, is proposed to ensure that all federation clients share the same average model using only information obtained from their m-step neighbours. Experimental results on different communication topologies and on the MNIST and MedMNIST v2 datasets validate the properties of the algorithm, showing a performance drop of about 1% with respect to the centralized FL setting.
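
To give a concrete, simplified picture of the decentralized model-averaging idea described in the abstract, the following Python sketch shows a synchronous m-hop consensus round in which every client mixes its parameter vector with those of the clients reachable within m hops. This is only an illustration of the concept, not the authors' protocol: the uniform mixing weights, the ring topology, and the choice m = 2 are assumptions made here for the example, and the paper's Lyapunov-based analysis and weight design are not reproduced.

import numpy as np

# Illustrative sketch only: uniform mixing weights and a regular ring topology are
# assumptions. On a regular graph this weighting is doubly stochastic, so repeated
# rounds drive every client to the network-wide average model.

def m_hop_neighbours(adj, node, m):
    """Nodes reachable from `node` in at most m hops (excluding `node` itself)."""
    reached, frontier = {node}, {node}
    for _ in range(m):
        frontier = {int(j) for i in frontier for j in np.flatnonzero(adj[i])} - reached
        reached |= frontier
    return reached - {node}

def consensus_round(models, adj, m):
    """One synchronous round: each client averages its model with its m-hop neighbours."""
    updated = []
    for i, w in enumerate(models):
        neigh = m_hop_neighbours(adj, i, m)
        updated.append(np.stack([w] + [models[j] for j in neigh]).mean(axis=0))
    return updated

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_clients, dim = 6, 4
    adj = np.zeros((n_clients, n_clients), dtype=int)   # ring communication topology
    for i in range(n_clients):
        adj[i, (i + 1) % n_clients] = adj[(i + 1) % n_clients, i] = 1
    models = [rng.normal(size=dim) for _ in range(n_clients)]  # locally trained parameters
    centralized_avg = np.mean(models, axis=0)            # what a central server would compute
    for _ in range(20):
        models = consensus_round(models, adj, m=2)
    print("max deviation from centralized average:",
          np.abs(np.stack(models) - centralized_avg).max())

Reaching agreement on the averaged model this way removes the need for a central aggregation server; in the paper the convergence of the multi-hop protocol is instead guaranteed by a Lyapunov argument on general communication topologies.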
discrete-time consensus; multi-hop; federated learning; distributed systems
01 Journal publication::01a Journal article
A Discrete-Time Multi-Hop Consensus Protocol for Decentralized Federated Learning / Menegatti, D.; Giuseppi, A.; Manfredi, S.; Pietrabissa, A.. - In: IEEE ACCESS. - ISSN 2169-3536. - 11:(2023), pp. 80613-80623. [10.1109/ACCESS.2023.3299443]
Files attached to this item

Menegatti_A-Discrete_2023.pdf (1.04 MB, Adobe PDF) - access restricted to archive administrators
Note: DOI: 10.1109/ACCESS.2023.3299443
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons


Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1692333
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science (ISI): 1