
Neural network quantization in federated learning at the edge / Tonellotto, N.; Gotta, A.; Nardini, F. M.; Gadler, D.; Silvestri, F.. - In: INFORMATION SCIENCES. - ISSN 0020-0255. - 575:(2021), pp. 417-436. [10.1016/j.ins.2021.06.039]

Neural network quantization in federated learning at the edge

Tonellotto N.; Silvestri F.
2021

Abstract

The massive amount of data collected in the Internet of Things (IoT) calls for effective, intelligent analytics. A recent trend supporting the use of Artificial Intelligence (AI) solutions in IoT domains is to move the computation closer to the data, i.e., from cloud-based services to edge devices. Federated learning (FL) is the primary approach adopted in this scenario to train AI-based solutions. In this work, we investigate the introduction of quantization techniques in FL to improve the efficiency of data exchange between edge servers and a cloud node. We focus on learning recurrent neural network models fed by edge data producers, using the most widely adopted neural networks for time-series prediction. Experiments on public datasets show that the proposed quantization techniques in FL reduce the volume of data exchanged between each edge server and a cloud node by up to 19×, with a minimal impact of around 5% on the test loss of the final model.
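
As a rough, illustrative sketch of the idea summarized in the abstract, the snippet below quantizes a local model update to 8-bit integers plus a single scale factor before it is uploaded to the cloud node, and dequantizes it on arrival. The function names, the symmetric per-tensor uniform scheme, the 8-bit width, and the use of NumPy are assumptions made here for illustration only; they do not reproduce the specific quantization techniques evaluated in the paper, whose reported compression (up to 19×) goes well beyond the roughly 4x obtained from int8 alone.

    # Illustrative sketch only (assumed scheme, not the paper's method):
    # symmetric uniform quantization of a weight update to int8 plus a scale.
    import numpy as np

    def quantize_update(update: np.ndarray, num_bits: int = 8):
        """Quantize a float32 weight update to signed integers plus a scale factor."""
        qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits (num_bits <= 8 so int8 fits)
        max_abs = float(np.max(np.abs(update)))
        scale = max_abs / qmax if max_abs > 0 else 1.0
        q = np.clip(np.round(update / scale), -qmax, qmax).astype(np.int8)
        return q, scale

    def dequantize_update(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover an approximate float32 update on the cloud node before averaging."""
        return q.astype(np.float32) * scale

    # Example: one edge server's update is sent as int8 codes plus one float,
    # roughly a 4x reduction in uploaded bytes compared to float32.
    rng = np.random.default_rng(0)
    delta = rng.normal(scale=0.01, size=1024).astype(np.float32)
    q, s = quantize_update(delta)
    approx = dequantize_update(q, s)
    print("max abs quantization error:", float(np.max(np.abs(delta - approx))))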
2021
Artificial neural networks; Federated learning; Internet of Things; Quantization
01 Journal publication::01a Journal article
Files attached to this product

Tonellotto_Neural-network_2021.pdf
  Access: archive administrators only
  Type: Publisher's version (published version with the publisher's layout)
  License: All rights reserved
  Size: 1.25 MB
  Format: Adobe PDF
  Contact the author

Tonellotto_preprint_Neural-network_2021..pdf
  Access: open access
  Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
  License: All rights reserved
  Size: 1.1 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1571576
Citations
  • PubMed Central: not available
  • Scopus: 39
  • Web of Science: 30