Privacy-preserving asynchronous federated learning mechanism for edge network computing / Lu, X.; Liao, Y.; Lio, P.; Hui, P.. - In: IEEE ACCESS. - ISSN 2169-3536. - 8:(2020), pp. 48970-48981. [10.1109/ACCESS.2020.2978082]

Privacy-preserving asynchronous federated learning mechanism for edge network computing

Lio P.;
2020

Abstract

In the traditional cloud architecture, data must be uploaded to the cloud for processing, which introduces delays in transmission and response. Edge networks have emerged to address this: processing data on edge nodes reduces transmission delay and improves response speed. In recent years, demand for artificial intelligence in edge networks has grown. However, the data held by any single edge node is limited and is often insufficient for machine learning, so performing machine learning across an edge network while keeping data confidential has become a research hotspot. This paper proposes a Privacy-Preserving Asynchronous Federated Learning Mechanism for Edge Network Computing (PAFLM), which allows multiple edge nodes to perform federated learning more efficiently without sharing their private data. Compared with traditional distributed learning, the proposed method compresses the communication between nodes and the parameter server during training without affecting accuracy. Moreover, it allows a node to join or quit at any point during learning, making it suitable for scenarios with highly mobile edge devices.
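
The abstract names the two main ingredients of PAFLM: compressing the gradient traffic between edge nodes and a parameter server, and updating asynchronously so that nodes can join or leave during training. As an illustration only, the following Python sketch pairs a parameter server with edge nodes that exchange top-k-sparsified gradients one node at a time; the class names, the top-k compression rule, and the staleness scaling are assumptions made for this sketch, not the paper's actual PAFLM algorithm (see the DOI above for that).

```python
# Illustrative sketch (not the paper's algorithm): asynchronous federated
# updates with top-k gradient compression between edge nodes and a
# parameter server. All names and the compression rule are assumptions.
import numpy as np


def compress_topk(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries; send (indices, values)."""
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]


class ParameterServer:
    """Holds the global model and applies sparse, possibly stale updates."""

    def __init__(self, dim: int, lr: float = 0.1):
        self.weights = np.zeros(dim)
        self.version = 0          # incremented once per applied update
        self.lr = lr

    def pull(self):
        return self.weights.copy(), self.version

    def push(self, idx, vals, node_version):
        # Staleness-aware scaling: updates computed on an older model
        # version contribute less to the global weights.
        staleness = self.version - node_version
        scale = 1.0 / (1.0 + staleness)
        self.weights[idx] -= self.lr * scale * vals
        self.version += 1


class EdgeNode:
    """Trains on local data only and sends compressed gradients."""

    def __init__(self, x: np.ndarray, y: np.ndarray, k: int = 5):
        self.x, self.y, self.k = x, y, k

    def local_gradient(self, w: np.ndarray):
        # Gradient of mean squared error for a linear model on local data.
        residual = self.x @ w - self.y
        return self.x.T @ residual / len(self.y)

    def step(self, server: ParameterServer):
        w, version = server.pull()                 # pull current global model
        idx, vals = compress_topk(self.local_gradient(w), self.k)
        server.push(idx, vals, version)            # push compressed update


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 20
    true_w = rng.normal(size=dim)
    server = ParameterServer(dim)
    # Nodes may join or leave between rounds; here three nodes participate.
    nodes = []
    for _ in range(3):
        x = rng.normal(size=(50, dim))
        nodes.append(EdgeNode(x, x @ true_w))
    for _ in range(200):
        # Asynchronous: one node at a time interacts with the server.
        nodes[rng.integers(len(nodes))].step(server)
    print("weight error:", np.linalg.norm(server.weights - true_w))
```

The sketch only illustrates why compressed, asynchronous exchanges cut per-round communication: each node uploads k index-value pairs instead of a full gradient, and no node ever uploads its raw data.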
asynchronous distributed network; edge computing; federated learning; gradient compression; privacy preservation
01 Journal publication::01a Journal article
Files attached to this item

File: LuPrivacy-Preserving_2020.pdf

Access: open access

Note: DOI 10.1109/ACCESS.2020.2978082
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 1.25 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1719699
Citations
  • PubMed Central: ND
  • Scopus: 114
  • Web of Science: 87