
Biasing Federated Learning With a New Adversarial Graph Attention Network / Li, K.; Zheng, J.; Ni, W.; Huang, H.; Lio, P.; Dressler, F.; Akan, O. B.. - In: IEEE TRANSACTIONS ON MOBILE COMPUTING. - ISSN 1536-1233. - (2024). [10.1109/TMC.2024.3499371]

Biasing Federated Learning With a New Adversarial Graph Attention Network

Lio P.
2024

Abstract

Fairness in Federated Learning (FL) is imperative not only for the ethical use of technology but also for ensuring that models provide accurate, equitable, and beneficial outcomes across varied user demographics and equipment. This paper proposes a new adversarial architecture, referred to as the Adversarial Graph Attention Network (AGAT), which deliberately launches fairness attacks with the aim of biasing the learning process across FL. AGAT is designed to synthesize malicious, biasing model updates such that the minimum Kullback-Leibler (KL) divergence between a user's model update and the global model is maximized. Because only a limited set of labeled input-output biasing data samples is available, a surrogate model is created that captures the behavior of a complex malicious model update. Moreover, a graph autoencoder (GAE) is designed within the AGAT architecture and trained with sub-gradient descent to manipulatively reconstruct the correlations among the model updates and maximize the reconstruction loss, while keeping the malicious, biasing model updates undetectable. The proposed AGAT attack is implemented in PyTorch, and experiments show that AGAT increases the minimum KL divergence of benign model updates by 60.9% and bypasses the detection of existing defense models. The source code of the AGAT attack is released on GitHub.
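As a rough illustration of the objective described in the abstract, the following PyTorch sketch (not the authors' released code) shows how the minimum KL divergence between a crafted malicious update and the benign updates could be computed and then increased with sub-gradient ascent. The softmax normalization of flattened updates, the tensor shapes, and all function names here are assumptions introduced only for this example.

import torch
import torch.nn.functional as F

def update_as_distribution(update: torch.Tensor) -> torch.Tensor:
    # Map a flattened model update to a probability vector (assumed normalization).
    return F.softmax(update, dim=-1)

def min_kl_to_benign(malicious_update: torch.Tensor,
                     benign_updates: torch.Tensor) -> torch.Tensor:
    # Smallest KL(benign_i || malicious) over all benign updates.
    # The abstract describes maximizing this minimum, i.e. pushing the crafted
    # update away from every benign update.
    p_mal = update_as_distribution(malicious_update)   # shape (d,)
    p_ben = update_as_distribution(benign_updates)     # shape (n, d)
    kl = (p_ben * (p_ben.clamp_min(1e-12).log()
                   - p_mal.clamp_min(1e-12).log())).sum(dim=-1)
    return kl.min()

# Hypothetical usage: ascend the (sub)gradient of the minimum KL w.r.t. the crafted update.
n_users, dim = 8, 1024
benign = torch.randn(n_users, dim)
crafted = torch.randn(dim, requires_grad=True)
opt = torch.optim.SGD([crafted], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = -min_kl_to_benign(crafted, benign)   # negate to maximize the minimum KL
    loss.backward()                             # min() yields a subgradient at ties
    opt.step()

In the actual AGAT attack, the abstract indicates this objective is pursued through a surrogate model and a graph autoencoder that manipulatively reconstructs the correlations among model updates, rather than by optimizing a raw update vector directly as in this sketch.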
Adversarial graph attention network; cyberattacks; fairness; feature correlations; federated learning
01 Journal publication::01a Journal article
Files attached to this item
File: Li_Biasing_2024.pdf (archive administrators only)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 13.53 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1728758
Citations
  • Scopus 0