
Eluding Secure Aggregation in Federated Learning via Model Inconsistency / Pasquini, D.; Francati, D.; Ateniese, G. - (2022), pp. 2429-2443. (28th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022, Los Angeles, USA) [10.1145/3548606.3560557].

Eluding Secure Aggregation in Federated Learning via Model Inconsistency

Pasquini D. (first author); Francati D. (second author); Ateniese G.
2022

Abstract

Secure aggregation is a cryptographic protocol that securely computes the aggregation of its inputs. It is pivotal in keeping model updates private in federated learning. Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks. In this work, we show that a malicious server can easily elude secure aggregation as if the latter were not in place. We devise two different attacks capable of inferring information on individual private training datasets, independently of the number of users participating in the secure aggregation. This makes them concrete threats in large-scale, real-world federated learning applications. The attacks are generic and equally effective regardless of the secure aggregation protocol used. They exploit a vulnerability of the federated learning protocol caused by incorrect usage of secure aggregation and lack of parameter validation. Our work demonstrates that current implementations of federated learning with secure aggregation offer only a "false sense of security."
2022
28th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022
federated learning; model inconsistency; secure aggregation
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1749951
Warning: the displayed data have not been validated by the university.

Citazioni
  • Scopus 96