
Have You Poisoned My Data? Defending Neural Networks Against Data Poisoning / De Gaspari, F.; Hitaj, D.; Mancini, L. V. - Vol. 14982 (2024), pp. 85-104. (Paper presented at the European Symposium on Research in Computer Security, held in Bydgoszcz, Poland) [10.1007/978-3-031-70879-4_5].

Have You Poisoned My Data? Defending Neural Networks Against Data Poisoning

De Gaspari F. (first); Hitaj D. (second); Mancini L. V. (last)
2024

Abstract

The unprecedented availability of training data has fueled the rapid development of powerful neural networks in recent years. However, the need for such large amounts of data introduces potential threats such as poisoning attacks: adversarial manipulations of the training data aimed at compromising the learned model to achieve a given adversarial goal. This paper investigates defenses against clean-label poisoning attacks and proposes a novel approach to detect and filter poisoned datapoints in the transfer learning setting. We define a new characteristic vector representation of datapoints and show that it effectively captures the intrinsic properties of the data distribution. Through experimental analysis, we demonstrate that effective poison datapoints can be successfully differentiated from clean datapoints in the characteristic vector space. We thoroughly evaluate the proposed approach and compare it to existing state-of-the-art defenses across multiple architectures, datasets, and poison budgets. Our evaluation shows that our proposal outperforms existing approaches in defense rate and final trained model performance across all experimental settings.
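The abstract does not specify how the characteristic vectors are constructed or how the filtering is performed, so the sketch below is only a hypothetical illustration of the general defense pattern it describes: embed each training point with a frozen pretrained feature extractor (a stand-in for the paper's characteristic vector representation, which may differ) and discard per-class outliers before fine-tuning. The ResNet-18 backbone, the centroid-distance detector, and the 2-sigma cutoff are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of per-class outlier filtering in a feature space,
# standing in for the paper's characteristic vector representation.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pretrained backbone; replace the classifier head with identity
# so the forward pass returns penultimate-layer embeddings.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

transform = T.Compose([T.Resize(224), T.ToTensor()])
dataset = CIFAR10(root="./data", train=True, download=True, transform=transform)
loader = DataLoader(dataset, batch_size=256)

# Embed every training point with the frozen extractor.
feats, labels = [], []
with torch.no_grad():
    for x, y in loader:
        feats.append(backbone(x.to(device)).cpu())
        labels.append(y)
feats, labels = torch.cat(feats), torch.cat(labels)

# Flag points far from their class centroid; the 2-sigma threshold is an
# illustrative assumption, not the paper's actual detector.
keep = torch.ones(len(feats), dtype=torch.bool)
for c in labels.unique():
    idx = (labels == c).nonzero(as_tuple=True)[0]
    class_feats = feats[idx]
    dists = (class_feats - class_feats.mean(dim=0)).norm(dim=1)
    cutoff = dists.mean() + 2.0 * dists.std()
    keep[idx[dists > cutoff]] = False

print(f"Keeping {int(keep.sum())} of {len(feats)} training points for fine-tuning.")
```

The surviving subset (indexed by `keep`) would then be used to fine-tune the transferred model; a real implementation should follow the characteristic vector definition and detector given in the paper itself.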
2024
European Symposium on Research in Computer Security
cybersecurity; data poisoning; neural networks
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1721778
Warning! The displayed data have not been validated by the university.

Citations
  • Scopus: 0