
Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News Without Modifying it / Siciliano, Federico; Maiano, Luca; Papa, Lorenzo; Baccini, Federica; Amerini, Irene; Silvestri, Fabrizio. - (2025), pp. 525-530. (Paper presented at the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, held in Torino, Italy) [10.1007/978-3-031-74627-7_44].

Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News Without Modifying it

Federico Siciliano (Methodology); Luca Maiano (Methodology); Lorenzo Papa (Methodology); Federica Baccini (Writing – Review & Editing); Irene Amerini (Supervision); Fabrizio Silvestri (Supervision)
2025

Abstract

Fake news detection models are critical to countering disinformation but can be manipulated through adversarial attacks. In this position paper, we analyze how an attacker can degrade the performance of an online-learning detector on specific news content without being able to manipulate the original target news. This scenario is quite plausible in contexts such as social networks, where the attacker cannot exert complete control over all the information. We therefore show how an attacker could introduce poisoned samples into the training data to manipulate the behavior of an online learning method. Our initial findings reveal that the susceptibility of logistic regression models varies with model complexity and attack type.
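The attack the abstract describes can be sketched in a few lines. The following toy example is illustrative only and is not the paper's method or data: it assumes an online (single-pass SGD) logistic regression over a synthetic 2-D feature stream, and shows how appending mislabeled points near a target item, without ever modifying the target itself, can flip the model's prediction on it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "news feature" stream: class 0 clustered at (-2,-2), class 1 at (+2,+2).
X0 = rng.normal(-2.0, 1.0, size=(200, 2))
X1 = rng.normal(+2.0, 1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)
order = rng.permutation(len(y))
X, y = X[order], y[order]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_lr(stream_X, stream_y, lr=0.1):
    """Single-pass SGD training of a logistic regression (weights + bias)."""
    w, b = np.zeros(2), 0.0
    for x, t in zip(stream_X, stream_y):
        p = sigmoid(w @ x + b)
        w -= lr * (p - t) * x   # gradient of the log loss w.r.t. w
        b -= lr * (p - t)
    return w, b

# The target news item (true class 1). The attacker never touches it.
target = np.array([2.0, 2.0])

# Clean stream: the detector classifies the target correctly.
w, b = online_lr(X, y)
clean_pred = sigmoid(w @ target + b)

# Poisoning: append points that *resemble* the target but carry label 0.
poison_X = target + rng.normal(0.0, 0.3, size=(80, 2))
poison_y = np.zeros(80, dtype=int)
wp, bp = online_lr(np.vstack([X, poison_X]), np.concatenate([y, poison_y]))
poisoned_pred = sigmoid(wp @ target + bp)

print(f"clean: {clean_pred:.3f}  poisoned: {poisoned_pred:.3f}")
```

The poisoned points enter only the training stream; the target item is unchanged, yet the online learner's decision on it flips from class 1 to class 0.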
2025
Joint European Conference on Machine Learning and Knowledge Discovery in Databases
Misinformation; Data Poisoning; Online Learning.
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1731395
Warning: the displayed data have not been validated by the university.
