
Rearranging Pixels is a Powerful Black-Box Attack for RGB and Infrared Deep Learning Models

Jary Pomponi; Simone Scardapane
2023

Abstract

Recent research has found that neural networks for computer vision are vulnerable to several types of external attacks that modify the model's input with the malicious intent of producing a misclassification. With the increase in the number of feasible attacks, many defence approaches have been proposed to mitigate their effect and protect the models. Research on both attacks and defences has mainly focused on RGB images, while other domains, such as the infrared domain, remain underexplored. In this paper, we propose two attacks and evaluate them on multiple datasets and neural network models, showing that they outperform other established attacks in both the RGB and infrared domains. In addition, we show that our proposal can be used in an adversarial training protocol to produce models that are more robust to both adversarial attacks and natural perturbations applied to input images. Lastly, we study whether a successful attack in one domain can be transferred to an aligned image in another domain without any further tuning.
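The exact attack algorithms are not reproduced in this record. As an illustration of the general idea suggested by the title and keywords (a pixel-rearranging, query-only black-box attack driven by random search), below is a minimal sketch; the `predict_proba` query interface, parameter names, and greedy acceptance rule are assumptions made for this sketch, not the method from the paper.

```python
import numpy as np

def rearrange_attack(image, predict_proba, true_label,
                     n_swaps=5, max_queries=1000, rng=None):
    """Illustrative black-box attack: randomly swap a few pixel pairs and
    keep the rearrangement whenever the model's confidence in the true
    class decreases (plain random search, no gradient access needed).

    `predict_proba(x)` is assumed to return class probabilities for a
    single image `x`; this interface is hypothetical, not the paper's API.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    best = image.copy()
    best_score = predict_proba(best)[true_label]

    for _ in range(max_queries):
        candidate = best.copy()
        # Propose a small rearrangement: swap n_swaps random pixel pairs.
        for _ in range(n_swaps):
            y1, x1, y2, x2 = rng.integers(0, [h, w, h, w])
            candidate[y1, x1], candidate[y2, x2] = (
                candidate[y2, x2].copy(), candidate[y1, x1].copy())
        probs = predict_proba(candidate)
        # Greedy acceptance: keep the candidate if it lowers the
        # probability the model assigns to the correct class.
        if probs[true_label] < best_score:
            best, best_score = candidate, probs[true_label]
            # Stop once the model no longer predicts the true class.
            if np.argmax(probs) != true_label:
                break
    return best
```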
2023
adversarial attack; neural networks; random search; differential evolution
01 Journal publication::01a Journal article
Rearranging Pixels is a Powerful Black-Box Attack for RGB and Infrared Deep Learning Models / Pomponi, Jary; Dantoni, Daniele; Nicolosi, Alessandro; Scardapane, Simone. - In: IEEE ACCESS. - ISSN 2169-3536. - 11:(2023), pp. 11298-11306. [10.1109/ACCESS.2023.3241360]
Files attached to this record

Pomponi_Rearranging_2023.pdf
Access: open access
Type: Publisher's version (published version with the publisher's layout)
Licence: Creative Commons
Size: 1.53 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/1669282
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0