
Evaluating Adversarial Attacks and Defences in Infrared Deep Learning Monitoring Systems / Spasiano, F; Gennaro, G; Scardapane, S. - (2022), pp. 1-6. (Paper presented at the 2022 International Joint Conference on Neural Networks (IJCNN), held in Padova, Italy) [10.1109/IJCNN55064.2022.9891997].

Evaluating Adversarial Attacks and Defences in Infrared Deep Learning Monitoring Systems

Spasiano, F; Scardapane, S
2022

Abstract

This paper studies adversarial attacks and defences against deep learning models trained on infrared data to classify the presence of humans and detect their bounding boxes; unlike the standard RGB case, this is an open research problem with multiple implications for safe and secure artificial intelligence applications. The paper makes two major contributions. First, we study the effectiveness of the Projected Gradient Descent (PGD) adversarial attack against Convolutional Neural Networks (CNNs) trained exclusively on infrared data, and the effectiveness of adversarial training as a possible defence against the attack. Second, we study the response of an object detection model trained on infrared images under adversarial attacks. In particular, we propose and empirically evaluate two attacks: a classical attack from the object detection literature, and a new hybrid attack which exploits a CNN base architecture common to the classifier and the object detector. We show for the first time that adversarial attacks weaken the performance of classification and detection models trained only on infrared images. We also show that adversarial training optimized for the infinity norm increases the robustness of different classification models trained on infrared data.
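For readers unfamiliar with the attack named in the abstract, the following is a minimal sketch of an L-infinity PGD attack. It uses a toy logistic-regression classifier in place of the paper's CNNs, with illustrative weights and input; only the iterative sign-step-and-project loop is the PGD technique itself, everything else (model, data, epsilon) is assumed for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(x, w, y):
    """Binary cross-entropy loss and its gradient w.r.t. the input x
    for a logistic-regression stand-in model (not the paper's CNNs)."""
    p = sigmoid(w @ x)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad_x = (p - y) * w  # dL/dx for logistic regression
    return loss, grad_x

def pgd_attack(x, w, y, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD: repeatedly ascend the loss with a gradient-sign
    step, then project back into the eps-ball around the clean input."""
    x_adv = x.copy()
    for _ in range(steps):
        _, g = loss_and_grad(x_adv, w, y)
        x_adv = x_adv + alpha * np.sign(g)        # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into L-inf eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

rng = np.random.default_rng(0)
w = rng.normal(size=8)               # illustrative model weights
x = rng.uniform(0.2, 0.8, size=8)    # toy "infrared image" as a flat vector
y = 1.0                              # true label: human present

loss_clean, _ = loss_and_grad(x, w, y)
x_adv = pgd_attack(x, w, y)
loss_adv, _ = loss_and_grad(x_adv, w, y)
```

Adversarial training, the defence evaluated in the paper, would generate `x_adv` with this same loop during training and minimize the loss on the perturbed examples instead of the clean ones.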
2022
2022 International Joint Conference on Neural Networks (IJCNN)
deep learning; infrared; adversarial attack; adversarial training
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this product
  • Spasiano_Evaluating_2022.pdf (access restricted to repository managers)
  • Type: publisher's version (published with the publisher's layout)
  • Licence: All rights reserved
  • Size: 2.12 MB
  • Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1671583
Citations
  • PubMed Central: ND
  • Scopus: 1
  • Web of Science: 1