Denoise to Protect: A Method to Robustify Visual Recommenders from Adversaries / Merra, Felice Antonio; Anelli, Vito Walter; Di Noia, Tommaso; Malitesta, Daniele; Mancino, Alberto Carlo Maria. - (2023), pp. 1924-1928. (Paper presented at the ACM International Conference on Research and Development in Information Retrieval, held in Taipei, Taiwan) [10.1145/3539618.3591971].
Denoise to Protect: A Method to Robustify Visual Recommenders from Adversaries
Alberto Carlo Maria Mancino
2023
Abstract
While the integration of product images enhances the recommendation performance of visual-based recommender systems (VRSs), it can also make the model vulnerable to adversaries able to produce noised images capable of altering the recommendation behavior. Recently, increasingly strong adversarial attacks have emerged to raise awareness of these risks; however, effective defense methods remain an urgent open challenge. In this work, we propose the "Adversarial Image Denoiser" (AiD), a novel defense method that cleans item images of malicious perturbations. In particular, we design a training strategy whose denoising objective is to both minimize the visual differences between clean and adversarial images and preserve the ranking performance in authentic settings. We perform experiments to evaluate the efficacy of AiD against three state-of-the-art adversarial attacks mounted on standard VRSs. Code and datasets are available at https://github.com/sisinflab/Denoise-to-protect-VRS.
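The abstract describes a denoising objective with two terms: one that pulls an adversarially perturbed image back toward its clean version, and one that preserves the recommender's ranking on authentic images. Below is a minimal PyTorch sketch of how such a combined loss could be assembled; it assumes a residual convolutional denoiser and a VBPR-style scorer whose item scores depend on image features, and every name here (Denoiser, TinyVisualEncoder, denoising_loss, lambda_rank) is illustrative, not taken from the paper or its repository.

```python
# Hypothetical sketch, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Denoiser(nn.Module):
    """Small residual CNN that maps a (possibly adversarial) image
    back toward its clean counterpart."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict the perturbation and subtract it from the input.
        return x - self.net(x)

class TinyVisualEncoder(nn.Module):
    """Stand-in for the recommender's image feature extractor."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.fc = nn.Linear(16, emb_dim)

    def forward(self, x):
        h = F.relu(self.conv(x)).mean(dim=(2, 3))  # global average pooling
        return self.fc(h)

def denoising_loss(denoiser, encoder, x_clean, x_adv,
                   user_emb, pos_img, neg_img, lambda_rank=0.1):
    """Combined objective: (i) visual fidelity between denoised adversarial
    and clean images, (ii) BPR-style ranking preservation on denoised
    authentic images."""
    rec_loss = F.mse_loss(denoiser(x_adv), x_clean)
    pos_score = (user_emb * encoder(denoiser(pos_img))).sum(dim=-1)
    neg_score = (user_emb * encoder(denoiser(neg_img))).sum(dim=-1)
    rank_loss = -F.logsigmoid(pos_score - neg_score).mean()
    return rec_loss + lambda_rank * rank_loss

# Toy usage with random tensors standing in for real images/embeddings.
denoiser, encoder = Denoiser(), TinyVisualEncoder()
x_clean = torch.rand(8, 3, 64, 64)
x_adv = (x_clean + 0.03 * torch.randn_like(x_clean)).clamp(0, 1)
user_emb = torch.randn(8, 64)
neg_img = torch.rand(8, 3, 64, 64)
loss = denoising_loss(denoiser, encoder, x_clean, x_adv, user_emb, x_clean, neg_img)
loss.backward()
```

The weighting parameter (lambda_rank here) trades off reconstruction quality against ranking preservation; the actual formulation and balancing used by AiD are specified in the paper and repository linked above.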
File: Merra_Denoise_2023.pdf (open access)
Note: https://doi.org/10.1145/3539618.3591971
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 1.45 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.