Adversarial Attacks Against Visually Aware Fashion Outfit Recommender Systems

Attimonelli, M (first author)
2022

Abstract

Pre-trained CNN models are frequently employed for a variety of machine learning tasks, including visual recognition and recommendation. We are interested in examining the application of adversarial machine learning attacks to the vertical domain of fashion and retail products. Specifically, the present work focuses on the robustness of cutting-edge CNN models against state-of-the-art adversarial machine learning attacks that have shown promising performance in general visual classification tasks. To achieve this objective, we conducted adversarial experiments on two prominent fashion-related tasks: visual clothing classification and outfit recommendation. Large-scale experimental validation of the fashion category classification task on the real-world Polyvore dataset of outfits reveals that ResNet50 is one of the most resilient networks for fashion categorization, whereas DenseNet169 and MobileNetV2 are the most vulnerable; in terms of runtime, however, DenseNet169 is the most time-consuming network to attack. The results of the outfit recommendation task, by contrast, were somewhat unexpected. In both the push and nuke attack scenarios, and overall, adversarial attacks were unable to degrade the quality of outfit recommenders. The only exception was the more sophisticated DeepFool attack, which weakened the quality of visual recommenders only at large attack budget (epsilon) values. One plausible explanation for this phenomenon is that a collection of adversarially perturbed images can nonetheless appear pleasing to the human eye; it may also be a result of the larger image sizes in the selected dataset. Overall, the results of this study are intriguing and encourage further research on adversarial attacks and the security of fashion recommender systems.
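As a concrete illustration of the epsilon-budget attacks evaluated on the classification task, the following is a minimal sketch (not the authors' implementation) of an FGSM-style attack against a pretrained ResNet50 in PyTorch; the input shape, label, and epsilon value are illustrative assumptions, and input normalization is omitted for brevity.

```python
# Minimal sketch: an FGSM-style attack with an L-infinity budget (epsilon)
# against a pretrained ResNet50. Model choice, input, and epsilon are
# illustrative; this is not the paper's exact experimental setup.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon):
    """Perturb `image` within an L-infinity budget `epsilon` so the
    classifier's loss on the true `label` increases (untargeted attack)."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Example with a random "image" batch and label, epsilon = 8/255.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y, epsilon=8 / 255)
print((x_adv - x).abs().max())  # perturbation stays within the budget
```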
2022
16th ACM Conference on Recommender Systems
Adversarial; Attack; Fashion; Recommender systems
04 Conference proceedings publication::04b Conference paper in a volume
Adversarial Attacks Against Visually Aware Fashion Outfit Recommender Systems / Attimonelli, M; Amatulli, G; Di Gioia, L; Malitesta, D; Deldjoo, Y; Di Noia, T. - 981 (2022), pp. 63-78. (Presented at the 16th ACM Conference on Recommender Systems, Seattle, WA, USA) [10.1007/978-3-031-22192-7_4].
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1690767
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science (ISI): 0