HO-FMN: Hyperparameter optimization for fast minimum-norm attacks

Luca Scionis (co-first author); Giorgio Piras (second author)
2025

Abstract

Gradient-based attacks are a primary tool for evaluating the robustness of machine-learning models. However, many attacks tend to provide overly optimistic evaluations, as they use fixed loss functions, optimizers, step-size schedulers, and default hyperparameters. In this work, we tackle these limitations by proposing a parametric variation of the well-known fast minimum-norm attack algorithm, whose loss, optimizer, step-size scheduler, and hyperparameters can be dynamically adjusted. We re-evaluate 12 robust models, showing that our attack finds smaller adversarial perturbations without requiring any additional tuning. This also enables reporting adversarial robustness as a function of the perturbation budget, providing a more complete evaluation than that offered by fixed-budget attacks, while remaining efficient. We release our open-source code at https://github.com/pralab/HO-FMN.
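To make the recipe concrete, the sketch below (our illustration in PyTorch, not the code from the linked repository) shows the two ingredients the abstract describes: a gradient attack whose loss function and step-size schedule are swappable components, and a search over those components that keeps the configuration yielding the smallest median perturbation norm. All names here (toy_min_norm_attack, search_best_config, the candidate losses and schedules) are hypothetical placeholders; HO-FMN itself tunes the full fast minimum-norm attack, including the optimizer and its hyperparameters.

import itertools
import math
import torch
import torch.nn.functional as F

def logit_diff(logits, y):
    # Margin loss: true-class logit minus the highest other-class logit;
    # driving it down pushes samples across the decision boundary.
    true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    other = logits.scatter(1, y.unsqueeze(1), float("-inf")).amax(dim=1)
    return (true - other).mean()

# Candidate components; descending each loss promotes misclassification.
LOSSES = {
    "neg_ce": lambda lo, y: -F.cross_entropy(lo, y),
    "logit_diff": logit_diff,
}
SCHEDULES = {
    "constant": lambda t, T: 1.0,
    "cosine": lambda t, T: 0.5 * (1.0 + math.cos(math.pi * t / T)),
}

def toy_min_norm_attack(model, x, y, loss_fn, schedule, step_size, steps=100):
    # Gradient descent on the attack loss, remembering per sample the
    # smallest-norm perturbation found so far that misclassifies.
    delta = torch.zeros_like(x, requires_grad=True)
    best_delta = torch.zeros_like(x)
    best_norm = torch.full((x.shape[0],), float("inf"), device=x.device)
    for t in range(steps):
        logits = model(x + delta)
        (grad,) = torch.autograd.grad(loss_fn(logits, y), delta)
        with torch.no_grad():
            adv = logits.argmax(dim=1) != y
            norms = delta.flatten(1).norm(dim=1)
            better = adv & (norms < best_norm)
            best_norm = torch.where(better, norms, best_norm)
            best_delta[better] = delta[better]
            # Normalized gradient step, scaled by the step-size schedule.
            g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            g = grad / g_norm.view(-1, *([1] * (x.dim() - 1)))
            delta -= step_size * schedule(t, steps) * g
    return best_delta, best_norm

def search_best_config(model, x, y, step_sizes=(1.0, 0.3, 0.1)):
    # Exhaustive search over configurations; the winner is the one with
    # the smallest median perturbation norm (smaller = stronger attack).
    best_cfg, best_score = None, float("inf")
    for (ln, lf), (sn, sf), lr in itertools.product(
            LOSSES.items(), SCHEDULES.items(), step_sizes):
        _, norms = toy_min_norm_attack(model, x, y, lf, sf, lr)
        score = norms.median().item()
        if score < best_score:
            best_cfg, best_score = (ln, sn, lr), score
    return best_cfg, best_score

In practice, such a search would run on a held-out batch, and the per-sample norms returned by the winning configuration directly yield a robustness-versus-perturbation-budget curve, which is the kind of evaluation the abstract advocates over fixed-budget attacks.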
Machine Learning; Adversarial Machine Learning; Optimization
01 Journal publication::01a Journal article
HO-FMN: Hyperparameter optimization for fast minimum-norm attacks / Mura, Raffaele; Floris, Giuseppe; Scionis, Luca; Piras, Giorgio; Pintor, Maura; Demontis, Ambra; Giacinto, Giorgio; Biggio, Battista; Roli, Fabio. - In: NEUROCOMPUTING. - ISSN 0925-2312. - 616:February 2025(2025). [10.1016/j.neucom.2024.128918]
Files attached to this record
Mura_postprint_HO-FMN-Hyperparameter_2024.pdf

Access: restricted (archive administrators only; contact the author)

Note: https://doi.org/10.1016/j.neucom.2024.128918
Type: Post-print (version following peer review, accepted for publication)
License: Creative Commons
Size: 1.88 MB
Format: Adobe PDF
Mura_preprint_HO-FMN-Hyperparameter.pdf

Access: open access

Note: https://doi.org/10.1016/j.neucom.2024.128918
Type: Pre-print (manuscript submitted to the publisher, before peer review)
License: All rights reserved
Size: 1.43 MB
Format: Adobe PDF
Mura_HO-FMN-Hyperparameter_2025.pdf

Access: open access

Note: https://doi.org/10.1016/j.neucom.2024.128918
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 2.62 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1726979
Citations
  • Scopus: 3
  • Web of Science (ISI): 1