Calzavara, S.; Lucchese, C.; Tolomei, G. (2019). Adversarial training of gradient-boosted decision trees. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM 2019), Beijing, China, pp. 2429-2432. DOI: 10.1145/3357384.3358149.
Adversarial training of gradient-boosted decision trees
Tolomei, Gabriele
2019
Abstract
Adversarial training is a prominent approach for making machine learning (ML) models resilient to adversarial examples. Unfortunately, this approach assumes the use of differentiable learning models and thus cannot be applied to other relevant ML techniques, such as ensembles of decision trees. In this paper, we generalize adversarial training to gradient-boosted decision trees (GBDTs). Our experiments show that classifiers based on existing learning techniques either degrade sharply under attack or perform unsatisfactorily in the absence of attacks, whereas adversarial training provides a very good trade-off between resilience to attacks and accuracy in the unattacked setting.
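To make the idea in the abstract concrete, the following minimal sketch illustrates how adversarial example generation can be interleaved with boosting rounds. It is an assumption-laden illustration, not the algorithm proposed in the paper: the attacker (`greedy_attack`), the training loop (`adversarially_boosted_gbdt`), the perturbation budget `eps`, and the use of scikit-learn's `GradientBoostingClassifier` with `warm_start` are all hypothetical choices made here for illustration.

```python
# Hypothetical sketch: adversarial training interleaved with boosting rounds.
# This is NOT the algorithm from the paper; the attacker, the training loop,
# and all parameters below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def greedy_attack(model, X, y, eps=0.1):
    """Greedy coordinate search: perturb each feature by +/-eps, keeping the
    change that most lowers the predicted probability of the true class.
    Assumes integer class labels 0..K-1 (so y[i] indexes predict_proba)."""
    X_adv = X.copy()
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            best_val = X_adv[i, j]
            best_p = model.predict_proba(X_adv[i:i + 1])[0, y[i]]
            for delta in (-eps, eps):
                X_adv[i, j] = X[i, j] + delta
                p = model.predict_proba(X_adv[i:i + 1])[0, y[i]]
                if p < best_p:
                    best_val, best_p = X_adv[i, j], p
            X_adv[i, j] = best_val
    return X_adv

def adversarially_boosted_gbdt(X, y, rounds=5, trees_per_round=20):
    # warm_start keeps the trees already fitted and appends new ones, so each
    # round can grow the ensemble on the adversarially augmented training set.
    model = GradientBoostingClassifier(n_estimators=trees_per_round, warm_start=True)
    model.fit(X, y)
    X_aug, y_aug = X, y
    for _ in range(rounds - 1):
        X_adv = greedy_attack(model, X, y)      # attack the current ensemble
        X_aug = np.vstack([X_aug, X_adv])       # add adversarial examples
        y_aug = np.concatenate([y_aug, y])
        model.n_estimators += trees_per_round   # schedule additional trees
        model.fit(X_aug, y_aug)                 # new trees fit the augmented set
    return model
```

Because GBDTs are not differentiable, gradient-based attacks do not apply; the sketch substitutes a greedy coordinate search over an L-infinity ball, and each round grows the ensemble on the adversarially augmented training set.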
File | Size | Format | Type | License | Access
---|---|---|---|---|---
Calzavara_Adversarial-training_2019.pdf | 1.2 MB | Adobe PDF | Publisher's version (published with the publisher's layout) | All rights reserved | Restricted to archive administrators (contact the author)