Generating actionable interpretations from ensembles of decision trees / Tolomei, Gabriele; Silvestri, Fabrizio. - In: IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING. - ISSN 1041-4347. - 33:4(2021), pp. 1540-1553. [10.1109/TKDE.2019.2945326]
Generating actionable interpretations from ensembles of decision trees
Tolomei, Gabriele; Silvestri, Fabrizio
2021
Abstract
Machine-learned models are often perceived as "black boxes": they are given inputs and hopefully produce desired outputs. There are many circumstances, however, where human-interpretability is crucial to understand (i) why a model outputs a certain prediction on a given instance, (ii) which adjustable features of that instance should be modified, and finally (iii) how the prediction changes when the mutated instance is fed back to the model. In this paper, we present a technique that exploits the feedback loop originating from the internals of any ensemble of decision trees to offer recommendations for transforming a k-labelled predicted instance into a k'-labelled one (for any possible pair of class labels k, k'). Our proposed algorithm perturbs individual feature values of an instance so as to change the prediction that the ensemble outputs on the transformed instance. This is achieved under two constraints: the cost and tolerance of the transformation. Finally, we evaluate our approach on four distinct application domains: online advertising, healthcare, spam filtering, and handwritten digit recognition. Experiments confirm that our solution is able to suggest changes to feature values that help interpret the rationale of model predictions, making it useful in practice, especially when implemented efficiently.
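The abstract describes the core mechanism only at a high level: perturb individual feature values of an instance until the ensemble's prediction flips, subject to a cost budget. As a rough illustration of that idea, the sketch below runs a naive single-feature search against a scikit-learn RandomForestClassifier; the dataset, the `suggest_change` helper, and the normalized cost measure are assumptions made for this example, not the authors' actual algorithm, which exploits the internal structure of the trees rather than brute-force probing.

```python
# Minimal sketch (NOT the paper's algorithm): brute-force search for a
# single-feature perturbation that flips a tree ensemble's prediction,
# under a normalized "cost of transformation" budget.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)          # any tabular dataset would do
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def suggest_change(x, target_label, max_cost=1.0, n_candidates=20):
    """Return (feature_index, new_value, cost) that makes the model predict
    `target_label`, or None if no single-feature change within budget works."""
    best = None
    for j in range(x.shape[0]):
        lo, hi = X[:, j].min(), X[:, j].max()        # probe the observed range of feature j
        for v in np.linspace(lo, hi, n_candidates):
            x_new = x.copy()
            x_new[j] = v
            if model.predict(x_new.reshape(1, -1))[0] != target_label:
                continue
            cost = abs(v - x[j]) / (hi - lo + 1e-12)  # relative size of the perturbation
            if cost <= max_cost and (best is None or cost < best[2]):
                best = (j, v, cost)
    return best

x = X[0]                                             # an instance the model labels, say, 0
print("original prediction:", model.predict(x.reshape(1, -1))[0])
print("suggested change   :", suggest_change(x, target_label=1))
```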
File | Access | Type | License | Size | Format
---|---|---|---|---|---
Tolomei_postprint_generating_2019.pdf (DOI: 10.1109/TKDE.2019.2945326) | Open access | Post-print (version after peer review, accepted for publication) | All rights reserved | 3.29 MB | Adobe PDF
Tolomei_generating_2021.pdf | Archive administrators only (contact the author) | Publisher's version (published with the publisher's layout) | All rights reserved | 2.36 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.