Improving the explainability of autoencoder factors for commodities through forecast-based Shapley values / Cerqueti, Roy; Iovanella, Antonio; Mattera, Raffaele; Storani, Saverio. - In: SCIENTIFIC REPORTS. - ISSN 2045-2322. - 14:1(2024). [10.1038/s41598-024-70342-5]

Improving the explainability of autoencoder factors for commodities through forecast-based Shapley values

Cerqueti, Roy; Mattera, Raffaele; Storani, Saverio
2024

Abstract

Autoencoders are dimension-reduction models in machine learning that can be thought of as a neural-network counterpart of principal component analysis (PCA). Due to their flexibility and good performance, autoencoders have recently been used to estimate nonlinear factor models in finance. Their main weakness is that the results are less explainable than those obtained with PCA. In this paper, we propose adopting the Shapley value to improve the explainability of autoencoders in the context of nonlinear factor models. In particular, we measure the relevance of nonlinear latent factors using a forecast-based Shapley value approach that quantifies each latent factor's contribution to out-of-sample accuracy in factor-augmented models. Considering the empirical case of the commodity market, we identify the most relevant latent factors for each commodity based on their out-of-sample forecasting ability.
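To make the idea concrete, the following is a minimal, hypothetical sketch of a forecast-based Shapley value computation of the kind described in the abstract, assuming the nonlinear latent factors have already been extracted by an autoencoder. The factor-augmented model is simplified here to a linear one-step-ahead regression estimated by least squares, accuracy is measured as negative out-of-sample MSE, and all names (oos_neg_mse, shapley_factor_relevance) and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal illustrative sketch (hypothetical names): forecast-based Shapley
# values measuring each autoencoder latent factor's contribution to the
# out-of-sample accuracy of a factor-augmented forecasting model.
import numpy as np
from itertools import combinations
from math import factorial


def oos_neg_mse(y, factors, cols, split=0.7):
    """Negative out-of-sample MSE of a linear one-step-ahead forecast of y
    using the latent factors indexed by `cols` (empty set -> constant only)."""
    target = y[1:]                       # y[t+1] is forecast from factors[t]
    if cols:
        X = np.column_stack([np.ones(len(target)), factors[:-1, cols]])
    else:
        X = np.ones((len(target), 1))    # empty coalition: mean benchmark
    cut = int(split * len(target))
    beta, *_ = np.linalg.lstsq(X[:cut], target[:cut], rcond=None)
    resid = target[cut:] - X[cut:] @ beta
    return -np.mean(resid ** 2)


def shapley_factor_relevance(y, factors):
    """Exact Shapley value of each latent factor (feasible only for a small
    number of factors, since all 2**n factor coalitions are evaluated)."""
    n = factors.shape[1]
    players = range(n)
    v = {S: oos_neg_mse(y, factors, list(S))
         for k in range(n + 1) for S in combinations(players, k)}
    phi = np.zeros(n)
    for j in players:
        for k in range(n):
            for S in combinations([p for p in players if p != j], k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[j] += w * (v[tuple(sorted(S + (j,)))] - v[S])
    return phi


# Toy usage with simulated data standing in for a commodity return series and
# autoencoder-extracted latent factors (placeholders, not the paper's data).
rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 3))
returns = 0.8 * np.roll(latent[:, 0], 1) + 0.1 * rng.standard_normal(200)
print(shapley_factor_relevance(returns, latent))
```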
explainability; neural networks; nonlinear factor models; Shapley value; commodities
01 Journal publication::01a Journal article
Files attached to this item

File: Scirep Storani Mattera Iovanella.pdf
Access: open access
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 2.32 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1721702
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science (ISI): 0