
On using predictive-ability tests in the selection of time-series prediction models: A Monte Carlo evaluation / Costantini, Mauro; Kunst, Robert M.. - In: INTERNATIONAL JOURNAL OF FORECASTING. - ISSN 0169-2070. - 37:(2021), pp. 445-460. [10.1016/j.ijforecast.2020.06.010]

On using predictive-ability tests in the selection of time-series prediction models: A Monte Carlo evaluation

Mauro Costantini; Robert M. Kunst
2021

Abstract

To select a forecast model among competing models, researchers often use ex-ante prediction experiments over training samples. Following Diebold and Mariano (1995), forecasters routinely evaluate the relative performance of competing models with accuracy tests and may base their selection on test significance in addition to comparing forecast errors. With extensive Monte Carlo analysis, we investigated whether this practice favors simpler models over more complex ones without gains in forecast accuracy. We simulated the autoregressive moving-average model, the self-exciting threshold autoregressive model, and the vector autoregression. We considered two variants of the Diebold–Mariano test, the test by Giacomini and White (2006), the F-test by Clark and McCracken (2001), the Akaike information criterion, and a pure training-sample evaluation. The findings showed some accuracy gains for small samples when applying accuracy tests, particularly for the Clark–McCracken and bootstrapped Diebold–Mariano tests. Evidence against this testing procedure dominated, however, and training-sample evaluations without accuracy tests performed best in many cases.
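For context, the Diebold–Mariano test compares two forecasts by testing whether the mean of their loss differential is zero. A minimal sketch under squared-error loss is given below; the function name, inputs, and the simple truncated long-run-variance estimator are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def diebold_mariano(e1, e2, h=1):
    """Diebold-Mariano statistic for equal predictive accuracy.

    e1, e2 : forecast errors of the two competing models (same length)
    h      : forecast horizon; autocovariances up to lag h-1 enter the
             long-run variance estimate (illustrative choice)
    """
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    d = e1**2 - e2**2          # loss differential under squared-error loss
    T = d.size
    dbar = d.mean()
    # long-run variance via truncated autocovariance sum (lags 0..h-1)
    lrv = np.mean((d - dbar) ** 2)
    for k in range(1, h):
        gamma_k = np.mean((d[k:] - dbar) * (d[:-k] - dbar))
        lrv += 2.0 * gamma_k
    # statistic is asymptotically standard normal under the null
    return dbar / np.sqrt(lrv / T)
```

A positive statistic indicates that the first model incurs larger losses; the sign flips when the two error series are swapped.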
Keywords: Forecasting; Time series; Predictive accuracy; Model selection; Monte Carlo simulation
01 Journal publication::01a Journal article
Files attached to this item

File: Costantini_Kunst_IJF_2021.pdf
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 556.82 kB
Format: Adobe PDF
Access: archive administrators only (contact the author)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1704410
Citations
  • PMC: n/a
  • Scopus: 3
  • Web of Science: 4