Title: Specifying autoregressive processes: A horse race of frequentist model selection methods
Authors: Niels Aka - DIW Berlin, FU Berlin (Germany) [presenting]
Rolf Tschernig - Universitaet Regensburg (Germany)
Abstract: In a Monte Carlo simulation study, autoregressive processes are selected using three model selection methods: (i) standard information criteria, (ii) the model confidence set (MCS), (iii) jackknife model averaging (JMA). The autoregressive processes that generate the synthetic data have zero restrictions for some lags; thus, instead of merely selecting a maximum lag order, a full specification search over lag subsets is performed. Using standard criteria in this setting poses a multiple testing problem and can therefore impair subsequent analyses. To assess the severity of this problem and the ability of more elaborate methods to address it, the implications of each method's model choice are judged against the true model. In particular, multi-period forecasts and impulse response functions serve as testing grounds. The results indicate that in small samples JMA consistently produces better forecasts than standard criteria, while the MCS approach improves forecasts only at longer forecast horizons. For larger samples and fixed DGPs, model uncertainty becomes negligible and relying on standard criteria is a dominant strategy. Further extensions are in progress, namely the addition of penalized methods and the use of vector autoregressive moving average processes to generate the synthetic data.
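The kind of specification search described above can be sketched as follows. This is a minimal, hypothetical illustration, not the study's actual setup: the DGP (an AR(4) with lags 2 and 3 restricted to zero), the sample size, and the use of BIC as the standard information criterion are all assumptions for the example.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Illustrative DGP: AR(4) with zero restrictions on lags 2 and 3,
# i.e. y_t = 0.5*y_{t-1} + 0.3*y_{t-4} + e_t  (coefficients are made up).
true_coefs = {1: 0.5, 4: 0.3}
n, p_max = 200, 4
y = np.zeros(n + 50)
for t in range(p_max, len(y)):
    y[t] = sum(c * y[t - l] for l, c in true_coefs.items()) + rng.standard_normal()
y = y[50:]  # drop burn-in

def bic_of_subset(y, lags):
    """OLS fit of an AR model that includes only the given lags; return BIC."""
    T = len(y) - p_max                      # common estimation sample for all candidates
    Y = y[p_max:]
    X = np.column_stack([np.ones(T)] + [y[p_max - l:len(y) - l] for l in lags])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / T
    return T * np.log(sigma2) + X.shape[1] * np.log(T)

# Full specification search: all 2^p_max lag subsets, not just orders 0..p_max.
candidates = [s for r in range(p_max + 1)
              for s in itertools.combinations(range(1, p_max + 1), r)]
best = min(candidates, key=lambda lags: bic_of_subset(y, lags))
print("BIC-selected lags:", best)
```

Because every one of the 2^p_max subsets is scored, the search amounts to many implicit comparisons, which is the multiple testing problem the abstract refers to; MCS and JMA replace the single winning model with a set of models or a weighted average, respectively.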