Title: Achieving parsimony in Bayesian VARs with the horseshoe prior
Authors: Cindy Yu - Iowa State University (United States) [presenting]
Abstract: In the context of a vector autoregression (VAR) model, or any multivariate regression model, a large information set may be available from which to build a prediction equation. It is well known that forecasts based on (un-penalized) least squares estimates can overfit the data and lead to poor predictions. Since the 1980s, when the Minnesota prior was proposed, many methods have been developed to improve prediction performance. We propose using the horseshoe prior in the context of a Bayesian VAR. The horseshoe prior is a distinctive shrinkage scheme in that it shrinks irrelevant signals aggressively toward 0 while allowing large signals to remain large and practically unshrunk. In an empirical study, we show that the horseshoe prior competes favorably with shrinkage schemes commonly used in Bayesian VAR models, as well as with a prior that imposes exact sparsity in the coefficient vector. Additionally, we propose the use of particle Gibbs with backward simulation for the estimation of the time-varying volatility parameters.
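The shrinkage behavior described above can be illustrated with a minimal sketch (not the authors' implementation): under the horseshoe, each coefficient has a local half-Cauchy scale, so prior draws concentrate heavily near zero while the Cauchy tails still admit very large values. The function name and the global scale `tau` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def horseshoe_draws(p, tau=1.0, rng=rng):
    """Draw p coefficients from a horseshoe prior:
    beta_i | lambda_i ~ N(0, (tau * lambda_i)^2),
    lambda_i ~ half-Cauchy(0, 1)."""
    lam = np.abs(rng.standard_cauchy(p))   # local scales, heavy-tailed
    return rng.normal(0.0, tau * lam)      # conditionally Gaussian draws

draws = horseshoe_draws(100_000)
# Bulk of the mass sits near zero (irrelevant signals shrunk hard),
# yet the heavy tails leave occasional very large draws unshrunk.
print(np.median(np.abs(draws)), np.max(np.abs(draws)))
```

The spike-near-zero plus heavy-tail shape is exactly the "shrink noise to 0, leave signals alone" property the abstract highlights, in contrast to, e.g., a single Gaussian (Minnesota-style) prior, which shrinks all coefficients by a comparable factor.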