CMStatistics 2017: View Submission (CFE)
Title: Robust model selection: A review
Authors: Jennifer Castle - University of Oxford (United Kingdom) [presenting]
David Hendry - University of Oxford (United Kingdom)
Abstract: Complete and correct specifications of models for observational data never exist, so model selection is unavoidable. The target of selection needs to be the process generating the data for the variables under analysis, while embedding the objective of the study, often a theory-based formulation. This requires starting from a sufficiently general initial specification that comprises all candidate variables, their lags (in time-series data), and functional forms, while allowing for possible outliers and shifts. The aim is a parsimonious final representation that retains the relevant information, is well specified, encompasses alternative models, and permits evaluation of the validity of the objective. Intrinsically, we seek robustness against many potential problems jointly: outliers, shifts, omitted variables, incorrect distributional shape, non-stationarity, mis-specified dynamics, and non-linearity, as well as inappropriate exogeneity assumptions. Our approach inevitably leads to more candidate variables than observations, which is tackled by iteratively switching between contracting and expanding multi-path searches, as programmed in Autometrics. The steps involved are explained, with specific attention to indicator saturation as a means of discriminating between outliers and large observations arising from non-linear responses. The analysis is illustrated using artificial data to evaluate outliers versus non-linearity, and by a model of engine knock in which incorrectly recorded data are detected and shifts in the relations are identified.
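The "more variables than observations" device the abstract alludes to can be illustrated with impulse-indicator saturation (IIS) in its simplest split-half form: add an impulse dummy for every observation, half the sample at a time so each regression remains estimable, keep significant dummies, then re-test the union jointly. The sketch below is a hedged illustration under simplifying assumptions, not the Autometrics algorithm itself: the function names (`iis_split_half`, `ols_tstats`), the two-block split, and the critical value 2.5 are all choices made here for exposition; Autometrics uses multi-path tree search with encompassing and diagnostic testing.

```python
import numpy as np

def ols_tstats(y, Z):
    """OLS coefficients and their t-statistics for regressor matrix Z."""
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    dof = len(y) - Z.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Z.T @ Z)))
    return beta, beta / se

def iis_split_half(y, X, t_crit=2.5):
    """Minimal split-half impulse-indicator saturation (illustrative only).

    Saturates the sample with impulse dummies, one half-block at a time,
    retains dummies with |t| > t_crit, then re-tests the retained set
    jointly. Returns indices of observations finally flagged as outliers."""
    n, k = X.shape
    half = n // 2
    candidates = []
    for block in (range(0, half), range(half, n)):
        idx = list(block)
        D = np.eye(n)[:, idx]                  # impulse dummies for this block
        _, t = ols_tstats(y, np.hstack([X, D]))
        candidates += [idx[j] for j in range(len(idx)) if abs(t[k + j]) > t_crit]
    if not candidates:
        return []
    D = np.eye(n)[:, candidates]               # joint re-test of survivors
    _, t = ols_tstats(y, np.hstack([X, D]))
    return [candidates[j] for j in range(len(candidates)) if abs(t[k + j]) > t_crit]

# Demo: linear model with one large outlier planted at observation 10.
rng = np.random.default_rng(0)
n = 40
x = rng.standard_normal(n)
y = 0.5 + 1.0 * x + rng.standard_normal(n)
y[10] += 8.0                                   # contaminate one observation
X = np.column_stack([np.ones(n), x])
flagged = iis_split_half(y, X)
```

In each half-block regression the number of regressors stays below the sample size even though, across blocks, the candidate set exceeds it; the contraction (significance testing) and expansion (re-combining survivors) steps mirror, very loosely, the switching searches described above.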