Title: Initial estimators for regularized robust methods in high-dimensional settings
Authors: David Kepplinger - The University of British Columbia (Canada) [presenting]
Matias Salibian-Barrera - The University of British Columbia (Canada)
Gabriela Cohen Freue - The University of British Columbia (Canada)
Abstract: Many robust methods involve the minimization of a non-convex function, so the initial value supplied to the optimization algorithm is of particular importance for attaining a good solution. We compare the performance of initial estimators for regularized methods when there are more parameters than observations. Although random subsampling is the prevalent approach to obtaining initial estimates, deterministic initial estimates are increasingly popular alternatives. With regularized methods, the number of subsamples does not have to increase with the dimensionality of the problem, but many subsamples must be considered nevertheless. Other methods generate subsamples in more informed ways to reduce the number of subsamples required. We consider Principal Sensitivity Components, generalized to regularized estimators for high-dimensional problems, to guide the search for good subsamples. Since regularized estimators are generally computed over a grid of penalty values, a natural starting point for the optimization at a given penalty level is the solution obtained at a neighbouring penalty level with a very similar degree of regularization. This includes the particularly useful strategy of computing the regularization path starting from the all-zero coefficient vector at a heavy penalty and gradually relaxing the penalty, which has the benefit of avoiding cold starts. We compare these different initial estimates for robust regularized S-estimates of regression in terms of the attained value of the objective function, sparsity, and computational speed.
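The warm-start strategy described in the abstract (start from the all-zero solution at a heavy penalty, then relax the penalty and reuse the previous solution) can be sketched with a plain, non-robust lasso solved by proximal gradient descent (ISTA). This is only an illustrative stand-in: the robust S-estimation loss, the specific penalty grid, and all data dimensions below are assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 penalty
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(X, y, lam, beta0, max_iter=5000, tol=1e-8):
    # Proximal gradient (ISTA) for 0.5/n * ||y - X b||^2 + lam * ||b||_1,
    # started from beta0 (warm or cold start)
    n = len(y)
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    beta = beta0.copy()
    for it in range(max_iter):
        grad = X.T @ (X @ beta - y) / n
        new = soft_threshold(beta - grad / L, lam / L)
        if np.max(np.abs(new - beta)) < tol:
            return new, it + 1
        beta = new
    return beta, max_iter

# Illustrative high-dimensional data (p > n), assumed for this sketch
rng = np.random.default_rng(0)
n, p = 50, 100
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Smallest penalty at which the all-zero vector is the exact solution,
# then a decreasing grid: each fit is warm-started from the previous one
lam_max = np.max(np.abs(X.T @ y)) / n
lams = lam_max * np.geomspace(1.0, 0.1, 10)

beta = np.zeros(p)  # all-zero start at the heaviest penalty
warm_iters = []
for lam in lams:
    beta, it = ista_lasso(X, y, lam, beta)
    warm_iters.append(it)

# For comparison: a cold start (all zeros) directly at the smallest penalty
beta_cold, cold_it = ista_lasso(X, y, lams[-1], np.zeros(p))
print(f"warm-start iterations per penalty: {warm_iters}; "
      f"cold-start iterations at smallest penalty: {cold_it}")
```

Warm and cold starts reach the same minimizer here because the lasso objective is convex; the point of the abstract is that for non-convex robust objectives the starting value can change which local minimum is found, making the choice of initial estimator far more consequential.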