A0178
Title: Distributed variable selection for sparse regression under memory constraints
Authors: Xuejun Jiang - Southern University of Science and Technology (China) [presenting]
Abstract: Variable selection is studied using the penalized likelihood method for distributed sparse regression with a large sample size $n$ under a limited memory constraint, where one machine can store only a subset of the data. Solving this problem is much needed in the big data era. A naive divide-and-conquer approach is to split the whole dataset into $N$ parts, process each part on one of $N$ machines, aggregate the results from all machines via averaging, and finally obtain the selected variables. However, this approach tends to select more noise variables, and the false discovery rate may not be well controlled. We improve on it with a specially designed weighted average in the aggregation step. Compared with the alternating direction method of multipliers (ADMM) used to handle massive data in the literature, our proposed methods substantially reduce the computational burden and achieve lower mean squared error in most cases. Theoretically, we establish asymptotic properties of the resulting estimators for likelihood models with a diverging number of parameters. Under some regularity conditions, we establish oracle properties in the sense that our distributed estimator shares the same asymptotic efficiency as the estimator based on the full sample. Computationally, a distributed penalized likelihood algorithm is proposed to refine the results in the context of general likelihoods. Furthermore, the proposed method is evaluated through simulations and a real-data example.
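As a rough illustration of the divide-and-conquer scheme described above, the following Python sketch splits a simulated sparse-regression dataset into $N$ parts, fits a lasso on each part, and contrasts naive averaging with a weighted average. The inverse-residual-variance weights, the penalty level, and the selection threshold are illustrative assumptions only; the abstract does not specify the paper's specially designed weighting scheme.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate sparse linear-regression data: only the first 3 of p coefficients are nonzero.
n, p, N = 5000, 20, 10
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.standard_normal(n)

# Split the full sample into N parts, one per "machine".
X_parts = np.array_split(X, N)
y_parts = np.array_split(y, N)

# Fit a penalized (lasso) regression locally on each part.
local_betas, local_weights = [], []
for Xk, yk in zip(X_parts, y_parts):
    fit = Lasso(alpha=0.05).fit(Xk, yk)
    local_betas.append(fit.coef_)
    # Hypothetical weight: inverse of the local residual variance
    # (stands in for the paper's unspecified designed weights).
    resid = yk - fit.predict(Xk)
    local_weights.append(1.0 / resid.var())

betas = np.vstack(local_betas)
w = np.array(local_weights)

beta_naive = betas.mean(axis=0)                             # naive equal-weight average
beta_weighted = (w[:, None] * betas).sum(axis=0) / w.sum()  # weighted aggregation

# Variables whose aggregated coefficient exceeds a small threshold are "selected".
threshold = 0.1
print("naive selection:   ", np.flatnonzero(np.abs(beta_naive) > threshold))
print("weighted selection:", np.flatnonzero(np.abs(beta_weighted) > threshold))

In this sketch each machine touches only $n/N$ observations, which is the point of the memory constraint; the aggregation step exchanges only the $p$-dimensional coefficient vectors and scalar weights, not the raw data.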