Title: Variable selection in distributed sparse regression under memory constraints
Authors: Xuejun Jiang - Southern University of Science and Technology (China) [presenting]
Abstract: Variable selection is studied using the penalized likelihood method for distributed sparse regression with large sample size $n$ under a limited memory constraint, where a single machine can store only a subset of the data. This is a pressing problem in the big data era. A naive divide-and-conquer approach is to split the whole data set into $N$ parts, run each part on one of $N$ machines, aggregate the results from all machines by averaging, and finally obtain the selected variables. However, this approach tends to select more noise variables, and the false discovery rate may not be well controlled. We improve it with a specially designed weighted average in the aggregation step. Theoretically, we establish asymptotic properties of the resulting estimators for the likelihood model with a diverging number of parameters. Under some regularity conditions, we establish oracle properties in the sense that our distributed estimator has the same asymptotic efficiency as the estimator based on the full sample. A distributed penalized likelihood algorithm is proposed to refine the results in the context of general likelihoods. The proposed method is further evaluated by simulations and a real-data example.
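The divide-and-conquer scheme in the abstract (split, fit locally, aggregate) can be sketched as follows. This is a minimal illustration only: the local penalized fit is approximated by least squares plus soft-thresholding, and the weighted aggregation uses inverse-variance weights as a stand-in, since the paper's specially designed weights are not given in the abstract.

```python
import numpy as np

# Hypothetical sketch of divide-and-conquer aggregation for distributed
# sparse regression. Not the authors' method: the local fit and the
# aggregation weights below are illustrative assumptions.

rng = np.random.default_rng(0)
n, p, N = 2000, 10, 4              # sample size, dimension, machines
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]   # sparse true signal

X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(size=n)

def local_fit(Xk, yk, lam=0.1):
    """One machine's fit: OLS followed by soft-thresholding,
    a crude stand-in for a penalized-likelihood estimator."""
    bhat = np.linalg.lstsq(Xk, yk, rcond=None)[0]
    bhat = np.sign(bhat) * np.maximum(np.abs(bhat) - lam, 0.0)
    # per-coordinate variance proxy from the local (X'X)^{-1}
    var = np.diag(np.linalg.inv(Xk.T @ Xk))
    return bhat, var

# Split the data across N "machines" and fit locally.
fits = [local_fit(Xk, yk)
        for Xk, yk in zip(np.array_split(X, N), np.array_split(y, N))]
B = np.stack([b for b, _ in fits])   # N x p local estimates
V = np.stack([v for _, v in fits])   # N x p variance proxies

# Naive aggregation: plain averaging across machines.
beta_naive = B.mean(axis=0)

# Weighted aggregation (illustrative inverse-variance weights).
w = (1.0 / V) / (1.0 / V).sum(axis=0)
beta_weighted = (w * B).sum(axis=0)

print(np.round(beta_naive, 2))
print(np.round(beta_weighted, 2))
```

With i.i.d. splits of equal size the two aggregations nearly coincide; the weighted version matters when subsamples differ in informativeness, which is the regime the abstract's improved aggregation targets.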