CMStatistics 2020: View Submission
Title: Computationally efficient sparse clustering
Authors: Matthias Loeffler - ETH Zurich (Switzerland) [presenting]
Alexander Wein - New York University (United States)
Afonso Bandeira - ETH Zurich (Switzerland)
Abstract: Statistical and computational limits of clustering are studied when the cluster centres are sparse and their dimension is possibly much larger than the sample size. Our theoretical analysis focuses on the simple model $X_i=z_i \theta+\varepsilon_i, ~z_i \in \{-1,1\}, ~\varepsilon_i \sim \mathcal{N}(0,I_p)$, which has two clusters with centres $\theta$ and $-\theta$. We provide a finite sample analysis of a new sparse clustering algorithm based on sparse PCA and show that it achieves the minimax optimal misclustering rate in the regime $\|\theta \| \rightarrow \infty$, matching asymptotically the Bayes error. The results require the sparsity to grow slower than the square root of the sample size. Using a recent framework for computational lower bounds, the low-degree likelihood ratio, we give evidence that this condition is necessary for any polynomial-time clustering algorithm to succeed below the BBP threshold. This complements existing evidence based on reductions and statistical query lower bounds. Compared to these existing results, we cover a wider set of parameter regimes and give a more precise understanding of the runtime required and the misclustering error achievable. We also discuss extensions of our results to more than two clusters.
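To make the model concrete, the following is a minimal simulation sketch of the two-cluster setting $X_i = z_i\theta + \varepsilon_i$ together with a simple sparse-PCA-style clustering heuristic (diagonal thresholding to select coordinates, then sign of the projection onto the leading eigenvector). The thresholding rule, the parameter values, and the variable names are illustrative assumptions, not the authors' exact algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model: X_i = z_i * theta + eps_i, with z_i in {-1, +1}, eps_i ~ N(0, I_p).
n, p, s = 200, 1000, 5           # sample size, dimension, sparsity (illustrative)
theta = np.zeros(p)
theta[:s] = 3.0 / np.sqrt(s)     # an s-sparse centre with ||theta|| = 3
z = rng.choice([-1, 1], size=n)
X = np.outer(z, theta) + rng.standard_normal((n, p))

# Diagonal thresholding: coordinates carrying signal have variance
# 1 + theta_j^2 > 1, so keep coordinates whose empirical second moment
# exceeds a threshold slightly above the pure-noise level of 1.
second_moment = (X ** 2).mean(axis=0)
tau = 1.0 + 3.0 * np.sqrt(np.log(p) / n)   # heuristic threshold, an assumption
selected = np.where(second_moment > tau)[0]
if selected.size == 0:                     # fall back to the largest coordinate
    selected = np.array([second_moment.argmax()])

# PCA on the selected coordinates, then cluster by the sign of the projection.
cov = X[:, selected].T @ X[:, selected] / n
_, eigvecs = np.linalg.eigh(cov)
v = eigvecs[:, -1]                         # leading eigenvector
z_hat = np.sign(X[:, selected] @ v)
z_hat[z_hat == 0] = 1

# Misclustering rate, up to the unavoidable global sign flip.
err = min(np.mean(z_hat != z), np.mean(z_hat != -z))
print(err)
```

In this regime ($\|\theta\| = 3$, sparsity $s \ll \sqrt{n}$) the heuristic typically recovers the labels with a small misclustering rate, illustrating why sparse coordinate selection followed by PCA is a natural approach when $p \gg n$.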