Title: Sparse sketches with small inversion bias via LEverage Score Sparsified (LESS) embeddings
Authors: Michal Derezinski - University of Michigan (United States) [presenting]
Abstract: Sketching is a way of constructing a small representation of a large data matrix by applying a random matrix to it. One of the simplest sketching methods is to use a matrix with i.i.d. Gaussian entries. This approach allows us to draw on the extensive statistical toolkit for working with Gaussian designs; however, Gaussian sketches are often far too expensive to construct, while other, more efficient sketching techniques do not retain many of their properties. We propose LEverage Score Sparsified (LESS) embeddings, a sketching method that preserves the statistical properties of a Gaussian matrix while retaining the efficiency of state-of-the-art approaches. LESS embeddings are a hybrid of two fast sketching techniques: leverage score sampling and sparse embedding matrices. While they are useful in a range of settings, our primary motivation here is the so-called inversion bias, which is a bottleneck when estimating quantities that involve the inverse covariance matrix of the data. Inversion bias arises, for instance, when using sketching in ordinary least squares, uncertainty quantification, and optimal design of experiments. For Gaussian sketches we can easily correct this bias with a simple scaling. By adapting techniques from asymptotic random matrix theory, we show that the same bias correction also works for LESS embeddings, which improves sketching approximation guarantees for a range of distributed estimation tasks.
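The "simple scaling" correction mentioned above can be illustrated numerically. The following is a minimal sketch (not part of the abstract, and not the LESS construction itself), assuming a Gaussian sketch S with i.i.d. N(0, 1/m) entries and the classical inverse-Wishart identity E[((SA)^T SA)^{-1}] = m/(m-d-1) · (A^T A)^{-1}, so multiplying the sketched inverse covariance by (m-d-1)/m removes the bias:

```python
# Monte Carlo illustration of inversion bias for Gaussian sketches.
# Averaging ((SA)^T SA)^{-1} over many sketches overshoots (A^T A)^{-1}
# by the factor m/(m-d-1); rescaling by (m-d-1)/m corrects it.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 1000, 5, 50                 # data size, dimension, sketch size
A = rng.standard_normal((n, d))       # synthetic data matrix
true_inv = np.linalg.inv(A.T @ A)     # exact inverse covariance

trials = 2000
est = np.zeros((d, d))
for _ in range(trials):
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sketch
    SA = S @ A
    est += np.linalg.inv(SA.T @ SA)
est /= trials                         # averaged sketched inverse covariance

naive_err = np.linalg.norm(est - true_inv) / np.linalg.norm(true_inv)
corrected = (m - d - 1) / m * est     # the simple scaling correction
corr_err = np.linalg.norm(corrected - true_inv) / np.linalg.norm(true_inv)
print(f"relative error, naive:     {naive_err:.3f}")
print(f"relative error, corrected: {corr_err:.3f}")
```

With these (hypothetical) parameters the uncorrected estimate is off by roughly m/(m-d-1) - 1 ≈ 14%, while the rescaled estimate is accurate up to Monte Carlo noise.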