Title: Simple non-linear shrinkage estimators for large-dimensional covariance matrices
Authors: Nicolas Tavernier - KU Leuven (Belgium) [presenting]
Geert Dhaene - KU Leuven (Belgium)
Abstract: An optimal rule is derived for shrinking large-dimensional sample covariance matrices under Frobenius loss. The rule generalizes Ledoit and Wolf's optimal linear shrinkage rule to broader parametric families of rules, including, for example, piecewise linear, spline, and polynomial rules. The oracle version of the optimal rule is very simple and attains the lower bound on the Frobenius loss in finite samples. A feasible version is derived and approximates the lower bound under large-dimensional asymptotics where $p/n\rightarrow c>0$. In previously studied settings, non-linear shrinkage is found to substantially reduce the Frobenius loss compared to linear shrinkage. Non-linear shrinkage is conceptually simple, does not require non-convex optimization in high dimensions, and allows $p>n$.
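To make the comparison concrete, the sketch below contrasts oracle linear shrinkage (a convex combination of the sample covariance and a multiple of the identity, with Frobenius-optimal intensity) against the standard rotation-equivariant non-linear oracle, which keeps the sample eigenvectors $u_i$ and replaces each sample eigenvalue by $u_i'\Sigma u_i$. This is a generic illustration of the oracle bound the abstract refers to, not the authors' specific parametric rules; the population covariance, dimensions, and seed are arbitrary choices for the example, and $p>n$ is used deliberately.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 60, 30  # p > n: the sample covariance is singular here

# Hypothetical ground-truth covariance with spread-out eigenvalues.
Sigma = np.diag(np.linspace(1.0, 10.0, p))

# Sample covariance from n Gaussian draws.
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = X.T @ X / n

def frob_loss(A, B):
    """Normalized squared Frobenius loss."""
    return np.linalg.norm(A - B, "fro") ** 2 / p

# Oracle linear shrinkage toward mu*I: the intensity rho minimizing
# ||rho*mu*I + (1 - rho)*S - Sigma||_F is a 1-D least-squares problem.
mu = np.trace(S) / p
A = mu * np.eye(p) - S
rho = np.sum(A * (Sigma - S)) / np.sum(A * A)
S_lin = rho * mu * np.eye(p) + (1 - rho) * S

# Oracle non-linear shrinkage: keep the sample eigenvectors U and replace
# each sample eigenvalue by d_i = u_i' Sigma u_i, the Frobenius-optimal
# choice among all rotation-equivariant estimators sharing U.
lam, U = np.linalg.eigh(S)
d = np.diag(U.T @ Sigma @ U)
S_nl = U @ np.diag(d) @ U.T

losses = (frob_loss(S, Sigma), frob_loss(S_lin, Sigma), frob_loss(S_nl, Sigma))
print("Frobenius loss (sample, linear oracle, non-linear oracle):", losses)
```

Since linear shrinkage toward $\mu I$ also preserves the sample eigenvectors, it is a special case of the rotation-equivariant family, so the non-linear oracle's loss is never larger than the linear oracle's, which in turn is never larger than the raw sample covariance's.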