Title: Beyond optimal online variance estimation in time series
Authors: Kin Wai Chan - The Chinese University of Hong Kong (Hong Kong) [presenting]
Man Fung Leung - University of Illinois Urbana-Champaign (United States)
Abstract: The long-run variance (LRV) is an important quantity in the inference of dependent data. Recent advances in stochastic approximation show that online estimates of the LRV can be used to further improve learning algorithms. Nevertheless, existing 'optimal' LRV estimators face an efficiency dilemma and do not align with practical interests. We develop a general framework that uniformly improves and accelerates any LRV estimator. The main contributions lie in three aspects. Statistically, we propose several principles that lead to an online estimator with super-optimal statistical efficiency compared with its offline counterpart. We also derive the first sufficient condition for a general estimator to be updated in $O(1)$ time or space. Computationally, we introduce mini-batch estimation to accelerate any online estimator in practice. Implementation issues such as automatic optimal parameter selection and the multivariate extension are discussed. Practically, we apply our estimators to convergence diagnostics in Markov chain Monte Carlo methods and learning rate tuning in stochastic gradient methods. Our experiments show that the finite-sample properties of our proposals are in line with the theoretical findings.
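To make the $O(1)$-update setting concrete, the sketch below implements a classical fixed-batch-size batch-means LRV estimator in Python. This is a standard textbook baseline, not the authors' super-optimal proposal; the class name, the fixed batch size `b`, and the streaming interface are illustrative assumptions. Each observation is absorbed in $O(1)$ time with $O(1)$ memory beyond the running sums.

```python
# Illustrative sketch (NOT the authors' estimator): an online batch-means
# estimator of the long-run variance with a fixed batch size b.
# The LRV is estimated by b times the sample variance of the batch means.

class OnlineBatchMeansLRV:
    def __init__(self, b):
        self.b = b            # batch size (fixed here; data-driven in practice)
        self.n = 0            # observations seen so far
        self.batch_sum = 0.0  # sum of the current (incomplete) batch
        self.k = 0            # number of completed batches
        self.s1 = 0.0         # running sum of completed batch means
        self.s2 = 0.0         # running sum of squared completed batch means

    def update(self, x):
        """Absorb one observation in O(1) time and space."""
        self.n += 1
        self.batch_sum += x
        if self.n % self.b == 0:          # current batch is complete
            m = self.batch_sum / self.b   # batch mean
            self.k += 1
            self.s1 += m
            self.s2 += m * m
            self.batch_sum = 0.0

    def estimate(self):
        """Return the current batch-means LRV estimate."""
        if self.k < 2:
            return float("nan")
        mean = self.s1 / self.k
        # b * sample variance of the k completed batch means
        return self.b * (self.s2 - self.k * mean * mean) / (self.k - 1)
```

For a stationary AR(1) process $x_t = \phi x_{t-1} + \varepsilon_t$ with unit innovation variance, the true LRV is $1/(1-\phi)^2$, so the estimate above should approach 4 when $\phi = 0.5$. Choosing `b` well is exactly the kind of tuning issue that automatic parameter selection addresses.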