Title: Early stopping for gradient-type algorithms
Authors: Yuting Wei - Stanford University (United States) [presenting]
Abstract: The behavior of boosting and other gradient-type algorithms in non-parametric estimation will be discussed. While non-parametric models offer great flexibility, they can lead to overfitting and thus poor generalization performance. For this reason, procedures for fitting these models must involve some form of regularization. Although early stopping of iterative algorithms is a widely used form of regularization in statistics and optimization, it is less well understood than its analogue based on penalized regularization. We will establish a direct connection between the two via a general bound on the excess risk of penalized M-estimators. Based on this new insight, we give an explicit, optimal stopping criterion for boosting algorithms run in reproducing kernel Hilbert spaces, a standard setting in non-parametric estimation, and then generalize it to broader classes of functions.
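The idea of early stopping as regularization can be sketched with a toy example: kernel gradient descent (functional gradient descent on the least-squares loss in an RKHS) on synthetic data, stopped when a held-out error stops improving. This is an illustrative hold-out rule, not the data-dependent stopping criterion of the talk; the data, the Gaussian kernel, the bandwidth, and the patience parameter are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-parametric regression problem (assumed): y = sin(2*pi*x) + noise.
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)

def kernel(a, b, h=0.1):
    """Gaussian kernel matrix; bandwidth h is an arbitrary illustrative choice."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h ** 2))

# Even/odd split into training and validation points to monitor risk.
tr, va = np.arange(0, n, 2), np.arange(1, n, 2)
K = kernel(x[tr], x[tr])
K_va = kernel(x[va], x[tr])

# Kernel boosting update: with f = K @ alpha, the functional gradient step
# f <- f + eta * K @ (y - f) corresponds to alpha <- alpha + eta * (y - f).
alpha = np.zeros(len(tr))
eta = 0.5 / np.linalg.eigvalsh(K).max()  # step size below 1/lambda_max for stability

best_err, best_t, patience = np.inf, 0, 20
for t in range(2000):
    resid = y[tr] - K @ alpha
    alpha += eta * resid
    val_err = np.mean((y[va] - K_va @ alpha) ** 2)
    if val_err < best_err:
        best_err, best_t = val_err, t
    elif t - best_t > patience:
        # Hold-out early stopping: validation error has not improved recently.
        break

print(f"stopped at iteration {t}, best validation MSE {best_err:.3f}")
```

Running the loop to convergence instead would interpolate the noisy training responses; stopping early trades a small approximation error for a much smaller estimation error, which is exactly the bias-variance balance that the talk's stopping criterion makes explicit.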