B1176
Title: An adaptively weighted stochastic gradient MCMC algorithm for Monte Carlo simulation and global optimization
Authors: Wei Deng - Purdue University (United States) [presenting]
Guang Lin - Purdue University (United States)
Faming Liang - Purdue University (United States)
Abstract: An adaptively weighted stochastic gradient Langevin dynamics (AWSGLD) algorithm is proposed for Bayesian learning in big-data problems. The proposed algorithm is scalable and possesses a self-adjusting mechanism: it adaptively flattens the high-energy region and accentuates the low-energy region of the energy landscape during simulation, so that both Monte Carlo simulation and global optimization can be carried out in a single run. The self-adjusting mechanism makes the proposed algorithm essentially immune to local traps. Theoretically, by showing the stability of the mean-field system and verifying the existence and regularity of the solution of the associated Poisson equation, we establish the convergence of the AWSGLD algorithm, including the convergence of both the self-adjusting parameters and the weighted averaging estimators. Empirically, the AWSGLD algorithm is tested on multiple benchmark datasets, including CIFAR-100 and SVHN, for both optimization and uncertainty estimation tasks. The numerical results indicate its great potential for Monte Carlo simulation and global optimization in modern machine learning tasks.
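
A minimal Python/NumPy sketch of the kind of adaptively weighted Langevin update the abstract describes is given below. The abstract does not state the AWSGLD equations, so the energy-bin construction, the stochastic-approximation weight update, and the drift multiplier here are illustrative assumptions in the spirit of the description (flattening well-visited low-energy regions and forming weighted averaging estimators), not the authors' exact algorithm; the toy double-well energy stands in for a big-data loss whose gradient would be a minibatch estimate.

import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Toy 2-D double-well energy U(x) with two separated modes.
    return 0.5 * np.sum((x ** 2 - 1.0) ** 2)

def grad_energy(x):
    # Exact gradient of U; in big-data settings this would be a minibatch estimate.
    return 2.0 * x * (x ** 2 - 1.0)

# Assumed scheme: partition the energy range into bins with adaptive weights theta.
n_bins, u_min, u_max = 40, 0.0, 8.0
du = (u_max - u_min) / n_bins
theta = np.full(n_bins, 1.0 / n_bins)

def bin_of(u):
    # Index of the energy bin containing u, clipped to the bin range.
    return int(np.clip((u - u_min) // du, 0, n_bins - 1))

lr, temp = 1e-3, 1.0
x = rng.normal(size=2)
samples, weights = [], []

for t in range(1, 30001):
    u = energy(x)
    k = bin_of(u)
    # Drift multiplier from the learned weights: in a heavily visited (low-energy)
    # bin the log-weight slope is negative, shrinking or even flipping the drift,
    # which flattens the landscape and lets the chain leave local traps.
    slope = (np.log(theta[k]) - np.log(theta[max(k - 1, 0)])) / du
    mult = 1.0 / temp + slope
    x = x - lr * mult * grad_energy(x) + np.sqrt(2.0 * lr) * rng.normal(size=2)
    # Stochastic-approximation update of the adaptive weights with a decaying step size.
    gamma = 10.0 / (t + 1000.0)
    onehot = np.zeros(n_bins)
    onehot[k] = 1.0
    theta = np.maximum((1.0 - gamma) * theta + gamma * onehot, 1e-12)
    samples.append(x)
    # Importance weight of the current sample for the weighted averaging estimator.
    weights.append(theta[k])

# Weighted averaging estimator: reweight the biased samples back toward the target,
# discarding an initial burn-in period.
samples = np.array(samples[5000:])
w = np.array(weights[5000:])
print("weighted estimate of E[x1^2]:", np.average(samples[:, 0] ** 2, weights=w))

In this sketch the learned bin weights play both roles mentioned in the abstract: they modulate the drift so that low-energy traps are flattened during simulation, and they supply the importance weights of the self-normalized weighted averaging estimator used for Monte Carlo estimates.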