Title: Optimality and superiority of deep learning for estimating functions in variants of Besov spaces
Authors: Taiji Suzuki - University of Tokyo / RIKEN-AIP (Japan) [presenting]
Atsushi Nitanda - Kyushu Institute of Technology (Japan)
Kazuma Tsuji - The University of Tokyo (Japan)
Abstract: Deep learning has exhibited superior performance for various tasks. To understand this property, we investigate the approximation and estimation ability of deep learning on some variants of Besov spaces, such as the anisotropic Besov space and the variable exponent Besov space. The anisotropic Besov space is characterized by direction-dependent smoothness and includes several function classes investigated thus far. We demonstrate that the approximation error and estimation error of deep learning depend only on the average value of the smoothness parameters across all directions. Consequently, the curse of dimensionality can be avoided if the smoothness of the target function is highly anisotropic. Unlike existing studies, our analysis does not require a low-dimensional structure of the input data. We also investigate the minimax optimality of deep learning and compare its performance with that of the kernel method (more generally, linear estimators). The results show that deep learning has a better dependence on the input dimensionality if the target function possesses anisotropic smoothness, and that it achieves an adaptive rate for functions with spatially inhomogeneous smoothness. Finally, we also discuss the learning ability of deep learning in variable exponent Besov spaces. We show that deep learning adapts in that setting as well and achieves a better rate than linear estimators.
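To make the "average smoothness" claim concrete, the following is an illustrative sketch in the notation standard for anisotropic Besov analyses; the symbols $\tilde{s}$, $\hat{f}$, and $f^{\circ}$ are our assumed notation, and the display is a reconstruction of the typical form of such results, not a quotation of the paper's exact statement.

```latex
% Target f^\circ in an anisotropic Besov space with direction-wise
% smoothness s = (s_1, ..., s_d); \tilde{s} is a harmonic-mean-type
% average of the s_i, and \hat{f} is the deep learning estimator.
\tilde{s} \;=\; \Bigl(\sum_{i=1}^{d} s_i^{-1}\Bigr)^{-1},
\qquad
\mathbb{E}\,\bigl\|\hat{f} - f^{\circ}\bigr\|_{L^2}^{2}
\;\lesssim\; n^{-\frac{2\tilde{s}}{2\tilde{s}+1}}
\quad \text{(up to polylogarithmic factors in } n\text{)}.
```

In the isotropic case $s_1 = \dots = s_d = s$ one has $\tilde{s} = s/d$, recovering the classical nonparametric rate $n^{-2s/(2s+d)}$; when the smoothness is highly anisotropic (e.g., most $s_i$ are large), $\tilde{s}$ stays bounded away from zero as $d$ grows, which is the sense in which the curse of dimensionality is avoided.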