B1910
Title: Understanding deep neural networks via statistical regression modelling approaches
Authors: Il Do Ha - Pukyong National University (Korea, South) [presenting]
Kevin Burke - University of Limerick (Ireland)
Youngjo Lee - Seoul National University (Korea, South)
Abstract: Deep learning (DL) has recently provided breakthrough results for prediction problems, including classification, across a wide variety of applications. The core architectures that currently dominate DL include deep feed-forward neural networks (DNNs), CNNs, RNNs, LSTMs, autoencoders (AEs), GANs and Transformers. DNN models are structured neural networks consisting of three types of layers (input, hidden and output) that model the functional relationship, typically nonlinear, between input and output variables. The main goal is to find a nonlinear predictor of the output $Y$ given the input $X$. The output of a DNN can be expressed as a structured mean model, so that estimating this mean yields the prediction of $Y$. It is thus interesting to study DNNs from a statistical perspective. DNN models can be viewed as highly nonlinear, semi-parametric generalizations of statistical regression models such as the generalized linear model (GLM). Fitting (i.e. learning or training) a DNN model on training data is usually implemented via likelihood-based methods, which are very useful for constructing the loss function and regularization terms. We present how to understand DNN models within the GLM framework, and then extend this perspective to survival models allowing for censoring and to random-effect models, with simulations and practical examples.
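As a concrete illustration of the GLM view described in the abstract, the following minimal sketch (hypothetical code, not the authors' implementation; it assumes PyTorch and a Poisson outcome) treats the network output as the linear predictor $\eta(x)$, obtains the mean through the canonical log link, and fits the parameters by minimising the negative log-likelihood, just as a GLM is fitted. Removing the hidden layer recovers an ordinary Poisson GLM.

```python
# Hypothetical sketch (assumed PyTorch setup), not the authors' code:
# a DNN as a nonlinear generalization of a Poisson GLM, trained by
# minimising the negative log-likelihood.
import torch

torch.manual_seed(0)

# Simulated data: Y | X ~ Poisson(exp(eta_true(X))) with a nonlinear signal.
n, p = 500, 3
X = torch.randn(n, p)
eta_true = torch.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]
Y = torch.poisson(torch.exp(eta_true)).unsqueeze(1)

# DNN "mean model": input -> hidden -> output gives the linear predictor
# eta(x); with no hidden layer this collapses to an ordinary Poisson GLM.
dnn = torch.nn.Sequential(
    torch.nn.Linear(p, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)

# Likelihood-based loss: Poisson NLL with the canonical log link
# (log_input=True means the network outputs eta = log mu).
loss_fn = torch.nn.PoissonNLLLoss(log_input=True)
opt = torch.optim.Adam(dnn.parameters(), lr=0.01)

for epoch in range(200):
    opt.zero_grad()
    eta = dnn(X)            # linear predictor from the network
    loss = loss_fn(eta, Y)  # negative log-likelihood (up to a constant)
    loss.backward()
    opt.step()

mu_hat = torch.exp(dnn(X))  # predicted mean of Y given X via the inverse link
print(f"final NLL: {loss.item():.3f}")
```

The only ingredients here are a structured mean model and a likelihood-based loss; switching to another GLM family amounts to swapping the link function and the corresponding negative log-likelihood, which is the sense in which the DNN generalizes the GLM.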