B0785
Title: Flexible Bayesian nonlinear model configuration
Authors: Florian Frommlet - Medical University Vienna (Austria) [presenting]
Geir Olve Storvik - University of Oslo (Norway)
Aliaksandr Hubin - NMBU (Norway)
Abstract: Regression models are used in a wide range of applications, but simple linear models are often not sufficient to describe complex relationships. For large data sets, neural networks have become increasingly popular for prediction tasks, but they are less interpretable and prone to overfitting. Alternatively, nonlinear regression might be used, but the correct specification of such models is, in general, difficult. We introduce a method for constructing nonlinear parametric regression models. Nonlinear features are generated hierarchically, similarly to deep learning, but in a slightly more general way. This flexibility is combined with Bayesian variable selection, where model priors are chosen to penalize the complexity of the nonlinear features. As a consequence, we end up with highly interpretable nonlinear models selected from an extremely flexible model space. A genetically modified mode jumping Markov chain Monte Carlo algorithm is adopted to perform Bayesian inference and estimate model posterior probabilities. We illustrate in various applications that our algorithm delivers meaningful nonlinear models, and we compare its predictive performance with that of several machine learning algorithms. Finally, we hint at possible extensions for future work.
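
Illustrative sketch (Python). The abstract rests on two ingredients: hierarchical generation of nonlinear features and Bayesian variable selection with model priors that penalize feature complexity, explored by a genetically modified mode jumping MCMC (GMJMCMC). The minimal sketch below illustrates only the first two ingredients on toy data; the chosen transformations (sine and pairwise products), the complexity penalty, the Gaussian marginal likelihood, and the plain add/drop Metropolis search used in place of GMJMCMC are all assumptions made for illustration, not the authors' implementation.

import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the response depends nonlinearly on two of five covariates.
n, p = 100, 5
X = rng.normal(size=(n, p))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.3 * rng.normal(size=n)

# Hierarchical feature generation: start from the raw covariates and
# repeatedly add nonlinear transformations and products of existing
# features, each carrying a complexity score used by the model prior.
def generate_features(X, depth=2):
    feats = [(f"x{j}", X[:, j], 1) for j in range(X.shape[1])]
    for _ in range(depth - 1):
        new = [(f"sin({name})", np.sin(v), c + 1) for name, v, c in feats]
        for (n1, v1, c1), (n2, v2, c2) in itertools.combinations(feats, 2):
            new.append((f"{n1}*{n2}", v1 * v2, c1 + c2 + 1))
        feats = feats + new
    return feats

features = generate_features(X)

# Model prior penalizing the total complexity of the included features.
def log_model_prior(subset, penalty=2.0):
    return -penalty * sum(features[j][2] for j in subset)

# Marginal likelihood of a Gaussian linear model in the selected features
# with a N(0, tau2) prior on the coefficients (available in closed form).
def log_marginal_like(subset, sigma2=0.3 ** 2, tau2=1.0):
    Z = np.column_stack([np.ones(n)] + [features[j][1] for j in subset])
    S = sigma2 * np.eye(n) + tau2 * Z @ Z.T
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + y @ np.linalg.solve(S, y) + n * np.log(2 * np.pi))

def log_post(subset):
    return log_marginal_like(subset) + log_model_prior(subset)

# Plain add/drop Metropolis search over feature subsets: a crude stand-in
# for GMJMCMC, which adds mode jumps, crossover and local optimisation
# on top of such simple moves.
current, lp = frozenset(), log_post(frozenset())
best, best_lp = current, lp
for _ in range(1500):
    j = int(rng.integers(len(features)))
    prop = current ^ {j}                      # flip inclusion of one feature
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # symmetric proposal, simple MH ratio
        current, lp = prop, lp_prop
        if lp > best_lp:
            best, best_lp = current, lp

print("highest-posterior model found:", sorted(features[j][0] for j in best))

In the actual method, marginal likelihoods are handled within a generalized linear model framework and the GMJMCMC algorithm is used to estimate model posterior probabilities over the very large space of generated features far more efficiently than the naive search sketched above.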