CMStatistics 2021
B1749
Title: Stability-driven interpretation of deep learning models: A neuroscience case study
Authors: Reza Abbasi Asl - University of California, San Francisco (United States) [presenting]
Abstract: In the past decade, research in machine learning has focused heavily on developing algorithms and models with remarkably high predictive capabilities. These predictive models have wide applications in large-scale data-driven domains, including neuroscience, healthcare, and computer vision. However, interpreting these models remains a challenge, primarily because of the large number of parameters involved. We will introduce two frameworks, based on (1) stability and (2) compression, for building more interpretable machine learning models, and demonstrate both in the context of a computational neuroscience study. First, we will introduce a stability-driven visualization framework for neural network models. This framework successfully characterizes complex biological neurons in the mouse and non-human primate visual cortex, and the resulting visualizations uncover the diversity of stable patterns encoded by these neurons. Second, we will discuss two neural network compression techniques, based on iterative pruning and low-dimensional decomposition of filters. These compression techniques increase the interpretability of networks while retaining high accuracy and filter diversity, and the compressed models give rise to a new set of accurate but structurally much simpler models of neurons.
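The abstract includes no code, so the sketch below is only a rough illustration of the stability idea: retrain the same architecture on several bootstrap resamples of the data and keep only the filter patterns that recur, up to sign, across the resulting models. This is a hypothetical numpy toy, not the authors' framework; the correlation-based matching, the 0.9 threshold, and the stand-in random filter banks are all assumptions.

```python
import numpy as np

def pairwise_best_match(bank_a, bank_b):
    """For each filter in bank_a, the best |correlation| with any filter in bank_b."""
    a = bank_a.reshape(len(bank_a), -1)
    b = bank_b.reshape(len(bank_b), -1)
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.abs(a @ b.T).max(axis=1)

def stable_filters(banks, threshold=0.9):
    """Filters of banks[0] that have a close match in every other bank."""
    ref = banks[0]
    scores = np.min([pairwise_best_match(ref, b) for b in banks[1:]], axis=0)
    return ref[scores >= threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for retraining on bootstraps: 8 shared patterns
    # (slightly perturbed per run) plus 24 run-specific random filters.
    shared = rng.normal(size=(8, 3, 5, 5))
    banks = [
        np.concatenate([shared + 0.05 * rng.normal(size=shared.shape),
                        rng.normal(size=(24, 3, 5, 5))])
        for _ in range(4)
    ]
    print(stable_filters(banks).shape)  # only the ~8 shared patterns survive
```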
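Similarly, the following minimal sketch illustrates the two compression ideas the abstract names: iterative magnitude-based filter pruning and a low-rank decomposition of a convolutional filter bank. It is again an assumption-laden numpy toy rather than the talk's method; the keep fraction, round count, and rank are illustrative, and a real pipeline would retrain the network between pruning rounds.

```python
import numpy as np

def prune_filters(filters, keep_fraction=0.5, n_rounds=3):
    """Iteratively drop the filters with the smallest L2 norm.

    filters: array of shape (n_filters, channels, height, width).
    Each round removes a share of the remaining filters so that
    roughly `keep_fraction` survive after `n_rounds` rounds.
    """
    per_round = keep_fraction ** (1.0 / n_rounds)
    for _ in range(n_rounds):
        norms = np.linalg.norm(filters.reshape(len(filters), -1), axis=1)
        n_keep = max(1, int(round(per_round * len(filters))))
        keep = np.argsort(norms)[-n_keep:]          # indices of strongest filters
        filters = filters[np.sort(keep)]
    return filters

def low_rank_filters(filters, rank=8):
    """Approximate the filter bank with its best rank-`rank` SVD factorization."""
    flat = filters.reshape(len(filters), -1)        # (n_filters, c*h*w)
    u, s, vt = np.linalg.svd(flat, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return approx.reshape(filters.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bank = rng.normal(size=(64, 3, 7, 7))           # toy first-layer filter bank
    pruned = prune_filters(bank, keep_fraction=0.5)
    compressed = low_rank_filters(bank, rank=8)
    err = np.linalg.norm(bank - compressed) / np.linalg.norm(bank)
    print(pruned.shape, f"relative low-rank error: {err:.3f}")
```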