Title: Machine learning insights from sparse exchangeable graphs
Authors: Victor Veitch - Columbia University (United States) [presenting]
Ekansh Sharma - University of Toronto (Canada)
Zacharie Naulet - University of Toronto (Canada)
Daniel Roy - University of Toronto (Canada)
Abstract: Many tasks in machine learning, e.g., matrix factorization, topic modeling, and feature allocation, can be viewed as learning the parameters of a probability distribution over bipartite graphs. Recent work has introduced the sparse exchangeable (graphex) models as a new family of probability distributions over graphs (in particular, over bipartite graphs). These models offer natural generalizations of many popular approaches to machine learning tasks. Thus the sparse exchangeable models and the associated theory have immediate relevance for machine learning. We explain some practical aspects of this connection, with particular emphasis on the role of subsampling. Further, we introduce sparse exchangeable non-negative matrix factorization as an extended example, which is of interest in its own right.
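To make the "learning a distribution over bipartite graphs" view concrete, here is a minimal sketch of classical non-negative matrix factorization via Lee-Seung multiplicative updates. This is the standard finite-dimensional method, not the sparse exchangeable generalization the abstract introduces; the data matrix V can be read as a weighted bipartite graph whose rows and columns index the two vertex classes, with nonzero entries as edges. All names and parameter choices below are illustrative assumptions.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Classical NMF via Lee-Seung multiplicative updates.

    Illustrative sketch only: the talk's sparse exchangeable NMF is a
    generalization of this dense, finite-dimensional version.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))   # row-class (e.g., user) factors
    H = rng.random((rank, m))   # column-class (e.g., item) factors
    for _ in range(n_iter):
        # Multiplicative updates keep W and H entrywise non-negative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A small synthetic "bipartite graph": a non-negative 20 x 15 matrix.
V = np.random.default_rng(1).random((20, 15))
W, H = nmf(V, rank=4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Subsampling, emphasized in the abstract, enters when V is a subgraph of a much larger graph: the graphex framework characterizes how estimates from subsamples like this relate to the full-population parameters.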