Title: Bayesian integration of multi-modal imaging data and efficient inference with random data compression
Authors: Rajarshi Guhaniyogi - Texas A&M University (United States) [presenting]
Aaron Scheffler - University of California, San Francisco (United States)
Abstract: Clinical researchers often collect multiple images from separate modalities (sources) to investigate fundamental questions of human health that are inadequately explained by any single image source considered on its own. Viewing the collection of images as multiple objects, successful integration of multi-object data yields information greater than the sum of its parts, but this integration can be challenging due to the complexity induced by the differing topological structures of the objects. We will present a novel joint prior formulation that integrates information from networks and structural images to draw inferences on brain regions significantly related to a language score predictive of Primary Progressive Aphasia (PPA). The principled Bayesian framework allows precise characterization of the uncertainty in ascertaining whether a region is actively related to the language score. Our framework is implemented using an efficient Markov chain Monte Carlo (MCMC) algorithm. Empirical results with simulated data illustrate substantial inferential gains of the proposed framework over its popular competitors. Our framework yields new insights into the relationship of brain regions with PPA, suggesting neurodegeneration pathways for the disease. We will also present strategies for drawing scalable inferences with large data using a random data compression approach.
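The abstract does not specify the form of the random data compression, so the following is only an illustrative sketch of the general idea: pre-multiplying a large regression dataset by a short, fat Gaussian sketching matrix so that posterior computations touch only the compressed rows. All sizes (n, p, m), the known-noise conjugate Gaussian model, and the prior variance are hypothetical choices for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated regression data: n samples, p predictors,
# compressed down to m << n rows before inference.
n, p, m = 2000, 5, 200
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Random data compression: a Gaussian sketching matrix Phi with
# entries N(0, 1/n), so Phi @ Phi.T is approximately the identity
# and the compressed noise Phi @ eps stays approximately N(0, sigma^2 I).
Phi = rng.standard_normal((m, n)) / np.sqrt(n)
X_c, y_c = Phi @ X, Phi @ y  # only m rows enter the posterior

# Conjugate Bayesian linear regression on the compressed data,
# with prior beta ~ N(0, tau2 * I) and noise variance sigma2 treated
# as known (an assumption made purely to keep the sketch closed-form).
tau2, sigma2 = 10.0, 0.01
post_prec = X_c.T @ X_c / sigma2 + np.eye(p) / tau2   # posterior precision
post_mean = np.linalg.solve(post_prec, X_c.T @ y_c / sigma2)

print(post_mean)  # should lie close to beta_true despite 10x compression
```

Because the sketch preserves the regression structure in expectation, the posterior computed from m compressed rows concentrates near the same coefficients as the full-data posterior, at a fraction of the per-iteration cost inside an MCMC loop.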