B0224
Title: Distributional robustness, replicability, and causality
Authors: Dominik Rothenhaeusler - Stanford University (United States) [presenting]
Yujin Jeong - Stanford University (United States)
Abstract: How can we draw trustworthy scientific conclusions? One criterion is that a study can be replicated by independent teams. While replication is critically important, it is arguably insufficient: if a study is biased for some reason and other studies recapitulate the approach, then findings might be consistently incorrect. It has been argued that trustworthy scientific conclusions require disparate sources of evidence. However, different methods might have shared biases, making it difficult to judge the trustworthiness of a result. We formalize this issue by introducing a "distributional uncertainty model", which captures biases in the data collection process. Distributional uncertainty is related to other concepts in causal inference, such as confounding and selection bias. We show that a stability analysis on a single data set allows the construction of confidence intervals that account for both sampling uncertainty and distributional uncertainty. The proposed method is inspired by a stability analysis advocated by many researchers in causal inference.
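
To make the idea of a stability analysis concrete, the following is a minimal, hypothetical Python sketch for the simple case of estimating a mean; it is not the authors' procedure, and the half-sampling scheme and the stability_ci helper are illustrative assumptions only. The estimate is recomputed on random half-samples, and any variability beyond what sampling alone would explain is treated as distributional uncertainty and folded into a widened confidence interval.

    import numpy as np

    rng = np.random.default_rng(0)

    def stability_ci(x, n_splits=50):
        """Approximate 95% CI for the mean, widened by instability of
        the estimate across random half-samples (illustrative only)."""
        n = len(x)
        theta_hat = x.mean()
        sampling_se = x.std(ddof=1) / np.sqrt(n)

        # Re-estimate the mean on random half-samples to probe stability.
        half_estimates = np.array([
            rng.choice(x, size=n // 2, replace=False).mean()
            for _ in range(n_splits)
        ])

        # Variance the half-sample estimates would show from sampling alone.
        expected_var = x.var(ddof=1) / (n // 2)
        # Any excess variance is attributed to distributional uncertainty.
        excess_var = max(half_estimates.var(ddof=1) - expected_var, 0.0)

        # Combine sampling and distributional uncertainty.
        total_se = np.sqrt(sampling_se**2 + excess_var)
        return theta_hat - 1.96 * total_se, theta_hat + 1.96 * total_se

    x = rng.normal(loc=1.0, scale=2.0, size=500)
    print(stability_ci(x))

Under this construction, if the half-sample estimates vary no more than sampling noise predicts, the interval reduces to the usual one; excess instability widens it.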