Title: Design-based anytime-valid causal inference
Authors: Iavor Bojinov - Harvard Business School (United States) [presenting]
Dae Woong Ham - Harvard University (United States)
Abstract: Many organizations run online randomized experiments to drive product innovation and augment decision-making. Traditionally, these experiments have a fixed time horizon during which experimental units (the customers) receive either a treatment (the new version) or a control (the standard offering). At the end of the experiment, an analyst determines the effectiveness of the treatment relative to the control by, for example, computing the average treatment effect (ATE) and an associated confidence interval. However, because customers arrive sequentially over time, partial results are available throughout the study; unfortunately, peeking at these data invalidates the subsequent statistical inference. To overcome this, companies have started using methods that compute confidence sequences: sequences of confidence intervals that are uniformly valid over time and allow analysts to monitor the ATE continuously. We develop design-based anytime-valid asymptotic confidence sequences for three settings. First, we consider subjects arriving independently, so that time indexes individual units. Second, we consider a time series experiment in which the same unit receives multiple treatments over time. Third, we consider panel experiments in which multiple units receive multiple treatments over time. Across the three settings, we allow the treatment assignment to update dynamically based on the observed historical data. Our work is partially motivated by a collaboration with Netflix.
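To make the notion of a confidence sequence concrete, the sketch below tracks a running mean of a simulated data stream and widens each interval with a sub-Gaussian normal-mixture boundary, one standard construction for time-uniform intervals. This is an illustrative assumption, not the authors' design-based method: the function names, the tuning parameter `rho2`, and the Gaussian stream are all hypothetical, and in an A/B setting each observation would instead be a per-unit treatment-effect estimate.

```python
import math
import random

def normal_mixture_radius(var_sum, rho2=1.0, alpha=0.05):
    # Two-sided sub-Gaussian normal-mixture boundary at level alpha
    # (illustrative standard construction; rho2 is a tuning parameter).
    v = var_sum + rho2
    return math.sqrt(v * math.log(v / (rho2 * alpha ** 2)))

def confidence_sequence(stream, alpha=0.05):
    """Yield (t, lower, upper): intervals for the running mean that are
    designed to be valid uniformly over time, so an analyst may stop
    and look at any t without invalidating the inference."""
    n, s, ss = 0, 0.0, 0.0
    for x in stream:
        n += 1
        s += x
        ss += x * x
        mean = s / n
        var_hat = max(ss / n - mean * mean, 1e-12)  # plug-in variance
        radius = normal_mixture_radius(n * var_hat, alpha=alpha) / n
        yield n, mean - radius, mean + radius

# Simulated stream with true mean 0.3 (purely illustrative data).
random.seed(0)
data = [random.gauss(0.3, 1.0) for _ in range(2000)]
results = list(confidence_sequence(data))
```

Unlike a fixed-horizon confidence interval, the intervals above shrink as data accumulate while retaining coverage simultaneously at every time point, which is what licenses continuous monitoring of the estimate.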