Title: Reinforced designs for observational studies: Multiple instruments plus control groups as evidence factors
Authors: Bikram Karmakar - University of Florida (United States) [presenting]
Dylan Small - University of Pennsylvania (United States)
Paul Rosenbaum - University of Pennsylvania (United States)
Abstract: Absent randomization, inference about the effects caused by treatments depends upon assumptions that can be difficult or impossible to verify. Causal conclusions gain strength from a demonstration that they are insensitive to moderate violations of those assumptions, especially if that happens in each of several statistically independent analyses that depend upon very different assumptions, i.e., if several evidence factors concur. These issues often arise when the investigator has several possible instruments, together with the option of a direct comparison of treated and control subjects. Does each purported instrument satisfy the stringent assumptions required of an instrument? Is a direct comparison without instruments biased by self-selection into treatment and control? In this context, we develop a method for constructing evidence factors, and we evaluate the performance of the method in terms of design sensitivity. In the application, we consider the effectiveness of Catholic versus public high schools, constructing three evidence factors from three past strategies for studying this question. Although these three analyses use the same data, we: (i) construct three essentially independent statistical tests that require very different assumptions, (ii) study the sensitivity of each test to the assumptions underlying that test, (iii) examine the degree to which independent tests that depend upon different assumptions concur, (iv) pool evidence across the independent factors.
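As a minimal illustration of step (iv), independent p-values from the separate evidence factors can be pooled with a standard combination rule such as Fisher's method. The sketch below is not the authors' procedure, only a generic example of pooling across independent tests; the three p-values used are hypothetical and for illustration only.

```python
import math

def fisher_combined_pvalue(pvalues):
    """Combine independent p-values with Fisher's method.

    Under the global null, T = -2 * sum(log p_i) follows a chi-square
    distribution with 2k degrees of freedom, where k = len(pvalues).
    For even degrees of freedom the chi-square survival function has
    the closed form used below, so no external library is needed.
    """
    k = len(pvalues)
    t = -2.0 * sum(math.log(p) for p in pvalues)
    half = t / 2.0
    # P(chi2 with 2k df > t) = exp(-t/2) * sum_{i=0}^{k-1} (t/2)^i / i!
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Three hypothetical per-factor p-values (not from the paper's analysis)
print(round(fisher_combined_pvalue([0.04, 0.07, 0.10]), 4))  # prints 0.0119
```

With a single p-value, Fisher's method returns that p-value unchanged, which is a convenient sanity check on the implementation.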