A Validity-based Method for Conducting Interdependent Experiments
Tue-HS1-Talk IV-01
Presented by: Patrick Smela
The experiment is the method for testing causal relationships between psychological constructs. While methods books extensively discuss techniques that allow for a causal interpretation, they typically do not explicitly discuss how to plan a set of studies that maximizes our understanding of a causal relation. When designing experiments, researchers must make compromises between conflicting aspects; this might, for example, mean optimizing the design for either internal or external validity. Each compromise imposes limitations on the explanations a study provides. Typically, authors list those limitations in the discussion section, leaving it up to follow-up studies to expand on the findings. Once enough studies have accumulated, a meta-analysis can eventually summarize the state of the literature. This practice is largely unsystematic. Instead, studies can be conducted in interconnected batches, with each study specifically designed to address the limitations of its predecessors. Deliberately designing multiple investigations of the same phenomenon opens the possibility of varying the operationalizations of the experimental manipulations and testing different moderators and mediators of a causal relation. This systematic approach allows researchers to paint a more comprehensive picture of a phenomenon instead of only contributing pieces here and there. In such a validity-based framework of interdependent studies, researchers plan studies that systematically test aspects of statistical, internal, external, or construct validity. In a review of articles from the Journal of Personality and Social Psychology, we explore to what extent researchers already use such a validity-based framework.
Keywords: Methods; Validity; Meta-Science