Connecting the sample size of mental sampling with working memory capacity
Tue-H5-Talk 6-6506
Presented by: Xiaotong Liu
The sampling framework offers an integrative perspective on how people make probability judgments, proposing that they approximate probabilities by sampling instances from memory or through mental simulation (Costello & Watts, 2014). Although sampling-based models have reproduced various effects in probability judgments, they have been criticized for the lack of connections between model terms and psychological processes (Coenen et al., 2018). Addressing this critique, we tested a positive association widely assumed in the sampling framework (Lloyd et al., 2019): that between a crucial model term, the sample size of mental sampling, and individual differences in working memory capacity (WMC). To assess this association, we used the coherence of participants' probability judgments as a proxy for sample size, reasoning that larger samples are less vulnerable to sampling variability. In a novel event-ranking task, participants ranked sets of events, each comprising two pairs of complementary events, {A, not-A, B, not-B}. A logically correct ranking obeys the complement rule: when A is ranked above B, not-A should be ranked below not-B. The probability of producing logically correct rankings thus indexes the coherence of a participant's probability judgments. We found a positive association between WMC and coherence when participants ranked very likely or very unlikely events, but not when they ranked events with probabilities close to 0.5. This interaction aligns with predictions from our sampling-based model of the event-ranking task, although it admits alternative interpretations. Our findings provide evidence for the assumed link between WMC and the sample size of mental sampling.
Keywords: Probability judgments, Mental sampling, Working memory
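The core reasoning, that larger mental samples yield rankings more robust to sampling variability, and that the benefit concentrates on extreme probabilities, can be illustrated with a small simulation. This is a hedged sketch, not the authors' model: it assumes each of A, not-A, B, and not-B is judged from its own independent Bernoulli sample of size n, and the function name and parameters are chosen here for illustration.

```python
import random

def coherence_rate(p_a, p_b, n, trials=20000, seed=0):
    """Estimate how often rankings built from sampled judgments obey the
    complement rule, under the illustrative assumption that each of
    A, not-A, B, not-B is judged from an independent sample of size n."""
    rng = random.Random(seed)

    def estimate(p):
        # Judged probability = proportion of n Bernoulli(p) mental samples.
        return sum(rng.random() < p for _ in range(n)) / n

    coherent = 0
    for _ in range(trials):
        pa, pna = estimate(p_a), estimate(1 - p_a)
        pb, pnb = estimate(p_b), estimate(1 - p_b)
        # Complement rule: A ranked above B iff not-A ranked below not-B
        # (tied estimates are handled by the strict comparisons).
        if (pa > pb) == (pna < pnb):
            coherent += 1
    return coherent / trials

# Larger samples raise coherence when the events are extreme ...
print(coherence_rate(0.9, 0.6, n=2), coherence_rate(0.9, 0.6, n=20))
# ... but near 0.5 coherence stays close to chance regardless of n.
print(coherence_rate(0.5, 0.5, n=2), coherence_rate(0.5, 0.5, n=20))
```

Under these assumptions, increasing n sharpens coherence only when the true probabilities are far from 0.5, mirroring the reported interaction: WMC (as a stand-in for sample size) predicts coherence for very likely or unlikely events but not for events near 0.5.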