15:00 - 16:30
Submission 286
Does Optional Stopping Lead to Higher Error Rates than Fixed-N Designs in the Bayesian T-Test?
Posterwall-13
Presented by: Linus Szillat
Linus Szillat, Frieder Göppert, Sascha Meyen, Volker H. Franz
Department of Computer Science, University of Tübingen, Germany
Optional stopping is the practice of stopping data collection as soon as the data are decisive enough. This can be beneficial when samples are costly. While standard frequentist tests lose their error rate guarantees under optional stopping, Bayesians typically argue that optional stopping poses no problem for them. However, Sanborn & Hills (2014, Psychonomic Bulletin & Review) demonstrated that optional stopping does affect the error rates of typical Bayesian tests, too. Because their analysis was restricted to optional stopping with only one decision boundary (H1), we consider the more realistic case of optional stopping with two decision boundaries (H0 and H1): We compare error rates of the Bayesian t-test using either optional stopping or fixed-N designs across different effect sizes. For the fixed-N designs we use three variants of determining the sample size: (1) selecting the N that maximizes decisions for H1 (MAX), (2) selecting N as the average stopping length under optional stopping (STOP AVG), and (3) drawing N from the whole distribution of optional stopping lengths (SAME DIST). Interestingly, we find that for very small effect sizes the type 2 error rate of MAX is higher than with optional stopping, while for larger effects the type 2 error rates of MAX, STOP AVG, and SAME DIST are lower than with optional stopping. This suggests that the relationship between optional stopping and fixed-N designs regarding error rates in the Bayesian t-test is more nuanced than typically thought.
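The two-boundary optional stopping procedure described above can be sketched in simulation. The sketch below is illustrative only and is not the authors' code: it uses the BIC approximation to the one-sample t-test Bayes factor (Wagenmakers, 2007) rather than the JZS Bayes factor, and the boundary value, n_min, and n_max are hypothetical choices. Sampling continues until BF10 crosses the upper boundary (decide H1) or the lower boundary (decide H0).

```python
import math
import random

def bf10_bic(t, n):
    """BIC approximation to the one-sample t-test Bayes factor
    (Wagenmakers, 2007): BF01 ~ sqrt(n) * (1 + t^2/(n-1))^(-n/2).
    Returns the reciprocal, BF10."""
    return (1.0 + t * t / (n - 1)) ** (n / 2.0) / math.sqrt(n)

def optional_stopping_trial(delta, bound=10.0, n_min=5, n_max=500):
    """Draw observations from N(delta, 1) one at a time and stop when
    BF10 >= bound (decide H1) or BF10 <= 1/bound (decide H0).
    Returns (decision, stopping N); 'undecided' if n_max is reached."""
    xs = []
    for n in range(1, n_max + 1):
        xs.append(random.gauss(delta, 1.0))
        if n < n_min:
            continue  # need a few observations before testing
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / (n - 1)
        if var == 0:
            continue
        t = mean / math.sqrt(var / n)
        bf = bf10_bic(t, n)
        if bf >= bound:
            return "H1", n
        if bf <= 1.0 / bound:
            return "H0", n
    return "undecided", n_max

random.seed(1)
# Under H0 (delta = 0), the fraction of runs deciding H1 estimates the
# type 1 error rate; the mean stopping N is what STOP AVG would use.
runs = [optional_stopping_trial(0.0) for _ in range(200)]
h1_rate = sum(d == "H1" for d, _ in runs) / len(runs)
avg_n = sum(n for _, n in runs) / len(runs)
print(f"P(decide H1 | H0 true) = {h1_rate:.3f}, mean stopping N = {avg_n:.1f}")
```

Repeating the same loop with delta > 0 gives the type 2 error rate (runs deciding H0 when H1 is true), and the recorded stopping lengths supply the matched sample sizes for the STOP AVG and SAME DIST fixed-N comparisons.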