Submission 599
Using Pairwise Estimation for Diffusion Model Parameters: A Comparison Between Stan and BayesFlow
SymposiumTalk-01
Presented by: Shanqing Gao
The drift–diffusion model (DDM) is widely used in psychology and neuroscience to decompose the cognitive processes underlying decision making. Many research scenarios require rapid or repeated estimation of DDM parameters, such as analyzing large-scale datasets, conducting simulation-based power analyses, or implementing adaptive experimental designs. In these settings, the same model must be fitted many times across datasets or conditions, making amortized Bayesian inference methods (e.g., BayesFlow) an appealing alternative to traditional MCMC approaches. However, the high training cost and limited generalizability of amortized inference can restrict its applicability across different experimental designs.

Composite likelihood provides a promising strategy for increasing generalizability and reducing computational burden by approximating the full likelihood with likelihoods defined over subsets of the data. A simple and effective variant is pairwise composite likelihood, where likelihoods are computed over every unique pair of conditions rather than jointly across all conditions. This reduces training effort while preserving the fast amortized inference that BayesFlow offers.

To evaluate this approach, we conducted parameter recovery studies using simulated data from simple DDMs, comparing composite likelihood with full likelihood estimation in both BayesFlow and Stan (as a benchmark). Results show that composite likelihood achieves parameter recovery comparable to full likelihood in both frameworks. Crucially, composite likelihood in BayesFlow greatly reduces inference time relative to Stan, offering a scalable and efficient solution for repeated DDM estimation.
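The pairwise scheme described above can be sketched as follows. This is a minimal illustration only: the condition labels, the toy Gaussian `pair_loglik`, and the function names are hypothetical stand-ins, not the actual DDM likelihood or the authors' implementation.

```python
from itertools import combinations

import numpy as np


def pairwise_composite_loglik(data_by_condition, pair_loglik):
    """Sum log-likelihoods over every unique pair of conditions.

    data_by_condition: dict mapping a condition label to its data array.
    pair_loglik: hypothetical function returning the joint log-likelihood
                 of the data from one pair of conditions.
    """
    total = 0.0
    for c1, c2 in combinations(sorted(data_by_condition), 2):
        total += pair_loglik(data_by_condition[c1], data_by_condition[c2])
    return total


def toy_pair_loglik(x, y):
    # Stand-in for a real pairwise DDM likelihood: a standard-normal
    # log-density (up to a constant) over the pooled pair of conditions.
    z = np.concatenate([x, y])
    return float(np.sum(-0.5 * z**2))


# Three conditions yield C(3, 2) = 3 unique pairs.
data = {
    "easy": np.zeros(3),
    "medium": np.zeros(3),
    "hard": np.zeros(3),
}
n_pairs = len(list(combinations(data, 2)))
total = pairwise_composite_loglik(data, toy_pair_loglik)
```

With K conditions, the composite objective involves K(K−1)/2 pairwise terms, which is what lets a network trained on two-condition inputs be reused across designs with different numbers of conditions.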