Comparing Bayesian hierarchical models of cognition via deep learning
Mon-HS1-Talk I-04
Presented by: Lasse Elsemüller
Bayesian hierarchical modeling is an increasingly popular approach to representing the nested data structures that arise frequently in the psychological and cognitive sciences. Bayesian model comparison provides a principled way to select among competing hierarchical models but is usually intractable for non-trivial model formulations. In this talk, we introduce our recently proposed deep learning method for comparing arbitrarily complex Bayesian hierarchical models. Our simulation-based approach enables the approximation of Bayes factors or posterior model probabilities regardless of the tractability of a model’s likelihood function, thus liberating psychological modelers from a long-standing trade-off between accuracy and computability. Moreover, the amortized nature of our method allows for extensive validation on simulated data and can be leveraged to perform a priori optimization of experimental designs (e.g., sample size determination). We present a benchmark study against bridge sampling that tests both methods’ ability to discriminate between hierarchical models of cognition. In addition, we demonstrate a comparison of hierarchical evidence accumulation models that would have been infeasible with existing methods.
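As a rough illustration of the simulation-based idea, the sketch below trains a generic classifier on datasets simulated from two hypothetical hierarchical models and reads off approximate posterior model probabilities; the hand-crafted summary statistics, the toy simulators, and scikit-learn’s MLPClassifier are illustrative stand-ins only, not the learned summary and inference networks of the method presented in the talk.

# Minimal illustrative sketch (simplifying assumptions, not the authors' implementation):
# amortized, simulation-based model comparison via a classifier trained on
# datasets simulated from two competing hierarchical models.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_groups, n_obs = 10, 20  # nested structure: groups x observations per group

def simulate(model_index, n_sims):
    """Simulate hierarchical datasets and reduce each to fixed-size summaries."""
    summaries = []
    for _ in range(n_sims):
        group_means = rng.normal(0.0, 1.0, n_groups)
        # Hypothetical difference between the two toy models: observation noise.
        scale = 1.0 if model_index == 0 else 2.0
        data = rng.normal(group_means[:, None], scale, (n_groups, n_obs))
        summaries.append([data.mean(), data.std(), data.mean(axis=1).std()])
    return np.array(summaries)

# Training set drawn under equal prior model probabilities.
X = np.vstack([simulate(0, 2000), simulate(1, 2000)])
y = np.repeat([0, 1], 2000)

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)

# Amortization: once trained, the classifier yields approximate posterior model
# probabilities for any new (observed or simulated) dataset at negligible cost.
observed = simulate(1, 1)
post_probs = clf.predict_proba(observed)[0]
bayes_factor_10 = post_probs[1] / post_probs[0]  # Bayes factor under equal priors
print(post_probs, bayes_factor_10)

Because the trained classifier is reusable across datasets, the same validation-on-simulations and design-optimization steps mentioned above amount to repeated cheap forward passes rather than repeated refitting.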
Keywords: Bayesian inference, deep learning, model comparison, hierarchical modeling, cognitive modeling