Grappling with Grand Theories (of War):
Evaluating General Explanations of Armed Conflict Using LLM-Assisted Meta-Analysis
P10-S245-3
Presented by: Tore Wig
What explains armed conflict? A large literature offers general explanations for armed conflict, yet many of these explanations compete with one another. Science typically advances by adjudicating the relative strength of general theories, but conflict scholarship remains focused on testing medium-range hypotheses about specific causal relationships. How, then, can we evaluate the overall evidentiary strength of general theories, which often imply multiple medium-range hypotheses?
Two dominant approaches address this question: qualitative literature reviews and meta-analyses of estimated relationships. Qualitative reviews are subjective and lack a systematic framework for aggregating evidence into a clear and comparable measure of theory strength. Meta-analyses, meanwhile, are restricted to causal relationships of the same form (e.g., multiple experiments on the same relationship) and struggle to map diverse evidence onto general theories, which often generate overlapping empirical implications and more kinds of implications than meta-analyses can incorporate. This paper introduces a novel approach to theory-strength evaluation designed for general theories. It leverages Large Language Models (LLMs) to analyze theories in the domain of armed conflict, focusing primarily on the bargaining theory of war. The approach scans a set of published articles on armed conflict using a pre-specified list of empirical implications and clear evidence parameters, and it can incorporate evidence for multiple implications, evidence of different types, and evidence at different levels of analysis. The evidence is then aggregated using Bayesian Evidence Synthesis, providing a systematic and transparent evaluation of general theories' relative strength.
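To make the aggregation step concrete, the sketch below illustrates one way Bayesian Evidence Synthesis could combine LLM-coded findings into posterior odds for a theory: each article's verdict on each pre-specified implication maps to a Bayes factor, and the factors multiply into an overall odds update. This is a minimal sketch under stated assumptions, not the paper's method; the implication names, verdict labels, and Bayes-factor values are hypothetical placeholders, not calibrated parameters from the study.

    # Minimal sketch of Bayes-factor aggregation, assuming each article has
    # already been coded by an LLM as "support", "contradict", or "null"
    # for each pre-specified empirical implication of the theory.
    from math import prod

    # Hypothetical LLM codings: implication -> per-article verdicts.
    codings = {
        "commitment_problems_precede_war": ["support", "support", "contradict"],
        "private_information_raises_risk": ["support", "null", "support"],
    }

    # Assumed (illustrative) mapping from a coded verdict to a Bayes factor,
    # i.e. P(evidence | theory true) / P(evidence | theory false).
    BAYES_FACTOR = {"support": 3.0, "contradict": 1 / 3.0, "null": 1.0}

    def posterior_odds(codings, prior_odds=1.0):
        """Multiply prior odds by the Bayes factor of every coded finding."""
        bf = prod(
            BAYES_FACTOR[verdict]
            for verdicts in codings.values()
            for verdict in verdicts
        )
        return prior_odds * bf

    # Posterior odds that the theory is true, given all coded evidence.
    print(posterior_odds(codings))

Under this toy coding, two competing theories could be compared directly by the ratio of their posterior odds, which is the kind of clear, comparable measure of theory strength the abstract describes.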
Keywords: war, armed conflict, theory, meta-analysis, AI, methods