To accumulate knowledge, scholars have adopted the strategy of reproducing similar designs in different contexts and comparing the resulting estimates. These efforts have often yielded mixed findings, with some empirical results diverging starkly from theoretical predictions. A prominent example is the literature on the effects of improving voters’ information. How are we to interpret such inconclusive evidence? Using a game-theoretic model, we establish that existing empirical work on the effect of information treatments does not always measure a well-defined theoretical quantity. This impedes knowledge accumulation: these empirical studies are likely to yield different results even absent any internal validity concerns (studies are perfectly randomized), external validity issues (contexts are similar), or statistical noise (the number of observations is unbounded). Our paper offers several recommendations on how to ensure comparability across distinct studies; that is, to ensure that each study measures the same theoretical quantity.