Submission 380
Quantifying the Reliability of Browser-Based Speech Onset Measurement for Online Language Production Experiments
Posterwall-41
Presented by: Vincent Gruber
Accurate detection of speech onset is critical for studying the time course of language production. Recent work shows that classic effects such as semantic interference (~20 ms) can be replicated in online picture–word interference and naming paradigms, suggesting that web-based voice-onset measures are sensitive to small differences. How accurately browser-based systems map true acoustic onsets onto recorded reaction times, however, remains largely unknown. We go beyond effect replication and provide a direct calibration of online speech-onset measurement. Across two browser-based experiments, we programmatically simulate vocal responses and compare the intended onset of each sound to the onset logged by the experiment software. In Experiment 1, we use pure tones presented at controlled delays after stimulus onset to quantify the systematic offset, the trial-to-trial jitter, and the correlation between scheduled and recorded onset times. In Experiment 2, we repeat this approach with naturalistic speech recordings that mimic picture-naming responses, testing whether speech-like amplitude envelopes and phonetic structure introduce additional variability. Both experiments use the same timing and recording pipeline as a typical online language production study and are run across several common browsers. By quantifying the precision and consistency of browser-based voice-onset detection, this work offers a direct assessment of how trustworthy online speech-timing measures are at the millisecond scale.