Submission 462
Robust Letter Recognition Through Predictive Coding: Insights from a Computational Model
SymposiumTalk-01
Presented by: Janos Pauli
Reading is a human invention that has become essential for societal participation. Expert readers can perceive around 200-300 words per minute, indicating that they process words and letters with high efficiency. Despite the significant role of the letter level, most models of visual word recognition neglect or oversimplify the processes underlying letter recognition. This gap is detrimental, since reading a word is impossible without identifying its constituent letters. Here, we develop a transparent neuro-cognitive model of letter recognition based on the principles of predictive coding. We assume that humans decompose visual letter information into features that describe distinct letter characteristics and shapes. These features then activate font-invariant letter-prototype representations that identify the letter. To evaluate the computational implementation of the model, we conducted a letter identification task with increasing levels of noise. We collected behavioral and EEG data to compare two variants of the model: (i) one applying knowledge-based top-down influences at all processing levels, as assumed in predictive coding, and (ii) one that processes information in a strictly feed-forward manner. We demonstrate that the predictive-coding-based model variant best simulates behavioral response patterns. Furthermore, the model's transparency enables us to see that the key to its success lies in the feature-level predictions. Additionally, the internal model representations correlate significantly with reaction-time data and with EEG activation around 200 ms. Thus, we provide a letter recognition model demonstrating how efficient neuronal processing through predictive coding gives rise to the letter identification behavior that underlies skilled reading.
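The two-level architecture described above (visual features activating font-invariant letter prototypes, with top-down predictions flowing back to the feature level) can be illustrated with a minimal predictive-coding sketch. This is a hypothetical toy implementation, not the authors' model: the weight matrix, feature dimensionality, learning rate, and noise level are all assumptions for illustration. Letter-level activations are iteratively updated to minimize the feature-level prediction error between the top-down prediction and the noisy input.

```python
import numpy as np

# Toy sketch of a two-level predictive-coding letter recognizer.
# All sizes and parameters are illustrative assumptions, not the
# authors' actual model.
rng = np.random.default_rng(0)

n_letters, n_features = 26, 64
# Each row of W is a (hypothetical) letter prototype: the feature
# vector that this letter prototype predicts.
W = rng.standard_normal((n_letters, n_features))

def recognize(features, steps=500, lr=0.005):
    """Infer letter activations by minimizing feature-level prediction error.

    Top-down: prediction = r @ W (prototypes predict the feature vector).
    Bottom-up: the residual (features - prediction) drives updates of r.
    """
    r = np.full(n_letters, 1.0 / n_letters)   # letter-level activations
    for _ in range(steps):
        prediction = r @ W                    # top-down feature prediction
        error = features - prediction         # feature-level prediction error
        r += lr * (W @ error)                 # error-driven update of letters
        r = np.clip(r, 0.0, None)             # keep activations non-negative
    return r

# Noisy input generated from letter prototype 3 -- a toy stand-in for
# a visually degraded letter in the identification task.
obs = W[3] + 0.1 * rng.standard_normal(n_features)
r = recognize(obs)
print(int(np.argmax(r)))                      # most active letter prototype
```

In this sketch the feed-forward variant would correspond to a single bottom-up pass (e.g. `r = W @ features`) without the iterative top-down prediction loop; the predictive-coding variant instead explains away the input by repeatedly comparing it against its own feature-level predictions.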