09:00 - 10:30
Parallel sessions 7
Room: HSZ - N4
Chair/s:
Benjamin Gagl
Visual word recognition and reading are central to human communication. Still, literacy rates are declining, increasing the need for better reading education and for interventions targeting readers with low skill levels. Such developments must begin with an understanding of the cognitive processes underlying reading. Here, we combine presentations of current developments in reading research, investigating how language, script, and memory influence visual word recognition in behavior and brain activation. We start with a study by Sabrina Turker that investigates the influence of language and memory skills on reading disabilities. The second study, by Benjamin Gagl, examines how the items stored in the lexicon influence orthographic processing in visual word recognition, both in behavior and in brain responses. The third study, by Amelie Hague, investigates the influence of script familiarity on brain response dynamics. The fourth study, by Maz Mohamed, analyzes how learning to read in different languages shapes the process of lexical access. Finally, Jana Hasenäcker presents a large-scale study of German lexical decision data, which is essential for exploring novel hypotheses built on consensus-based guidelines and embraces open science methodology. Across studies, the symposium integrates behavioral and brain findings with theoretical approaches implemented in computational models, and it offers an overview of newly available datasets. Thus, this symposium delivers a comprehensive update on the neuro-cognitive processes involved in reading and visual word recognition, including current theoretical advancements.
Submission 490
The Influence of Knowledge on Perception: Model-Based Study of How Lexical Knowledge Optimizes Visual Word Representations
SymposiumTalk-01
Presented by: Benjamin Gagl
Benjamin Gagl, Janos Pauli
University of Cologne, Germany
When we read, our brain extracts meaning from words, but the extent to which our lexical knowledge aids this process is unclear. Here, we use a transparent computational model that simulates human orthographic behavior to examine the influence of word knowledge on orthographic processing. The model incorporates pixel, letter, and letter-sequence representations based on the neuronally plausible predictive-coding assumption, offering a framework for investigating how sensory input is integrated with knowledge (i.e., as a top-down prediction). We test different lexicon structures from which the model derives these top-down predictions. For model evaluation, we use a pseudoword learning dataset that allows us to know which items participants actually learned. We compare three lexicon assumptions: (i) a lexicon including only the learned items, (ii) a lexicon including all words participants should have learned, and (iii) a lexicon including only the foils. Analyzing response times and errors, we found the highest model fit when the lexicon was restricted to the learned words and all three representation levels were included. Thus, with this model-based approach, we find substantial support for a direct link between lexical knowledge and downstream orthographic processing, indicating that visual word representations are optimized based on our lexical knowledge to implement efficient reading.