Multitasking is a frequent part of everyday life, requiring us to switch between different tasks or to engage in multiple tasks simultaneously. Such situations place high demands on cognitive control. A key aspect of this control is the regulation of task sets: internal representations that guide behavior in accordance with current task demands. Using task-switching, probe-task, and dual-tasking methods, this symposium brings together different paradigms for investigating the flexible control of task sets, thereby integrating different perspectives on the preparation, inhibition, and adaptation of task sets. We present studies on how task sets are shaped by anticipatory processes, how they may be suppressed to reduce interference, and how control mechanisms flexibly adjust based on recent experience or contextual demands. The individual talks address a range of questions within this framework: one study investigates inhibitory processes triggered by mere task preparation; another explores how changes in cue-task mappings affect reconfiguration after practice. A third contribution examines, from different perspectives, the origins of asymmetries in task switching. Extending the focus to situations involving overlapping task demands, further talks investigate the dissipation of dual-task representations and how sequential demands modulate control in dual-task settings. Together, the symposium provides an integrative perspective on the dynamic regulation of task sets and aims to advance our understanding of the cognitive mechanisms that support cognitive flexibility and efficient multitasking in complex environments.
The increasing integration of artificial intelligence (AI) into everyday life – through local large language models, multimodal assistants, and personalized system designs – raises essential questions about how people perceive, trust, and interact with AI systems over time. This symposium brings together findings from an interdisciplinary longitudinal study investigating the perception and dynamics of human–AI interaction across six measurement points within one year. The project explores how individual characteristics, behavioral responses, and usage contexts jointly shape willingness to delegate to AI – both in writing and decision-making contexts – as well as perceptions of trust, credibility, closeness, and self-efficacy.
The first contribution presents data from the initial wave of the study, investigating predictors of individuals’ willingness to delegate writing tasks to AI. The second contribution investigates how trust and related perceptual facets—credibility, creepiness, and mind perception—shape people’s willingness to delegate decision-making to AI systems over time. The third contribution addresses perceived closeness and behavioral intention, examining how perceptions of AI agents as social actors—rather than mere tools—change over time and how these changes relate to feelings of loneliness, perceived intelligence, and behavioral intentions to use AI. The fourth contribution examines how the perception of AI as tool or social actor and the interaction modality (text vs. voice) shape credibility perceptions. Finally, the fifth contribution examines whether the perception of AI as tool vs. social actor also has consequences for users’ cognitive self-esteem over time.
Together, these studies provide a comprehensive perspective on the evolving relationship between humans and AI. By integrating psychological, social, and technological viewpoints, the symposium offers novel insights into how trust, roles, and self-perceptions adapt to increasingly intelligent and omnipresent AI systems – highlighting implications for user-centered and ethically informed AI design.
Goal-directed behavior relies on cognitive control, involving processes such as goal processing and maintenance, managing conflict, as well as flexible adaptation to changing contexts. By now, it is well established that the processing of multiple goals leads to processing costs. Moreover, evidence from evolutionary and cognitive science indicates that the affective relevance of external stimuli influences the allocation of processing resources, the recruitment of attention, and ultimately guides behavior. While it is widely agreed that affect can modulate the allocation of attention and (neuro-)cognitive resources for information processing, the influence of the affective relevance of information on different cognitive control processes requires further study. This symposium explores how the affective relevance of information influences cognitive control across different cognitive control tasks. These tasks include working memory tasks, reactivating goals after interruptions, switching between different goals, and managing interference. The symposium will draw on diverse methodological approaches, such as behavioral studies, neurophysiological measures, and a meta-analysis. The selected talks feature diverse affective materials and examine varying degrees of affective relevance for response selection in the selected paradigms. Plancher et al. show that negative emotion influences both processing and attentional maintenance in working memory, supporting models that propose an attentional trade-off between these two components. Radovic and Schubert examine how affective interruptions influence goal decay and reactivation of goals when resuming a task after an interruption. Langsdorf et al. demonstrate that processing asymmetries between neutral and affective tasks modulate intentional processes, i.e., the decision to select either task.
Pourtois shows that value processing is not automatic but modulated by goal relevance, with EEG evidence indicating an early, perceptual locus for this effect, supporting models in which value and goals flexibly interact to guide information processing. Finally, in a meta-analysis, Dignath et al. show how task-irrelevant emotions impact performance in conflict tasks and propose an integrative framework suggesting that emotion influences control through distinct mechanisms depending on valence, arousal, and processing stage. Together, this symposium aims to foster discussion and provide a synthesis on how the affective relevance of information impacts different aspects of cognitive control processes in challenging task conditions.
The diffusion decision model (DDM) is a mathematical framework that jointly describes choice behavior and response time distributions, offering a process-level account of decision-making. Conceptualizing decisions as the accumulation of noisy evidence, the DDM has provided insights into the cognitive mechanisms underlying perception, attention, memory, and higher-order decision-making. Its flexibility and explanatory power have made it one of the most widely used tools in experimental psychology, bridging cognitive theory, mathematical modeling, and empirical research.
The increasing prominence of DDMs has spurred both conceptual and methodological developments. This symposium focuses on recent theoretical and computational advancements in the modeling of DDMs, including advances in estimation techniques, alternative stochastic dynamics to the Wiener process, and integrations with other modeling frameworks. Together, we aim to highlight new directions for enhancing theoretical and conceptual precision, modeling flexibility, and computational efficiency.
This symposium is the first part of a two-part series on DDMs at TeaP. While Part I emphasizes model development, theoretical extensions, and computational innovation, Part II turns to applied research, demonstrating how DDMs can help us better understand cognitive processes across different populations and domains. By being open to scholars from all areas of experimental psychology, the series offers a forum for presenting new ideas, establishing collaborations, and identifying future directions in the modeling of human cognition.
Source memory research aims at understanding how people remember the origin of information (e.g., Where did I read the latest news?). This double symposium brings together findings from both basic (Part 1) and more applied (Part 2) source memory research. The first session highlights new developments in modeling approaches as well as empirical work addressing fundamental determinants of source memory.
Beatrice G. Kuhlmann opens the session with “A Storage-Retrieval Extension of the Two-High-Threshold Multinomial Model of Source Monitoring,” introducing an extended version of the often-used two-high-threshold multinomial model of source monitoring (2HTSM, Bayen et al., 1996). This extended version distinguishes between storage and retrieval components of source memory by incorporating two separate source-memory parameters.
Meike Kroneisen follows with “The Rare and the Common: Can Rarity Influence the Animacy Effect in Source Memory?”, investigating whether the animacy advantage in source memory depends on the relative frequency of animate and inanimate stimuli. Her talk provides insights into how base-rate expectations and attention shape encoding and retrieval.
In “Source Memory and Metamemory for Concrete and Abstract Words,” Désirée Schönung examines how people monitor their memory for different word types. The talk focuses on whether individuals distinguish between item and source memory in their metamemory judgments by recognizing that concreteness affects item but not source memory.
Further advancing model-related aspects, Hilal Tanyas presents in her talk “Modeling Latency Processes in Source Monitoring” a formal modeling approach that integrates response times into the 2HTSM. This allows estimating the relative speed of memory- and guessing-based processes.
Finally, Lena Nadarevic bridges to more applied questions of source memory in her talk “Source Effects in Memory for Truth and Falsity: A Comparison of Self-Generated Judgments and External Feedback”. Across two experiments, she investigates whether participants remember self-generated subjective truth judgments better than externally provided objective feedback, consistent with the generation effect.
Together, these talks illustrate the range of current approaches to studying source memory, advancing theoretical and methodological understanding of source memory processes. The first session concludes with a general discussion, leading into Part II: Source Memory - Applied Research.
In this session, five finalists for the DGPs Poster Awards present their posters to the jury and the wider audience in brief talks:
Svenja Bährens (FOM University of Applied Sciences for Economics and Management, Institute of Economic Psychology, Germany):
The Impact of Affect and Aspirations on Risk Proneness: An Experimental Study
(Posterwall 10, Monday 15:00 to 16:00)
Felix Götz (University of Regensburg, Germany):
Conflict Processing in Asymmetric Joint Action: Co-Actor Triggered Response Conflicts Are Amplified by Free Choice
(Posterwall 46, Wednesday 15:00 to 16:00)
Kira Franke (Friedrich-Schiller-Universität Jena, Germany):
Online or Offline: Competence Does Not Modulate Observationally Acquired Stimulus-Response Binding and Retrieval Effects
(Posterwall 07, Tuesday 15:00 to 16:00)
María Paula Villabona Orozco (Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Germany):
Every Move I Make: The Relationship Between Metacognition and Motor Performance in Guitar Playing
(Posterwall 19, Monday 15:00 to 16:00)
Ezgi Uzun (University of Greifswald, Germany):
Neural and Behavioral Mechanisms of Surprise-Driven Model Updating
(Posterwall 21, Wednesday 15:00 to 16:00)
Language comprehension often proceeds with remarkable speed, yet successful communication depends on the ability to slow down, revise, and adapt when input is ambiguous, unexpected, or inconsistent. This symposium brings together perspectives from neuroscience, psycholinguistics, and developmental research to examine how temporal flexibility supports coherent comprehension. “Time to think” is not a failure of processing but an adaptive resource: when comprehension is challenged, listeners and readers adjust the pace of processing to integrate conflicting cues, resolve ambiguity, and update mental representations. The symposium opens with a neurobiological perspective, showing how electrophysiological activity supports the processing of acoustic and abstract temporal structures in auditory-verbal stimuli. Studies from sentence comprehension and communicative interaction demonstrate that brain activity not only synchronizes with current stimuli but also aids the management of upcoming input through temporal estimation and prediction. These processes rely on structures such as the basal ganglia and pre-SMA, providing neural scaffolding for adaptive control. Neuroimaging evidence further shows that presupposition failures engage these circuits, indicating that discourse-related reinterpretation depends on adaptive gating and slowing mechanisms. The third talk presents psycholinguistic evidence on negation and pragmatic inference. Negation is rarely purely logical; comprehenders use it as a cue for pragmatic reasoning, revising mental models and integrating contextually relevant alternatives. Less felicitous contexts increase processing time, reflecting the additional effort required to construct a coherent interpretation. Next, a developmental perspective uses eye-tracking data from children learning German and Czech. 
Younger children struggle to integrate multiple linguistic cues for thematic role assignment, and their ability to reanalyse heuristics depends on language-specific features, such as word order versus case marking. Finally, ambiguity resolution in discourse is examined, showing how verb causality and adjective semantics shape pronoun interpretation. Comprehenders dynamically reweight these cues in light of earlier expectations. Together, the contributions illustrate how slowing down, revising, and flexibly reallocating processing resources are central to achieving robust and coherent comprehension under uncertainty.
The Theory of Visual Attention (TVA) continues to be developed as a powerful quantitative framework for understanding attentional selection and capacity. Current research applies TVA across diverse contexts and methodologies. This symposium presents new theoretical developments, methodological advances, and applied perspectives that extend TVA’s reach and precision, and at times challenge its current formulation. Estela Carmona investigates how self-relevant information shapes attentional parameters, offering insights into the role of personal significance in visual selection. Anders Petersen follows by introducing advances in modeling enumeration data within the TVA framework, extending the set of tasks that can be used with TVA. Kai Biermeier, Ngoc Chi Banh, and Ingrid Scharlau test the applicability of TVA to online scenarios and identify methodological challenges in the online estimation of attentional processing speed. Tobias Peters, Kai Biermeier, and Ingrid Scharlau apply TVA measures of attention to human–AI interaction, examining whether attentional signatures can indicate adaptive distrust. Finally, Jan Tünnermann and Ingrid Scharlau revisit lateral asymmetries in visual processing, presenting updated findings on left–right visual field differences within TVA. Together, these contributions demonstrate the continuing vitality of TVA research and its capacity to inform theoretical, methodological, and applied perspectives on visual attention, as well as challenges that remain to be addressed. Part 2 of the symposium will turn to attentional changes in diverse populations.
A key challenge in cognitive control research is understanding how humans flexibly adjust their behaviour in response to changing environmental and motivational demands. This symposium centres on the concept of cognitive flexibility, commonly defined as the ability to shift between distinct thoughts, strategies, or perspectives in response to situational requirements. We aim to bring together complementary perspectives examining how contextual regularities and affective and motivational factors shape the dynamic balance between cognitive flexibility and stability. By combining manipulations of switch probability, conflict adaptation, and reward, the symposium highlights how experience and expectations guide adaptive control allocation across different domains.
Amy Strivens (University of Tübingen) investigates interactions between global and local control by combining switch-probability manipulations with the congruency sequence effect (CSE), offering new insights into how control operates across multiple temporal scales.
Linda Carmen Bräutigam (University of Tübingen) explores the interaction between context-specific proportion congruency (CSPC) and switch probability across three conflict paradigms (Simon, Stroop, and Flanker), revealing how adaptive control mechanisms generalize across tasks while remaining sensitive to contextual predictability.
Luca Moretti (University of Milano-Bicocca) presents evidence for a flexibility–stability trade-off when using valency rather than congruency as a measure of cognitive stability, thereby broadening theoretical accounts of how control adjustments manifest across cognitive dimensions.
Larissa Walter (University of Freiburg) examines the impact of contingent versus non-contingent reward on switch rate and switch costs in the self-organized task switching paradigm, highlighting how different reward types modulate task performance and task selection behaviour.
Finally, Jonathan Mendl (University of Regensburg) investigates how reward expectation shapes adjustments of flexibility, showing that rare high rewards heighten sensitivity to increases in reward and to sustained high reward.
Overall, the symposium seeks to advance our understanding of cognitive flexibility as a key mechanism of adaptive behaviour. Considering evidence across different paradigms and approaches, the symposium highlights how control dynamics emerge from the interaction of contextual expectations, task demands, and affective and motivational states—offering a more comprehensive understanding of how flexible behaviour is achieved.
The development of powerful computational language models in recent years has seen an increasing application of such models in psychology and psycholinguistics. Both Distributional Semantic Models (e.g., word2vec, GloVe) and Large Language Models (e.g., GPT, BERT) have been applied in two main ways: (1) as models of human language processing, and (2) as tools for generating measures that are relevant to psycholinguistic theories and hypotheses. However, the distinction between these two applications of language models is not always clear, and both applications are limited by fundamental differences in language processing between models engineered to achieve human-like performance and the processes actually used in human language. Nevertheless, language models have demonstrated strong potential for providing insight into language processes. This symposium brings together five talks to address recent developments in the use of language models as tools for psycholinguistics, and the degree to which such models provide comparisons and outputs that are meaningful for progress in the field. The first talk will set a theoretical foundation for the symposium, evaluating caveats of comparing Large Language Models to humans, and outlining how meaningful comparisons require rigorous experimental methods. The second talk explores whether humans and language models (both Large Language Models and Distributional Semantic Models) represent abstract meaning in a similar way, while also highlighting differences emerging between the two systems. The third talk shows how Large Language Models can be used to generate new iconicity ratings for Turkish, providing a new avenue for investigating semantic dimensions in otherwise understudied languages. The fourth talk evaluates how well estimates of word frequency and familiarity derived from Large Language Models can explain children’s reading times.
Finally, in the last talk, Distributional Semantic Models are applied to provide insight into the learning of morphology, showing that natural text provides sufficient information to learn the meanings of prefixes and suffixes. Together, these talks highlight the ongoing potential of language models as tools for psycholinguistics. However, these talks also provide an opportunity for important discussion on the caveats of this approach and on the scientific applications language models can support.
Categorization processes are fundamental to how humans structure, interpret, and interact with the world. They shape individual perception as well as higher-level judgments and decisions, and at the societal level play a key role in stereotyping, prejudice, and group formation. Yet, research on perceptual and social categorization has largely proceeded along separate tracks. Whereas perceptual research has aimed to identify domain-general mechanisms underlying individual categorization, social research has aimed to uncover the broader implications of categorization for group dynamics. Integrative approaches that bridge perceptual and social categorization remain rare.
Differences in research goals are mirrored by differences in methodology. Perceptual research typically relies on simplified, abstract tasks that maximize internal validity and support formal modeling, often at the expense of external validity. Social research, in turn, embeds categorization in realistic contexts that reflect lived experience and intergroup relations, but at the cost of making it harder to isolate and formalize the underlying processes. Recent methodological advances—from richer behavioral paradigms to automated theory discovery using ML/AI—create novel opportunities to combine the strengths of both traditions. These developments make it increasingly possible to study categorization in contexts that are both controlled and ecologically valid, paving the way for genuine integration of perceptual and social research.
This symposium is structured around three complementary steps in building this bridge. The first examines computational models of categorization and asks what predictions they offer for understanding social categorization. The second starts from the opposite direction, considering how phenomena of social categorization and judgment can be explained within computational frameworks. The third takes a meta-perspective, highlighting how recent methodological advances—ranging from large-scale experimentation to theory-driven simulations and formal model comparison—create new opportunities for linking the two traditions. Together, these perspectives show how research on categorization can move beyond separate traditions by uniting mechanistic explanations with social consequences and aligning methodological control with ecological relevance, paving the way toward a unified theory of categorization.
The diffusion decision model (DDM) is a mathematical framework that jointly describes choice behavior and response time distributions, offering a process-level account of how people make decisions. Conceptualizing decisions as the accumulation of noisy evidence, the DDM has provided insights into the cognitive mechanisms underlying perception, attention, memory, and higher-order decision-making. Its flexibility and explanatory power have made it one of the most widely used tools in experimental psychology, bridging cognitive theory, mathematical modeling, and empirical research.
The explanatory power and versatility of the DDM have made it indispensable for testing psychological theories. By illustrating how the model bridges the gap between quantitative modeling and psychological theory, we aim to highlight the value of DDMs for understanding individual and group differences, clinical dysfunctions, and social-cognitive processes. Together, these studies illustrate the breadth of DDM applications across experimental psychology and highlight how cognitive modeling can inform theoretical and applied research alike.
This symposium is the second of a two-part series on DDMs at TeaP. While Part I emphasizes model development, theoretical extensions, and computational innovation, Part II turns to applied research, demonstrating how DDMs can help us better understand cognitive processes across different populations and domains. By being open to scholars from all areas of experimental psychology, the series offers a forum for presenting new ideas, establishing collaborations, and identifying future directions in the modeling of human cognition.
Source memory research aims at understanding how people remember the origin of information (e.g., Where did I read the latest news?). This double symposium brings together findings from both basic (Part I) and more applied (Part II) source memory research. Building on the theoretical and methodological framework introduced in Part I, this second part elucidates the important role of source memory in more applied contexts.
Luise Metzger opens the session with the project “Source Memory for AI- vs. Human-Generated Online Content”, which aims to investigate whether people spontaneously categorize and recall web content as human- or AI-generated. The project further explores whether people who are less trusting toward AI show better source discrimination.
Oktay Ülker continues with “The Source of My Source: Effects of Learning Partner Expertise on Source Memory in Collaborative Learning”. This research examines how well individuals remember which source (trustworthy, untrustworthy, no source) their study partner cited in the collaboration phase. By additionally varying the expertise (high vs. low) of the learning partner, the study also tests schema-incongruency effects in source memory.
Relatedly, in “Advice Taking on Social Media: The Influence of Source Memory for Advisor Trustworthiness on Advice Weighting” Johanna Höhs focuses on how well people remember the trustworthiness of advisors encountered on social media. The study also addresses whether enhanced source memory supports adaptive advice taking – favoring trustworthy over untrustworthy advisors.
Nikoletta Symeonidou then bridges to source-memory research in specific populations. Focusing on older adults in her talk “Effects of Negative Age Stereotypes on Source Memory”, she presents findings showing that activating negative age stereotypes can impair source memory in older adults. These results highlight the importance of creating age-fair testing conditions.
Extending the discussion to (sub)clinical populations, Isabel Porstein concludes the session with “Source Monitoring Along the Continuum of Psychosis: Insights from Schizotypy.” Her research explores how subclinical schizotypy dimensions relate to source memory and source guessing biases, particularly regarding reality-monitoring impairments, which are often discussed as a mechanism underlying hallucinations.
The second session concludes with a general discussion, highlighting the main findings and potential implications of all contributions from a basic and applied perspective.
This symposium explores the multifaceted nature of aesthetic experience across neural, cultural, technological, and design domains, offering a comprehensive account of how aesthetic judgments emerge from complex interactions between brain, body, culture, and context.
Aesthetic perception will be explored across diverse fields of investigation, ranging from man-made artifacts (i.e., the art and design domains) to the perception of bodies. The research presented not only enriches theoretical perspectives but also provides empirical insights as well as practical implications. Various experimental methods are used for this purpose. Specifically, the symposium includes mixed-methods research, EEG–fMRI fusion procedures, rating studies, and cross-cultural studies.
Together, these talks highlight the richness and complexity of aesthetic experience, demonstrating that beauty is not merely a neural response or cultural construct, but a dynamic interplay of perception, meaning, and context. By integrating neuroscience, cultural studies, immersive technology, and empirical design research, this symposium offers a holistic view of how humans engage with beauty in its many forms.
Word learning is not limited to early childhood but is rather a lifelong process. As such, it is important to investigate in what ways people’s prior knowledge can shape both their ability to acquire new words and how these words are encoded in memory. In this symposium, we will thus explore how different types of prior knowledge—such as people’s language background or knowledge about specific words—impact word learning across different ages and learning contexts. In Talk 1, Matilde Simonetti will explore how language switching influences word learning in bilingual adults. In this context, she will discuss how knowing a word in one language can differently impact the learning of a novel word form connected to the same meaning. In Talk 2, Megan Dailey will examine the role of orthography in second-language word learning, focusing on how and under which conditions orthographic input and knowledge shape the encoding of new phonological forms in memory. Relatedly, Talk 3 by Elena Markantonakis will address how prior linguistic knowledge impacts how precisely new words are encoded, with particular attention to the retention of orthographic details after learning. In Talk 4, Marie-Christin Flohr will explore how bilingual children use statistical and prosodic cues to identify word boundaries. She will focus on the influence of second-language learners’ first language and of individual differences in listening abilities on their word learning. Finally, in Talk 5, Nicole Altvater-Mackensen will investigate first-language word learning in preschoolers during shared book reading. She will use eye-tracking to measure how attention shapes children’s learning outcomes. Together, these talks showcase new perspectives on word learning, illustrating the different ways in which prior knowledge can influence word learning in first- and second-language contexts in different age groups.
The talks will illuminate the mechanisms by which prior knowledge impacts how word representations are formed in memory.
Part II of the symposium on the Theory of Visual Attention (TVA) extends Part I, moving to research that highlights TVA’s potential for measuring attentional changes in diverse populations, relating them to underlying neural changes as well as to perceptual and awareness phenomena. Simon Schrenk opens with a machine-learning study linking resting-state functional connectivity to TVA parameters—visual processing speed (C), short-term visual memory capacity (K), and top-down control (α)—in healthy older adults. This work identifies distinct neural network signatures for each attentional component, providing a framework for connecting TVA-based measures with intrinsic brain organization in aging. Hannah Klink et al. follow by demonstrating that alterations within frontoparietal networks are associated with reduced top-down control in patients with mild cognitive impairment, situating TVA within altered brain-network dynamics. Thomas Sørensen presents findings on expectancy modulations interacting with the κ parameter, offering new perspectives on attentional weighting within the TVA framework.
Solveig Menrad’s talk relates attentional parameters in patients with ADHD to subjective and objective polysomnographic measures of sleep quality. Finally, Kathrin Finke, Jan Tünnermann and Ingrid Scharlau will discuss the development and challenges of TVA. Together, these contributions aim to chart the clinical frontiers of TVA—linking theory, neural markers, and potential translational uses in diverse populations.
Everyone makes mistakes, and mistakes lead us to change the course of our action, either by stopping it or at least slowing it down, with an overall impact on subsequent decisions and behavior. Therefore, it is not surprising that human action control is governed by efficient cognitive mechanisms to monitor and regulate erroneous actions. But there is more to errors than that. Errors not only trigger cognitive processes of monitoring and control. They are also associated with affective responses (often negative) that contribute to the chain of events initiated by the error. Moreover, although humans share basic functions for error processing, these functions may manifest differently across individuals, for example in relation to personality traits, and across contexts, such as in relation to task complexity and content.
The goal of this symposium is to provide a cross-cutting perspective on error processing research. By including contributions that use both behavioral and neurophysiological methods, the symposium seeks to bring together findings that shed light on various aspects of error processing.
The first talk examines the immediate effects of efficient error detection, which can lead to the cancellation of an ongoing erroneous action. The second talk explores the neural signatures of error processing and their links to outcomes not necessarily directly tied to mistakes. The third talk delves into individual differences, investigating how error processing relates to perfectionism. The fourth and fifth talks focus on error adjustment, offering insights into its affective mechanisms and into its contribution to arithmetic learning in developmental samples, respectively.
In conclusion, by providing a stage for diverse themes and methodologies in the study of error processing, this symposium sets itself the task of promoting an exchange between different approaches and points of view.
Computational modeling has become an increasingly important approach for studying language generation across speaking, writing, and associative processes. Approaches range from distributional semantics and transformer-based architectures to learning-based production models, each providing different ways to formalize and investigate how meaning, structure, and behavior emerge in linguistic systems. This symposium presents five studies at the intersection of cognitive science, computational linguistics, and psycholinguistics that use these methods to better understand language production and related cognitive processes.
The symposium starts with a study investigating whether LLM-generated corpora can simulate the longitudinal development of children's texts, using various psycholinguistic variables to compare the produced language. The second talk explores whether visual characteristics of pictures, beyond their conceptual or lexical representations, contribute to interference effects in picture–word-interference tasks by integrating modern vision–language embeddings with behavioral data. The third project validates centroid analysis as a method to infer concept representations from participants’ open-ended verbal responses, such as free associations, word substitutions, and feature generations, using word embeddings. The fourth talk proposes a computational model that accounts for semantic interference phenomena in language production by implementing an incremental learning mechanism within an interactive production network. Finally, we present a study that investigates the psychometric capacities of language models in the verbal fluency task, an experimental paradigm used to examine human knowledge retrieval, cognitive performance, and creative abilities.
By bringing together different perspectives, the symposium encourages a discussion on what it means to "model" language production and how such modeling can contribute to our understanding of human cognition and language processing. Together, these projects will increase our understanding of language models' potential benefits and limitations.
Theories on how the human mind represents behavioral rules and norms distinguish between explicit, verbal formats and implicit, procedural formats. Here we ask whether the latter representational format draws on fundamental cognitive mechanisms of regularity detection and statistical learning. The symposium thus connects basic, low-level approaches from cognitive psychology to the concepts of rules and rule-guided behavior. The speakers will cover cognitive fundamentals of rule representations, principles of regularity detection and rule discovery in streams of incoming stimulation, procedural learning of rules through mental simulation, and challenges derived from using negated rather than affirmative rules to steer human behavior. The contributions cover a wide range of methodologies, from movement trajectory analysis to peripheral physiology (EMG) and neuroscientific approaches (EEG, fMRI) to elucidate the question of how much rule representations draw on implicit, procedural learning.
Understanding belief systems requires insight into the mental models that underlie how individuals represent and reason about complex or contested phenomena, such as disruptive technologies or political discourses. Mental models are internal representations that describe how people understand the structure and functioning of external systems. They form the cognitive foundation of laypersons’ belief systems and shape how information and values are integrated. To investigate such belief systems, methods that capture both explicit and implicit layers of meaning are needed. This symposium presents two complementary approaches for mapping mental models that differ in their degree of explicitness and the level of participant engagement required. At the explicit end, Cognitive-Affective Maps (CAMs) visualize belief systems as networks of emotionally evaluated concepts and relations. At the more implicit end, the Triads Task captures belief systems of individuals and groups in a standardized way, based on ratings of the similarity of three stimuli.
Julius Fenn (University of Freiburg) presents tools that make CAMs applicable within experimental paradigms. These tools enable researchers to manipulate belief structures, measure changes in affective–cognitive coherence, and integrate CAMs as dependent or independent variables in controlled designs.
René Dutschke (TU Dresden) presents the Triads Task, tracing its roots in Kelly’s theory of personal constructs and showcasing its applications as a research tool.
Irina Monno (University of Freiburg) explores the potential of CAMs as a method to capture and measure changes in belief systems by visualizing shifts in cognitive and emotional structures.
Michael Gorki (University of Freiburg) uses CAMs alongside questionnaires to examine how “laypersons” conceptualize sustainability, a highly contested concept among the public, in academia, and in policy-making.
Bettina Harder (University of Erlangen-Nuremberg) evaluates the use of CAMs in diagnostic and counseling contexts. CAMs have proven to be helpful diagnostic tools by providing in-depth information in a structured way, thereby identifying individually relevant starting points for interventions to deal with stress or test anxiety.
Together, these approaches demonstrate a continuum of mapping techniques, from explicit to implicit. By highlighting their advantages, limitations, and practical potential, the symposium provides insights into new methods for investigating belief systems related to technological, ethical, psychological, and societal issues.
Prospective memory (PM) refers to the ability to remember intended actions and execute them at a specific time point (time-based PM) or in response to a specific event (event-based PM) in the future (for an overview, see Bayen et al., 2024). PM is pivotal for goal-directed behavior in everyday life, and everyday errors frequently involve PM failures (Crovitz & Daniel, 1984; Kvavilashvili et al., 2001; Terry, 1988). Over the past decades, PM research has evolved into a broad field encompassing laboratory paradigms, naturalistic studies, neurophysiological studies, and metacognitive and cognitive modeling approaches. Despite this progress, many key questions about the mechanisms supporting PM across different contexts, time frames, and age groups remain unanswered.
This symposium brings together recent advances from diverse domains of PM research. The first talk focuses on the functional neuroanatomy of event-based and time-based PM in healthy older adults. The second talk examines age differences in metacognitive monitoring and control processes in PM, focusing on how these mechanisms support the management of intentions across adulthood. The third talk focuses on a metacognitive path model of time-based PM, tested empirically on multiple datasets. The fourth talk introduces a novel bi-factor modeling approach that separates bottom-up spontaneous retrieval from top-down preparatory processes in event-based PM. Finally, the fifth talk introduces a new cognitive model that disentangles the prospective component—remembering that something must be done—and the retrospective components of event-based PM, namely remembering what must be done and when. Together, this symposium provides an integrative perspective on current theoretical and methodological developments in PM research and concludes with a discussion of challenges in measuring PM performance and promising directions for future work.
In this session, three early-career researchers present their project ideas and experimental designs for planned studies to receive feedback from experts in the field.
Our aesthetic experience of external stimuli is shaped by our cultural and individual backgrounds, as well as by various perceptual, cognitive and emotional processes. Empirical research has identified numerous factors that influence our aesthetic perception of stimuli, including their characteristics and the context in which stimuli are perceived. Recently, the question has also been raised about the impact of engaging with art on other areas of life. This symposium will present various approaches to empirical aesthetics research. In the first talk, Barbara Mühlbauer will ask whether two evaluation methods — rating and pairwise comparison — produce comparable aesthetic judgements and how stable these judgements remain over time. Claudia Muth's second talk will address how specific stimulus characteristics, such as the complexity of visual matrix patterns and ambivalence in artistic photographs, influence various components of aesthetic perception. She will also report results concerning the relationship between these characteristics and eye movements. In the third talk, Ronald Hübner will explore potential causes of individual preferences for different visual spiral patterns, attributing them to individual creative dispositions. In the fourth talk, Gemma Schino will explore the affective and cognitive changes that arise from engaging with meaningful artwork through interactive analysis and interaction with others. She will present a model that considers the interactive contribution of affect and cognitive strategies, drawing a connection to the general influence of emotional processes on cognition. In the final talk, Jan-Filip Tameling will present a cognitive network model mapping the concepts relevant to experiencing art. He will propose a visual art schema that could help identify the cognitive mechanisms involved in aesthetic experiences. 
Overall, the symposium provides a comprehensive insight into the multifaceted world of empirical aesthetics research, offering an overview of the various approaches, models, and perspectives employed.
Bilinguals require language control to regulate the activation of their known languages. Language switching paradigms are commonly used to investigate the processes underlying bilingual language control. Several approaches fall under the umbrella term “language switching,” whose defining feature is the alternation between languages, thereby requiring bilinguals to select one language over another on each trial. In this symposium, four talks present innovative research using language switching to explore language control processes in both comprehension (Talks 1-2) and production (Talks 3-4). Across studies, different paradigms (e.g., picture naming, voluntary switching, sequential switching) and methodologies, from behavioral measures to virtual reality, are employed. The focus extends beyond single-word processing to also include sentence-level processing. In Talk 1, Aaron Vandendaele examines proactive control mechanisms during language switching using a semantic classification task involving written word categorization. Luigi Falanga (Talk 2) investigates the flexibility of control and the role of interference in language-switching comprehension tasks. His study explores how recent and ongoing cross-language interference influences comprehension in complex listening contexts. In Talk 3, Andrea Philipp discusses how between-language conflict at the lemma level shapes language control during switching. She examines the impact of cross-language interference on lexical selection and how conflict resolution processes facilitate language switching. Finally, in Talk 4, Maria Sanchez investigates sentence production in interactions with virtual interlocutors. Her study uses both voluntary and cued language-switching paradigms to examine how speakers adapt their language choice based on the interlocutor’s accent and linguistic background.
Together, these talks showcase new directions in the study of bilingual language control, illustrating how innovative paradigms and technologies are reshaping our understanding of the cognitive mechanisms underlying language switching.
In visual foraging, people search continuously for multiple targets across space and time. Perceptual, attentional, and decision-making processes act together to efficiently collect visual targets from dynamic environments. This symposium addresses how flexibly humans adapt their behaviour in these complex search tasks, which resemble many real-world search scenarios. Thornton and Kristjánsson will discuss the impact of grouping on “foraging for change” when searching in time-varying environments. Kristjánsson et al. investigate whether cross-modal synchrony cues influence foraging. Sauter and Tünnermann demonstrate how statistical learning guides the discovery of spatiotemporal hotspots in dynamic foraging tasks, highlighting sensitivity to environmental regularities. Hughes and Clarke present advances in modelling foraging behaviour to capture the dynamics of target selection. Finally, Wiegand shows that a foraging task with memory load can uncover both cognitive impairments and compensatory strategies in patients with Korsakoff syndrome and alcohol use disorder. Together, these contributions demonstrate how adaptive foraging behaviour emerges in response to the complex demands of dynamic, interactive environments.