Working memory (WM) is central to human cognition, underpinning a wide range of complex cognitive functions. Many daily activities, like reading or following a conversation, depend on it. It is a dynamic system that undergoes substantial changes throughout childhood, and consequently, its interactions with other cognitive systems also evolve. Understanding the effects of WM development is therefore essential for elucidating broader cognitive growth.
This symposium brings together researchers investigating the development of working memory in childhood through complementary perspectives, ranging from large-scale adaptive data modeling to experimental and eye-tracking approaches.
In this symposium, we will first target the question of how WM capacities develop and to what extent WM is necessary for developing mathematical abilities in primary school children. In the second part, we will focus on proactive functioning, that is, the capacity to anticipate and prepare ourselves for a task. We will discuss when it emerges in WM, how it develops across ages, and how to assess the presence or absence of proactive strategies. Finally, we will discuss the links between the sensorimotor system and WM, by presenting the effect of a body movement-based strategy on WM performance.
Collectively, these insights will offer a comprehensive and diverse overview, unified by a shared emphasis on the mechanisms and developmental trajectories of working memory in childhood.
As artificially intelligent systems become embedded in daily life, understanding the cognitive foundations of our interactions with them is essential for shaping the future of human-technology relations. This symposium brings together complementary perspectives that examine how humans think, perceive, and interact with intelligent systems, focusing on social robots and large language models (LLMs). The studies contribute toward a deeper understanding of the contexts and cues under which we perceive and act toward AI as social units or actors (Gambino et al., 2020; Nass et al., 1994). The first contribution by Katharina Kühne compared the perception of robotic and human agents through motor resonance, finding that both evoke comparable implicit motor responses irrespective of anthropomorphic detail or biomechanical feasibility. These results highlight how humans simulate robotic agents internally. The second study by Jairo Perez-Osorio examined how the reliability of a humanoid robot’s gaze affects human–robot collaboration, finding that consistent gaze improved attentional alignment, task efficiency, and coordination, while unreliable gaze disrupted performance. The findings highlight the critical role of social cues in supporting adaptive joint action with artificial agents. Two further contributions focus on communication with chatbots. Across four rounds of a classic referential communication task, Anita Körner compared performance between a basic version of a conversational agent (ChatGPT) and a version prompted to use grounding strategies. She found that time per round decreased, and more so for the group that interacted with the agent prompted with grounding strategies, indicating the establishment of more common ground. Lea Petrasch investigated whether humans apply linguistic perspective taking when communicating with chatbots (LLMs).
In an adaptation of Keysar’s (1994) paradigm on the illusory transparency of intention, results showed an egocentric bias in judgements of the chatbots’ understanding. To round things off, Marcel Binz will discuss foundational unified models of human cognition: models that do not merely predict, simulate, and explain behavior in a single domain but instead offer a unified account of the mind. Together, these contributions foster understanding of how humans make sense of artificial communicators and of how our cognition and perception of such agents can best be studied in a digital social world.
Karin Maria Bausenhart, Markus Huff, Jeffrey M Zacks
Human cognition is shaped by the way humans perceive and segment continuous, dynamic, complex, and multimodal perceptual input into meaningful, discrete episodes. Such events and their boundaries (transitions that separate meaningful units of experience from each other) play a crucial role in structuring memory, guiding attention, and enhancing understanding. Perceiving an event boundary – for example, triggered by changes in time, location, protagonist, goal, or social interaction – evokes updates in working memory and thereby prompts the formation of new or adaptation of existing event models. This segmentation process may thus enhance comprehension and recall by creating clear divisions between contexts, allowing individuals to better encode, retrieve, and reason about sensory experience. Events and their boundaries also influence predictive processes: within a given event, reliable forecasts can be made based on contextual continuity and abstract event schemata, but predictions become less reliable when crossing event boundaries. Recent models suggest that increased uncertainty and error in predictive processing in itself may drive the updating of event models in working memory, thus reinforcing the link between predictive processing and event segmentation. Overall, events and their boundaries serve as fundamental units of organization in cognitive processing, enabling humans to make sense of and coherently act upon a dynamic and often unpredictable world. In this symposium, we will present novel empirical and theoretical developments from psychology and cognitive science that explore the functions and mechanisms of event cognition. We will focus in particular on how boundaries affect the perception and segmentation (vs. 
integration) of dynamic input, how event models are formed within and across modalities, and how dynamic input, schema-based prediction, and contextual factors interplay to shape event representations and higher-level cognitive processes such as categorization, memory, and problem-solving.
The ability to judge one’s own confidence is a core ability of human metacognition. Giving informative confidence ratings is crucial in many situations: Humans making decisions, either individually or in groups, rely on their own estimates of uncertainty. But finding adequate measures for the quality of confidence ratings is a challenge. The two major approaches to tackle this challenge will be contrasted in this symposium: model-based and model-free. On the one hand are computational process models of the formation of confidence ratings in humans. The first speaker, Matthias Guggenmos, provides an overview and categorization of these models. The most prominent among them extends classical signal detection theory (in which perceptual sensitivity, d’, is measured) to analogously measure metacognitive sensitivity, meta-d’. Together with nine others, this prominent model is evaluated on a collection of 13 experimental data sets by the second speaker, Manuel Rausch. His results should concern researchers in the field: The meta-d’/d’ model does not provide satisfactory results. The third speaker, Simge Hamaloglu, drills deeper into the model's mechanisms: As in classical signal detection theory, the meta-d’/d’ model estimates (metacognitive) criteria that determine the point at which low confidence turns into high confidence. She focuses on these criteria to differentiate whether a stimulus is actually perceived or only inferred from other cues. On the other hand, contrasting these model-based approaches, classical information theory has inspired ways of measuring metacognitive ability in a model-free fashion. The fourth speaker, Sascha Meyen, introduces this idea, in which metacognitive ability is measured as transmitted information (in bits). Taken together, this symposium will pinpoint the contention between model-based and model-free approaches to measuring metacognitive ability.
It will highlight challenges in terms of empirical fit and interpretability, and thereby guide future development of both approaches in tandem.
A growing body of literature documents that perception and action are supported by short-lived bindings between stimulus and response features. Notably, the relationship between binding and retrieval processes on the one hand and learning mechanisms on the other is complex and a point of ongoing debate in current cognitive research. While the concepts of binding and retrieval as proposed in action control research, e.g., by the Binding and Retrieval in Action Control (BRAC) framework, closely resemble processes in learning and memory on a theoretical level, empirical findings largely speak against a close relation. In this symposium, we explore recent views on the relations between binding/retrieval and learning processes across different types of learning effects.
We will present findings from a broad range of experimental paradigms like stimulus-response and response-response binding, contingency learning, and evaluative conditioning.
These data will be used to highlight different perspectives on the intersections of binding and learning effects. Five talks will unravel how potent factors like contingency awareness, number and frequency of presentations, and time since the last stimulus occurrence affect binding/retrieval and/or learning effects. Together, these findings further our understanding of the relation between binding and learning.
When making decisions or providing a judgment, individuals often seek and receive advice from others. They may ask a friend whether or not they should spend their holidays in Japan and how much they should plan to budget for such a stay. Advice taking is typically investigated in a judge-advisor system. The judge provides an initial judgment before being introduced to the advice of the advisor. Afterward, the judge provides their final judgment. In this symposium, we combine advances in advice-taking research, outlining new perspectives for the field.
The first contribution demonstrates that individual differences can affect whether and how individuals take advice, and how strongly these influences have been overlooked. The second contribution presents a meta-analysis on how advice taking varies across different study contexts and designs, using a dual hurdle model. The third contribution compares advice taking based on aggregated versus non-aggregated advice from multiple advisors, investigating why aggregated advice is heeded more by judges. The last two contributions focus on advice taking from algorithms. The fourth contribution investigates algorithmic advice, demonstrating that, even without explicit communication, advice can shape competition and collaboration among individuals. Finally, the fifth contribution examines algorithmic and hybrid advice combining human and algorithmic input, demonstrating no algorithm aversion but instead algorithm appreciation.
Negation has long been a central topic in psychology, linguistics, and the cognitive sciences, with interest in its nature and functions continuing to grow. Understanding negation is cognitively demanding: negative sentences are often associated with higher processing costs and error rates. A prominent view holds that comprehending negation involves representing two mental models (the negated situation and the actual one) and selectively inhibiting the former. Despite the early emergence of "no" in children’s vocabularies, full mastery of sentential negation develops relatively late. Beyond its role as a logical operator, negation serves diverse discourse functions, from denying plausible assumptions to correcting misinformation. While negation is a linguistic universal, its realization varies substantially across languages, and the processing consequences of these differences remain underexplored. Moreover, the influence of negation extends beyond language, shaping memory, attitudes, and behavior.
Part 1 of this double symposium examines how negation is typically interpreted, which mechanisms are engaged, and how these processes play out cross-linguistically. Elena Albu asks how negation interacts with relative adjectives (Is a boy who is not short of medium height - or tall?). Claudia Maienborn and Frauke Buscher use denial contexts to contrast rejections of world-knowledge violations with rejections of semantic violations. Mechteld Van den Hoek Ostende probes whether inhibitory control is routinely recruited by studying children with ADHD, who often show difficulties with inhibition. Daniel Maurer employs negated cues in a spatial cueing paradigm to test whether comprehenders can orient directly to the actual facts or must first activate - and then inhibit - the negated alternative. Finally, Svetlana Mnogogreshnova compares Spanish and German, asking whether the earlier placement of the negation marker in Spanish relative to German modulates the mechanisms engaged during processing.
Narrative is not just a way to package information; it is cognitive infrastructure with its specific architecture. This symposium, spanning five perspectives, shows how narrative architecture shapes thinking across contexts.
The first talk investigates how narrative competence supports mathematics, especially word problems. An online adult study linked narrative skills to performance (reaction time and accuracy) across problem types, including carry/borrow operations and tasks that varied number relevance. This points to cross-domain links between narrative skill and math problem-solving. The second talk shows how early oral narrative macrostructure predicts later reading comprehension, with strong evidence in Greek–German bilingual learners. The results align with models like the Simple View of Reading and stress the value of early narrative abilities for later literacy. The third talk explores how coherence relations connect parts of a text and guide meaning making. Interpretations of the same relation can shift with language and cultural perspective. Using an annotation-based approach across originals and translations, the work maps these differences and explores computational models to capture them. The fourth talk shows how literary reading may be shaped not only by text-internal features (language use, themes) but also by extrinsic cues, i.e. paratextual information such as signals of a work’s canonical status. An online study shows how paratextual cues about a novel excerpt’s literary quality (Booker Prize nomination vs. none displayed on the cover) influence story perception, reading experience, and text processing. The fifth talk examines the effect of narrative structure on revising mental models after corrections (debunking effect) versus sticking to misinformation (continued influence effect). Study 1 varied psychological distance (Germany vs. another continent) and emotional valence (positive vs. negative); Study 2 varied correction design, testing whether including and ordering explanations (why the misinformation is false) impacts debunking.
To sum up, across these studies, narrative appears as a basic mental tool across domains that selects what matters, links ideas, and guides belief updating. From math to bilingual literacy, from cross-linguistic interpretation to paratext effects and misinformation correction, the common message is clear: shaping narrative structures and cues can meaningfully steer learning, comprehension, and reasoning.
The growing integration of technology into people's lives highlights the importance of examining its impact on experience and behavior. Experimental approaches help to determine the psychological processes underlying this impact. This symposium summarizes experimental studies examining various contexts of technology use and core concerns of Engineering Psychology and Human Factors. Applying various experimental approaches, these talks address major concepts of Engineering Psychology and Human Factors, such as situation awareness, cognitive load, and technology adaptation, in classical domains such as human-AI interaction, human-automation interaction, and teleoperation, and highlight the value and feasibility of rigorous experimental approaches even in complex and applied settings. The first talk by Alexander Reisinger examines how much lead time remote drivers need to effectively regain situation awareness and safely take control of highly automated vehicles during event-based remote driving tasks, highlighting the benefits of providing augmented visual information from the vehicle. The second talk by Andreas Schrank explores how different camera perspectives and visual augmentations influence remote assistants’ performance and situation awareness when supervising highly automated vehicles, showing that the optimal perspective depends on the driving scenario and that augmentation can compensate for poor visibility in adverse weather. The third talk by Matthias Arend introduces and validates a new implicit measure of situation awareness called SAMBA, comparing it with established explicit methods and showing that combining SAMBA with the traditional SAGAT approach can provide a more comprehensive and less intrusive assessment of operator awareness during teleoperation tasks.
The fourth talk by Romy Müller examines how people evaluate AI image classifications using concept-based explainable AI, showing that participants preferred explanations with image snippets that precisely matched the original image and rated generalized or imprecise explanations significantly lower, indicating that users value precision over robustness in AI interpretations. The fifth talk by Judith Josupeit highlights the benefits of using virtual reality (VR) for rigorous experimental manipulations in applied contexts. In addition, the talk demonstrates how AI can be used in VR experiments.
The introduction of new technologies has always shaped societies. Artificial intelligence (AI) applications, especially AI chatbots, are already part of everyday human life. Robots – for example in healthcare but also in other service areas – are also becoming more and more common. Generally, perceptions of these new technologies are mixed. Whereas some of them are widely accepted (e.g., use of generative AI tools like ChatGPT or DeepL), others are highly controversial (e.g., use of AI in classrooms or robots as companions in elderly homes). This raises the question of which factors influence human perceptions of, and ultimately human interaction with, AI and robots. The aim of this symposium is to present novel insights into human-AI and human-robot interaction by taking three different perspectives: (i) How far do social perceptions extend to robots, and how does this influence interaction with robots? (ii) What factors shape humans’ interaction with generative AI tools, and how does such interaction impact them? (iii) Do people differ in their perception of AI and robots? To provide answers to these questions, the first talk investigates the perception of robots as social actors. More specifically, the talk focuses on how similar robots are perceived to be, for example, to human partners. The second talk then tackles the question of whether established social heuristics (such as the bystander effect) govern human behavior toward robots. Moving from embodied artificial actors to generative AI tools, the third talk focuses on the influence of external factors (explainability, content, culture) on the perception of an AI chatbot. The fourth talk investigates factors influencing the choice to use generative AI as a cognitive offloading tool and its consequences for human memory and performance. In the final talk, the question of whether artificial intelligence and robots are perceived differently is discussed.
Jointly, these talks provide a broad overview of human-AI and human-robot interaction by examining the topic from different perspectives.
The Simon effect – characterized by quicker responses when the location of the imperative stimulus corresponds to the position of the required response – has been recognized for decades (Simon, 1969) and is considered a valuable window into fundamental mechanisms of information processing. Simon effects have been found to vary depending on both spatial arrangement of stimuli and participants’ intentions, highlighting that the underlying cognitive processes are flexible and subject to modulation. Traditionally, Simon tasks are administered to individuals working in isolation, which does not reflect the inherently social nature of humans.
Therefore, this symposium spotlights the role of social information in modulating the Simon effect. Across four presentations and an integrative discussion, speakers will examine how agency attributions, perspective taking, and the presence of a co-actor influence performance in the Simon task. The findings will illuminate the social foundations of the classic Simon task.
A feeling of confidence accompanies most of our decisions – whether we face uncertainty, evaluate risks and rewards, or make repeated choices over time. As research on metacognition expands beyond the domain of isolated perceptual judgments, computational models of confidence are increasingly being applied to dynamic and value-driven contexts, providing new insights into how people monitor and adjust their beliefs across decisions. This symposium brings together recent work that explores how confidence is formed, updated, and used in valuation and learning.
The session opens with Robin Vloeberghs, who connects the first and second sessions with a critical perspective on the common assumption that individual decisions are independent. He demonstrates how fluctuations in internal decision criteria systematically influence confidence across repeated choices.
Second, Sebastian Hellmann introduces a computational framework that integrates Cumulative Prospect Theory with an SDT-like confidence model, jointly capturing risky decision making and metacognitive evaluation, thereby connecting valuation under uncertainty with the principles highlighted across the other talks.
Turning from described to learned risky outcomes, Rebecca West uses computational modelling to examine how people monitor their uncertainty when generalizing knowledge from learned risky options to unfamiliar ones. She investigates the strategies people use to infer the mean and variance of unknown payoff distributions through similarity-based generalisation, and how they track their own uncertainty in making these inferences.
Mean and variability are also key aspects of the context in many other learning paradigms. Alexandre Lietard investigates how confidence adapts to such environmental changes in value-based learning, showing that participants’ confidence increases with overall reward magnitudes, even when accuracy does not improve because of higher variability.
Going beyond classical reinforcement learning, Florian Scholten explores metacognition as certainty in attitude acquisition. By visualizing trajectories of confidence accompanying binary choices in evaluative probabilistic learning, he detects patterns of uncertainty reduction in forming positive and negative attitudes.
Together, the symposium compiles an integrative picture of how confidence is generated and updated across the domains of valuation, perception, and learning. By combining formal modeling with empirical data, this symposium highlights principles that link decision uncertainty, subjective confidence, and adaptive behavior within a unified computational framework.
We examine recent advances in the psychophysical investigation of cognitive representations and mechanisms. The overarching question is how we can use psychophysical measurement to learn something about the cognitive representations and their functional relevance in the human mind. We will investigate questions in the domains of time and size perception as well as motion prediction and will apply advanced psychophysical methods to these questions. F. Wichmann will give a general overview of how internal visual representations can be estimated. R. Johansson and P. Kelber will present recent work on time perception: R. Johansson will discuss time and intensity judgements, and P. Kelber will present boundary conditions for visual duration discrimination. D. Oberfeld-Twistel will discuss how biases observed in pedestrians' arrival time estimation for approaching vehicles can be captured by a Bayesian observer model. Finally, K. Bhatia will ask what we can learn from visual size discrimination about the cognitive representations underlying the visual guidance of perception and action.
Beyond the more traditional paradigm of advice taking, which is at the center of the symposium “New advances in advice taking research: Cognitive, social, and algorithmic perspectives”, this symposium highlights paradigms and cases in which dependency among individuals is less structured. In five talks, we present and discuss evidence from paradigms featuring dependent judgments and belief updating and highlight how others influence individuals’ judgments and beliefs in various ways.
The first contribution highlights how planned missing data designs can help us measure belief updating when no initial judgment is elicited. The second contribution draws on sequential collaboration, a method for aggregating estimates that does not include initial judgments, and examines whether contributors are influenced by social information about others. The third contribution investigates how framing and repetition of information systematically influence not only subjective truth judgments but also confidence, which in turn has been associated with reduced information seeking. In the fourth contribution, belief updating in response to scientific hypotheses is compared under different orderings and under either sequential or simultaneous presentation of evidence. Finally, the last contribution examines how trust in science shapes individuals’ belief updating for scientific evidence, considering both source expertise and ambiguity of evidence.
Negation has long been a central topic in psychology, linguistics, and the cognitive sciences, with interest in its nature and functions continuing to grow. Understanding negation is cognitively demanding: negative sentences are often associated with higher processing costs and error rates. A prominent view holds that comprehending negation involves representing two mental models (the negated situation and the actual one) and selectively inhibiting the former. Despite the early emergence of "no" in children’s vocabularies, full mastery of sentential negation develops relatively late. Beyond its role as a logical operator, negation serves diverse discourse functions, from denying plausible assumptions to correcting misinformation. While negation is a linguistic universal, its realization varies substantially across languages, and the processing consequences of these differences remain underexplored. Moreover, the influence of negation extends beyond language, shaping memory, attitudes, and behavior.
Part 2 turns to acquisition and to influences of negation beyond language proper. Ulrike Schild shows that even two-year-olds struggle with sentential negation: an eye-tracking study finds no processing difference between “This is a mora” and “This is not a mora.” Chiara Boila examines whether preschoolers—whose executive functions are still maturing—face particular difficulty with negative utterances that require maintaining two pieces of information simultaneously. The remaining three contributions explore how negation shapes cognition outside the linguistic system: Emanuel Schütt investigates its role in attitude formation; Parker Smith tests ironic effects of negation on behavior; and Amit Singh asks how negative utterances influence event apprehension and contrast.
In our modern aging society, individuals are required to maintain functional independence well into old age. Cognitive deficits associated with aging can therefore have a detrimental impact on everyday functioning and quality of life. Hence, it is essential to better understand how cognitive processes change during healthy and pathological aging.
This symposium addresses this question by examining age-related changes in associative memory, arithmetic processing, and multitasking. Complementing experimental research methods, event-related potentials and multinomial modeling approaches were employed to identify the underlying mechanisms subserving cognitive functions. The presented studies involve a wide range of samples: non-clinical samples (healthy older adults compared to younger adults), subclinical samples (older adults with subjective cognitive decline), and clinical samples (Parkinson’s disease with or without cognitive impairment), the latter two compared to healthy controls. This methodological variety reflects the opportunities and challenges in the field of research on cognitive aging.
The scientific study of human memory can be approached from many different angles, for example by focusing on basic questions about memory or on questions that arise with regard to memory in daily life or in applied settings. This symposium will showcase the great variety of current memory research by bringing together researchers who pursue different research questions – some of them addressing basic characteristics of memory, others addressing memory in application. The first talk by Mohammad Hamzeloo will consider how odors can be turned into associative cues that are effective in evoking memories. The second talk by Mira Schwarz and Kai Homburger will examine how memories of odors can shape spatial orientation and support smell-based navigation. The third talk by Alp Aslan will look into memory for object locations and forgetting effects mediated by selective memory retrieval. The fourth talk by Jan Rummel and Luca Leon Bieling will deal with eyewitness source memory and the influence of ethnic bias on the cheater-detection benefit. Finally, the fifth talk by Marius Böltzig will focus on collective remembering, in particular future thinking of Ukrainians during the Russian invasion of their country. In addition to the five talks, a final discussion slot will provide an opportunity to discuss overlap and joint themes across studies, as well as potential avenues for future research.
With the rapid advancement in artificial intelligence technologies and increased media coverage of robots, humans are becoming more aware of artificial agents, some even testing them by interacting with online agents like chatbots. This exposure provides a unique opportunity to study how humans generalize social perception beyond biological agents, offering insights into the flexibility and boundaries of social cognition.
Understanding how people perceive and interact with these agents is central not only to the design of effective human-robot collaboration but also to uncovering fundamental aspects of social cognition. When and why do people attribute sociality, intentionality, or even moral capacities to machines? And how do seemingly simple cues in robot behavior shape complex human perceptions?
This symposium brings together five empirical contributions investigating the psychological mechanisms underlying complex attributions toward robots in diverse interaction contexts. It explores how perceptions of a robot’s social nature influence cooperative engagement and trust, how subtle behavioral or paralinguistic cues shape impressions of humanness and identity, and how movement patterns guide inferences about underlying intentions or mental capabilities.
Taken together, these investigations reveal how both low-level perceptual cues and higher-order cognitive evaluations jointly shape human responses in collaborative and observational contexts, offering new insights into how social cognition operates at the boundary between human and artificial agents.
The series of talks aims to foster interdisciplinary discussion among experimental psychologists, cognitive scientists, and roboticists, offering new insights into how humans make sense of increasingly social machines and what this reveals about the architecture of human social cognition.
Written script, including numbers, letters, and letter-strings, is a recent cultural invention central to letter- and number-literate societies. In such societies, humans learn early on to recognise glyphs and map them fluently onto specific sounds and concepts. This symposium explores how the brain accomplishes this feat, using multiple complementary lenses to understand the processing of linguistic and mathematical symbols, the degree to which these representations are distinct, and the interactions between visual symbol recognition and the abstract processes of language and numerosity.
The first talk introduces a predictive coding-motivated computational model of letter recognition, showing how these principles might explain the recognition of letters in noisy environments. This work suggests that predictive coding accounts of word recognition may also apply to isolated letters. The second talk uses an optimal transport framework to model the space of early visual representations of letter symbols revealed by EEG, exploring how such representations may be altered in dyslexia. This work tests whether dyslexia also results in weaker neural alignment with computational models of letter representations. The third talk presents an analysis of human fMRI and macaque electrocorticography responses to naturalistic images, suggesting a shared prominent representation of stimuli related to both orthography and numerosity. This finding is discussed in relation to the notion of a proto-architecture for mathematical cognition in the higher-level visual cortex of non-human primates. The fourth talk examines interactions between the processing of Arabic digits and language. This study exploits the discrepancy between the base-10 system of Arabic numerals and the partly base-20 system of French number words, finding that native French speakers recruit language during a numerical task even when language is redundant. The fifth talk explores how the brain processes words with varying degrees of misspelling. Using MEG data, it examines connectivity between lower visual areas and the left ventral occipitotemporal cortex (lvOT), suggesting that the lvOT processes real words in a feedforward manner but engages feedback mechanisms for misspelled words and pseudowords.
Combining experimental and computational approaches, this symposium advances our understanding of how the brain maps arbitrary visual forms into meaningful symbolic representations, and how these processes interact with language and numerosity.
Research on metacognition investigates how people understand and regulate their own cognitive processes. This symposium addresses how metacognitive monitoring judgments are formed and how they influence effective learning. The first two talks focus on the underlying basis and accuracy of metacognitive judgments: Schulz, Bröder, and Undorf show that people integrate multiple cues when making metacognitive control decisions. Leipold and Berthold find that Judgments of Remembering and Knowing (JORKs) differ from traditional Judgments of Learning (JOLs) in their underlying memory processes, although the previously reported accuracy advantage of JORKs was not replicated. In the third talk, Schaper and Ingendahl present evidence on how metacognitive judgments shape item and source memory. The last two talks provide insights into more applied aspects of metacognition: Zawadzka and Hanczakowski show how feedback motivates learners to work out general knowledge facts for themselves. Finally, Undorf, Ingendahl, Janson, Wissel, and Münzer demonstrate that JOLs predict learning behavior and success in a higher education learning setting. Together, the talks provide new insights into the mechanisms and consequences of metacognitive monitoring for learning and memory.
In order to perceive and meaningfully interact with the world around us, our sensory systems need to interpret incoming information. This interpretation process is well illustrated by illusions. With some illusions we perceive very different things in one and the same input, as in the famous Necker cube or “The dress”, which can be seen as blue and black or as white and gold. Other illusions make us perceive colors where there are none, as in the watercolor illusion, or cause-and-effect relationships and animacy from simple dots. Illusions are therefore a wonderful tool for understanding more about how perception works. In this symposium, we will pursue this question using a variety of experimental methods and very different illusions in order to learn about aspects of perception ranging from auditory motion perception to robotic vision. In the first talk, Meike Kriegeskorte and colleagues will use auditory apparent motion to investigate which factors influence how object correspondence is established, i.e., how object identity is perceived despite changes in location over time. In the second talk, Shalila Freitag and colleagues will present EEG correlates of perceptual (un-)certainty and the role of stimulus predictability when participants observe stimuli with varying degrees of ambiguity and visibility (Necker lattices and smiley faces). In the third talk, Ben Sommer and colleagues will investigate perceived causality in a paradigm in which a disc can be perceived either as launching another disc or as passing across it. In particular, they use visual adaptation to examine the influence of a launch or pass context on an ambiguous display. In the fourth talk, Vebjørn Ekroll will present magic tricks built around the illusion of absence that work better than one would expect based on the trick's method and our understanding of how perception works.
In the last talk, Aravind Rao Battaje and colleagues will present work on whether robotic perceptual models can predict population-level and individual human responses to visual illusions, using the examples of the fill-in color aftereffect and silencing by motion.
Recent technological and cultural change has introduced new and fast-evolving challenges to social perception and evaluation. Deepfakes, misinformation, artificial agents, and other technologically mediated phenomena pose novel inputs to these systems, and the psychological mechanisms through which perceivers interpret, believe, and evaluate them remain incompletely understood.
This symposium examines these issues through the lens of the beholder’s share: the extent to which what we “see” in others—including artificial agents—is shaped by predictions grounded in prior knowledge, beliefs, and emotional context. The psychology of the perceiver matters ever more as technologies produce increasingly convincing social signals, such that beliefs and contextual cues—like knowing an image is fake—become distinctive factors in determining their impact.
Emotion plays a central role across the five presentations, each addressing how affective meaning interacts with belief and authenticity in social cognition. Julia Baum’s talk examines EEG correlates of social evaluation under potential misinformation, identifying neural markers of susceptibility and effective intervention. Alexander Leonhardt investigates how intentionality evoked by affective knowledge and robot appearance jointly shape mind attribution and moral-emotional evaluation of humanoid robots. Martin Maier’s presentation explores how emotionally relevant deepfake faces and scenes influence neural responses and evaluations, revealing asymmetries in how positive versus negative content is discounted when believed to be artificial. Annika Ziereis will show behavioral and EEG data examining the processing of naturally photographed or AI-generated facial expressions, evaluating how the actual and perceived authenticity of the emotional cues influences neural and behavioral responses. Finally, Jana Vanek turns to the mechanisms of social-perceptual change itself, showing through EEG studies of meme-like humor how new contextual information can trigger sudden perspective shifts and reorganize social meaning in real time.
Together, these investigations illuminate how emotions, beliefs, and expectations guide perception in increasingly uncertain and often artificially social environments. By focusing on the interplay between psychological processes and cultural-technological transformations, the symposium aims to advance understanding of how humans navigate authenticity, agency, and moral evaluation in a rapidly changing social world.
Language production, far from happening in a vacuum, is shaped by socio-emotional and thematic contexts and by the goals and qualities of social interactions. This symposium explores how semantic, social, and emotional aspects shape language production at different levels of granularity, from access to the mental lexicon to free verbal interaction. The symposium kicks off with three talks on the continuous naming paradigm, known to induce cumulative semantic interference (CSI), i.e., slower naming as each additional member of a (semantic) category is named. Marisha Herb presents pooled analyses of seven experiments and introduces cosine similarity as a unifying measure to quantify different types of semantic relations. The next two talks use browser-based versions of the same paradigm to examine subtypes of thematic relations that have received comparatively little empirical attention so far: Dimitra Tsiapou investigates emotional language production, exploring how emotional action verbs (related to the basic emotions happiness, sadness, fear, anger, disgust, and surprise) elicit an emotion-specific CSI effect. Annika Speckhahn examines social context in language production, demonstrating how words from associatively related social categories (children’s play, conflict, parenthood) shape the CSI effect. Shifting focus to interactive language use, Kirsten Stark presents findings from three online experiments on verbal deception and honesty. She shows that while lying is slower than truth-telling, truth-telling is far from immune to the social-deceptive context, highlighting the planning, control, and monitoring processes involved.
Finally, Giusy Cirillo takes a further turn towards real-life interactions and explores how early vocabulary acquisition is shaped by social alignment between toddlers and caregivers: using a multiphase experimental paradigm with free interaction, referential, and object recognition tasks, she examines how 22- and 30-month-old toddlers’ early language acquisition is modulated by the way caregivers adapt their language to the toddler’s age and knowledge. Throughout these talks, the symposium showcases innovative experimental, browser-based, and response-time-sensitive methods for studying language production in both experimental and real-life contexts and across various age groups.
The symposium will be chaired by Kirsten Stark and Prof. Dr. Rasha Abdel Rahman (rasha.abdel.rahman@hu-berlin.de). Prof. Abdel Rahman will not give a talk herself.
Automatisation is crucial for everyday human functioning: it frees limited cognitive resources and allows information to be processed repeatedly with relatively little effort. Yet, workhorse of cognition though it is, automatisation does not always work to our advantage: when a task requires non-typical actions, it misguides us. Such situations offer a valuable window into the nature of automatic processing and of cognition in general.
Several domains, including numerical cognition, investigate the automatic processing of certain stimuli to better understand information processing.
In this symposium, we look into how numerical information is processed automatically even when it is not required by task demands. In particular, we discuss how automatically processed semantic information about numbers, specifically their magnitude, can become associated with space, and the limitations of these associations in individuals with different levels of mathematical skill.
While numbers are associated with space in multiple ways (cf. Spatial-Numerical Associations), this symposium explores the ways in which semantic information about numbers is triggered even when not required by task instructions, and how this information interacts with spatial processing.
First, we discuss whether the well-researched SNARC effect (i.e., the association of small/large magnitude numbers with the left/right response side) is triggered when participants are asked to judge non-semantic features of the stimuli, such as orientation (Talk 1, V. Prpic) or colour (Talk 2, K. Cipora). We also explore whether such effects differ between the general population and professional mathematicians.
Following up on the links between the automaticity of number processing, its spatial associations, and their relation to mathematical expertise, we discuss how numerical magnitude influences font size judgments (i.e., the size congruity effect), and whether these associations differ between professional mathematicians and control groups (Talk 3, M. Sroka).
Going beyond traditional paradigms with participants seated in front of a computer screen, we look into whether generating random numbers, which does not require magnitude processing per se, affects spatial decision making in virtual reality (Talk 4, M. Murgia).
We conclude with a talk about breaking the automaticity of S-R mappings in the SNARC effect and whether this makes the SNARC more predictive of math abilities (Talk 5, J.-P. van Dijck).