11:00 - 12:30
Parallel sessions 2
Room: HSZ - N4
Chair: Sebastian Hellmann
A feeling of confidence accompanies most of our decisions – whether we face uncertainty, evaluate risks and rewards, or make repeated choices over time. As research on metacognition expands beyond the domain of isolated perceptual judgments, computational models of confidence are increasingly being applied to dynamic and value-driven contexts, providing new insights into how people monitor and adjust their beliefs across decisions. This symposium brings together recent work that explores how confidence is formed, updated, and used in valuation and learning.

The session opens with Robin Vloeberghs, who offers a critical perspective on the common assumption that individual decisions are independent. He demonstrates how fluctuations in internal decision criteria systematically influence confidence across repeated choices.
Second, Sebastian Hellmann introduces a computational framework that integrates Cumulative Prospect Theory with an SDT-like confidence model, jointly capturing risky decision making and metacognitive evaluation, thereby connecting valuation under uncertainty with the principles highlighted in the other talks.
Turning from described risky outcomes to those learned from experience, Rebecca West uses computational modelling to examine how people monitor their uncertainty when generalizing knowledge from learned risky options to unfamiliar ones. She investigates the strategies people use to infer the mean and variance of unknown payoff distributions through similarity-based generalization, and how they track their own uncertainty while making these inferences.
Mean and variability are also key aspects of the context in many other learning paradigms. Alexandre Lietard investigates how confidence adapts to such environmental changes in value-based learning, showing that participants' confidence increases with overall reward magnitude even when higher variability prevents accuracy from improving.
Going beyond classical reinforcement learning, Florian Scholten explores metacognition as certainty in attitude acquisition. By visualizing trajectories of confidence accompanying binary choices in evaluative probabilistic learning, he detects patterns of uncertainty reduction in forming positive and negative attitudes.
Together, these talks assemble an integrative picture of how confidence is generated and updated across the domains of valuation, perception, and learning. By combining formal modeling with empirical data, the symposium highlights principles that link decision uncertainty, subjective confidence, and adaptive behavior within a unified computational framework.
Submission 186
Experimental Assessment of the Confidence Choice-Congruent Bias in Value-Based Learning
SymposiumTalk-04
Presented by: Alexandre Lietard
Alexandre Lietard, Kobe Desender
KU Leuven, Belgium
Confidence serves as a crucial internal signal that supports adaptive processes in both learning and decision-making. This feeling reflects the estimated probability that a decision is correct. In perceptual decision-making, one notable deviation from optimal probability computation, known as the choice-congruent bias, describes the tendency to place higher weight on evidence that favors the chosen option. Consequently, confidence often rises with the overall strength of evidence, even when accuracy does not improve.
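The signature of the choice-congruent bias described above can be illustrated with a minimal simulation. The sketch below is a hypothetical toy model, not the authors' actual model: two options generate noisy evidence, the stronger sample is chosen, and confidence overweights evidence for the chosen option (weight w > 0.5). Shifting both evidence means upward (higher overall evidence) leaves accuracy unchanged, because accuracy depends only on the evidence difference, but raises confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(baseline, n=100_000, d=0.5, sigma=1.0, w=0.7):
    """Toy choice-congruent confidence model (illustrative assumptions only).

    baseline -- common shift added to both options' evidence means
    d        -- evidence advantage of the correct option
    w        -- weight on chosen-option evidence; w > 0.5 produces the bias
    """
    e_correct = rng.normal(baseline + d, sigma, n)  # evidence for correct option
    e_wrong = rng.normal(baseline, sigma, n)        # evidence for incorrect option
    chose_correct = e_correct > e_wrong             # pick the stronger sample
    e_chosen = np.where(chose_correct, e_correct, e_wrong)
    e_unchosen = np.where(chose_correct, e_wrong, e_correct)
    # Biased confidence: chosen-option evidence is overweighted relative to
    # the unchosen option, so adding a common baseline inflates confidence
    # by (2w - 1) * baseline while leaving the choice itself unaffected.
    conf = w * e_chosen - (1 - w) * e_unchosen
    return chose_correct.mean(), conf.mean()

acc_lo, conf_lo = simulate(baseline=0.0)
acc_hi, conf_hi = simulate(baseline=2.0)
# Accuracy is essentially identical across baselines; mean confidence is higher
# in the high-baseline condition, mirroring the dissociation described above.
```

With w = 0.5 (balanced weighting) the baseline cancels and the dissociation disappears, which is what makes overall evidence strength a useful manipulation for separating confidence from accuracy.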

In this study, we investigated whether a similar bias occurs in value-based learning by testing whether stronger overall evidence leads to higher confidence. Our findings show that increasing the average reward reliably increases confidence while leaving accuracy unaffected. Nonetheless, computational modeling suggests that this effect does not necessarily stem from a biased confidence computation. Thus, although manipulating average reward offers a promising approach for separating confidence from accuracy, it may not constitute a direct measure of the choice-congruent bias.