16:30 - 18:00
Parallel sessions 3
Room: HSZ - N2
Chair/s:
Jan Pohl
With the rapid advancement of artificial intelligence technologies and increased media coverage of robots, humans are becoming more aware of artificial agents, and some even probe their capabilities by interacting with online agents such as chatbots. This exposure provides a unique opportunity to study how humans generalize social perception beyond biological agents, offering insights into the flexibility and boundaries of social cognition.
Understanding how people perceive and interact with these agents is central not only to the design of effective human-robot collaboration but also to uncovering fundamental aspects of social cognition. When and why do people attribute sociality, intentionality, or even moral capacities to machines? And how do seemingly simple cues in robot behavior shape complex human perceptions?
This symposium brings together five empirical contributions investigating the psychological mechanisms underlying complex attributions toward robots in diverse interaction contexts. It explores how perceptions of a robot’s social nature influence cooperative engagement and trust, how subtle behavioral or paralinguistic cues shape impressions of humanness and identity, and how movement patterns guide inferences about underlying intentions or mental capabilities.
Taken together, these investigations reveal how both low-level perceptual cues and higher-order cognitive evaluations jointly shape human responses in collaborative and observational contexts, offering new insights into how social cognition operates at the boundary between human and artificial agents.
The series of talks aims to foster interdisciplinary discussion among experimental psychologists, cognitive scientists, and roboticists, offering new insights into how humans make sense of increasingly social machines and what this reveals about the architecture of human social cognition.
Submission 105
When Less Is More: Robots Showing Cues for a Single Behavioral Characteristic Perceived as More Human-Like
SymposiumTalk-05
Presented by: Jan Pohl
Jan Pohl 1, Kristina Nikolovska 2, Francesco Maurelli 2, Arvid Kappas 2, Bernhard Hommel 3
1 Humboldt University of Berlin, Germany
2 Constructor University, Germany
3 Shandong Normal University, China
In everyday interaction, people show a consistent tendency to anthropomorphize nonhuman agents or attribute aspects of selfhood to them. In previous studies, we found that people (over-)generalize from the presence of a single behavioral cue to selfhood (such as the ability to learn or attention sharing) to the presence of other (absent) cues, suggesting that cueing a small aspect of selfhood is sufficient to trigger the entire selfhood concept with all its implications. Here, we tested the prediction that single selfhood cues are as effective as multiple selfhood cues in eliciting selfhood attributions to an artificial agent. In three experiments, we compared selfhood ratings elicited by a robot exhibiting behavioral cues of efficiency, learning sensitivity, and equifinality with ratings for a robot exhibiting only one of these cues. Contrary to our expectation, participants did not attribute selfhood to both robots to the same degree: they attributed more selfhood to the single-cue robot in Experiments 1 (single efficiency cue) and 3 (single equifinality cue), while in Experiment 2 (single learning cue), the multiple-cues robot received higher ratings on the manipulated characteristics as well as on context sensitivity (effects on selfhood attribution were negligible). In sum, we conclude that a single selfhood cue tends to elicit stronger selfhood attributions overall than multiple cues.