16:30 - 18:00
Parallel sessions 3
Room: HSZ - N5
Chair/s:
Elisabeth Hein
In order to perceive and meaningfully interact with the world around us, our sensory systems need to interpret the incoming information. This interpretation process is well illustrated by illusions. With some illusions we perceive very different things in one and the same input, as in the famous Necker cube or “The dress”, which can be seen as blue and black or as white and gold. Other illusions make us perceive colors where there are none, as in the watercolor illusion, or cause-and-effect relationships and animacy with simple dots. Illusions are therefore a wonderful tool for understanding how perception works. In this symposium, we will approach this question with a variety of experimental methods and very different illusions, in order to learn more about aspects of perception ranging from auditory motion perception to robotic vision. In the first talk, Meike Kriegeskorte and colleagues will use auditory apparent motion to investigate which factors influence how object correspondence is established, i.e., how object identity is perceived despite changes in location over time. In the second talk, Shalila Freitag and colleagues will discuss EEG correlates of perceptual (un)certainty and the role of stimulus predictability when participants observe stimuli with varying degrees of ambiguity and visibility (Necker lattices and smiley faces). In the third talk, Ben Sommer and colleagues will investigate perceived causality in a paradigm in which a disc can be perceived either as launching another disc or as passing across it. In particular, they use visual adaptation to examine the influence of a launch or pass context on an ambiguous display. In the fourth talk, Vebjørn Ekroll will present examples of magic tricks built around the illusion of absence that work better than one would expect given the method of the trick and current knowledge of how perception works.
In the last talk, Aravind Rao Battaje and colleagues will present work on whether robotic perceptual models can predict population-level and individual human responses to visual illusions, using the examples of the Fill-in Color Aftereffect and Silencing by Motion.
Submission 643
Studying Visual Illusions Through Robotic Perceptual Models
SymposiumTalk-05
Presented by: Aravind Battaje
Aravind Battaje 1, 3, Angelica Godinez 2, 3, Nina Hanning 2, 3, Martin Rolfs 2, 3, Oliver Brock 1, 3
1 Robotics and Biology Laboratory, Technical University of Berlin, Germany
2 Department of Psychology, Humboldt-University, Berlin, Germany
3 Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
Robotic and human perception both need to extract complex structures from sensorimotor information in real time. This suggests a possible similarity in computational mechanisms despite differing physical substrates. Motivated by this conjecture, we investigated whether robotic perceptual models could shed light on computational mechanisms underlying human vision. We used Active InterCONnect (AICON), a modeling framework that formulates perception as joint constraint resolution. In other words, it represents perceptual problems as networks of interdependent processes that simultaneously satisfy multiple constraints.
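To make the idea of "joint constraint resolution" concrete, one can picture a small network of coupled estimators that iteratively pull one another toward mutual consistency. The Python sketch below is purely illustrative and is not the authors' AICON implementation: the two estimators, the coupling weight, and the time constants `tau_color` and `tau_motion` are invented for this example, loosely inspired by the color- and motion-processing parameters mentioned in the abstract.

```python
def run_network(color_input, motion_input,
                tau_color=0.2, tau_motion=0.6, steps=200):
    """Toy network of two interdependent estimators.

    Each estimate is pulled toward its own sensory input AND toward
    consistency with the other estimate, so the network settles on a
    jointly consistent interpretation of both inputs.
    """
    color, motion = 0.0, 0.0
    for _ in range(steps):
        # Constraint 1: track the sensory input.
        # Constraint 2: stay consistent with the partner estimate.
        color += tau_color * ((color_input - color) + 0.5 * (motion - color))
        motion += tau_motion * ((motion_input - motion) + 0.5 * (color - motion))
    return color, motion

# With consistent inputs, both estimates converge to a shared interpretation.
c, m = run_network(1.0, 1.0)
```

Because each estimate is constrained both by its own input and by the other estimate, the network converges to a jointly consistent state; changing the time constants alters how it gets there, which is the kind of basic model parameter the talk relates to individual variability.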

To study human vision through AICON, we examined two visual illusions as case studies: the Fill-in Color Aftereffect (Van Lier et al., 2009) and Silencing by Motion (Suchow & Alvarez, 2011). These illusions reveal constraints between shape and color perception, and between luminance and motion perception, respectively. AICON models of these illusions replicated human responses with high fidelity, even capturing individual variability. The results reveal that, when viewed through the lens of AICON's computational mechanisms, the surprising diversity in human perception can be attributed to basic model parameters such as the time constants of color or motion processing. In essence, we demonstrate through the study of visual illusions that a modeling framework originally developed for robotics can help uncover fundamental mechanisms underlying human perception.