Submission 643
Studying Visual Illusions Through Robotic Perceptual Models
SymposiumTalk-05
Presented by: Aravind Battaje
Both robotic and human perception must extract complex structure from sensorimotor information in real time. This shared demand suggests a possible similarity in computational mechanisms despite differing physical substrates. Motivated by this conjecture, we investigated whether robotic perceptual models could shed light on the computational mechanisms underlying human vision. We used Active InterCONnect (AICON), a modeling framework that formulates perception as joint constraint resolution: it represents perceptual problems as networks of interdependent processes that simultaneously satisfy multiple constraints.
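The idea of perception as joint constraint resolution can be sketched in miniature. The code below is a hypothetical illustration only (it is not the AICON API): two scalar estimates, each tied to its own noisy observation, are also coupled by a mutual-consistency constraint, and iterative relaxation settles them on values that balance both demands. The function name `resolve` and all parameters are invented for this sketch.

```python
# Hypothetical minimal sketch of perception as joint constraint
# resolution (illustrative only; not the actual AICON framework).
# Two interdependent estimates iteratively pull each other toward
# mutual consistency while staying faithful to their observations.

def resolve(obs_a, obs_b, coupling=0.5, rate=0.1, steps=200):
    """Relax two scalar estimates under a consistency constraint."""
    a, b = obs_a, obs_b  # initialize estimates from the observations
    for _ in range(steps):
        # each update balances fidelity to the process's own
        # observation against agreement with the other process
        a += rate * ((obs_a - a) + coupling * (b - a))
        b += rate * ((obs_b - b) + coupling * (a - b))
    return a, b

a, b = resolve(0.0, 1.0)
# the estimates settle between their observations, drawn
# toward each other by the coupling term
```

With coupling strength 0.5 the fixed point lies at 1/4 and 3/4: each estimate concedes part of its observation to satisfy the shared constraint, which is the qualitative behavior the framework exploits.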
To study human vision through AICON, we examined two visual illusions as case studies: the Fill-in Color Aftereffect (Van Lier et al., 2009) and Silencing by Motion (Suchow & Alvarez, 2011). These illusions reveal constraints between shape and color perception, and between luminance and motion perception, respectively. AICON models of these illusions replicated human responses with high fidelity, even capturing individual variability. Viewed through the lens of AICON's computational mechanisms, much of the surprising diversity in human perception can be traced to basic model parameters, such as the time constants of color or motion processing. In essence, we demonstrate through the study of visual illusions that a modeling framework originally developed for robotics can help uncover fundamental mechanisms underlying human perception.
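How a single time constant can change what is perceived can be illustrated with a toy leaky integrator. This sketch is invented for illustration (the function `track` and its parameters are not from the study): a long time constant smooths a rapidly flickering stimulus into a near-constant estimate, while a short one follows the flicker, so the same input yields qualitatively different internal estimates depending on one parameter.

```python
# Illustrative sketch (not from the study): a leaky-integrator
# estimate with time constant tau. A larger tau makes the internal
# estimate track a changing stimulus more sluggishly, so fast
# changes are smoothed away -- one way a single parameter can
# produce different percepts of the same input.

def track(signal, tau, dt=0.01):
    """Low-pass filter a stimulus time course with time constant tau."""
    alpha = dt / tau
    est = signal[0]
    out = []
    for s in signal:
        est += alpha * (s - est)  # relax toward the current stimulus
        out.append(est)
    return out

# a stimulus that flips between two values every 0.2 s
stim = [1.0 if (i // 20) % 2 else 0.0 for i in range(400)]
fast = track(stim, tau=0.05)  # short time constant: follows the flicker
slow = track(stim, tau=1.0)   # long time constant: flicker averages out
```

Over the final flicker cycle, the fast tracker swings almost the full stimulus range while the slow tracker hovers near the mean, which is the kind of parameter-driven divergence the abstract attributes individual variability to.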