Seeing the invisible: Online use of rich physical constraints in perception
Mon-H4-Talk 2-1701
Presented by: Vivian Paulun
Visual perception of dynamic physical properties, such as viscosity or softness, is challenging: the observable behavior of an object depends on both its internal properties and external factors. For instance, how much an object deforms depends on its internal structure (compliance) and on the force applied to it. The brain must therefore disentangle and interpret the interplay of many scene factors. What if these external scene factors were invisible? Here we test this idea using computer graphics. Specifically, we simulated short sequences of different types of materials (liquid, granular, non-rigid) interacting with various rigid objects. Crucially, only the target material/object was rendered, not any of its surroundings. Observers were as accurate at matching the material properties in these reduced stimuli as in a control task with fully rendered videos of the same interactions. Strikingly, observers not only perceived the target material in rich detail but also perceived the physical 3D objects the material was interacting with. In fact, observers were able to accurately select the underlying invisible shape in an asymmetric matching task. These results can be understood if the brain imputes the hidden objects in a physically accurate manner. Our results are thus consistent with the hypothesis that people use an internal generative physics model online during perception. In contrast, we found that artificial neural networks trained on object or motion recognition showed lower accuracy in jointly estimating material properties and object shape, and that their underlying representations are inconsistent with those of human observers.
Keywords: perception, cognition, vision, intuitive physics, material perception