11:00 - 12:30
Parallel sessions 2
Room: HSZ - N5
Chairs:
Volker H. Franz, Rolf Ulrich
We examine recent advances in the psychophysical investigation of cognitive representations and mechanisms. The overarching question is how psychophysical measurement can be used to learn about cognitive representations and their functional relevance in the human mind. We investigate questions in the domains of time and size perception as well as motion prediction, applying advanced psychophysical methods throughout. F. Wichmann will give a general overview of how internal visual representations can be estimated. R. Johansson and P. Kelber will present recent work on time perception: R. Johansson will discuss time and intensity judgements, and P. Kelber will present boundary conditions for visual duration discrimination. D. Oberfeld-Twistel will discuss how biases observed in pedestrians' arrival time estimation for approaching vehicles can be captured by a Bayesian observer model. Finally, K. Bhatia will ask what we can learn from visual size discrimination about the cognitive representations underlying the visual guidance of perception and action.
Submission 458
Size Discrimination in Perception and Action
SymposiumTalk-05
Presented by: Kriti Bhatia
Kriti Bhatia 1, Tanja Huber 1, Sascha Meyen 1, Frederic Goehringer 2, Thomas Schenk 2, Volker H. Franz 1
1 University of Tübingen, Germany
2 LMU Munich, Germany
Ganel et al. (2012, PLoS One) reported that grasping is more accurate than perceptual judgements in discriminating object size: When they presented participants with objects of slightly different sizes (0.5 mm), participants’ grip apertures during grasping accurately reflected this difference, while the accuracy of perceptual judgements (responses: “small” vs. “large”) was seemingly low at 59% (close to the chance level of 50%). This was taken as further evidence that visual information is processed differently for action versus perception in the dorsal versus ventral cortical streams, respectively (Perception-Action Model), with actions assumed to be more veridical than perception. However, grip apertures, being continuous measures, cannot be directly compared to binary perceptual accuracy. Meyen et al. (2022, JEP:G) showed that the same underlying information can lead to a clear separation of the means (as in grasping), yet result in poor classification accuracy (as in perception). To solve this issue, continuous measures (grip apertures) can be dichotomized to calculate a corresponding (grasping) classification accuracy, which can then be compared to the perceptual accuracy. Following this idea, we conducted an improved replication of Ganel et al. (2012) with 48 participants and applied this new analysis. We found that grasping classification accuracy was 52.1 ± 0.6%, while perceptual judgement accuracy was much higher at 66.9 ± 1.2%. We also reanalyzed published results from other studies on size discrimination and obtained consistent results (53.1 ± 1.2% in grasping vs. 65.3 ± 1.3% in perceptual judgement). These results cast doubt on the assumption that grasping is more veridical than perception.
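The core analytical point (a clearly separated mean difference can coexist with near-chance classification accuracy) can be illustrated with a minimal simulation. This is a sketch with entirely hypothetical numbers (mean aperture, size difference, and noise level are assumptions, not the study's data); it shows the dichotomization idea in its simplest form, a median split of the continuous responses scored against the true object size:

```python
import random
import statistics

random.seed(1)

# Hypothetical simulation: two object sizes differing slightly; grip
# apertures are noisy continuous responses whose means track the sizes.
# All parameter values below are illustrative assumptions.
size_diff = 0.05  # cm, small mean aperture difference between objects
noise_sd = 0.4    # cm, trial-to-trial variability

small = [random.gauss(5.0, noise_sd) for _ in range(1000)]
large = [random.gauss(5.0 + size_diff, noise_sd) for _ in range(1000)]

# Dichotomize: classify each trial as "large" if its aperture exceeds
# the grand median, then score against the true object size.
cutoff = statistics.median(small + large)
correct = sum(a <= cutoff for a in small) + sum(a > cutoff for a in large)
accuracy = correct / (len(small) + len(large))

print(f"Mean aperture difference: "
      f"{statistics.mean(large) - statistics.mean(small):.3f} cm")
print(f"Classification accuracy after dichotomizing: {accuracy:.1%}")
```

With these assumed values the mean difference is reliably detectable across many trials, yet the trial-by-trial classification accuracy lands only slightly above chance, which is the pattern the abstract describes for grasping.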