Submission 458
Size Discrimination in Perception and Action
SymposiumTalk-05
Presented by: Kriti Bhatia
Ganel et al. (2012, PLoS One) reported that grasping is more accurate than perceptual judgements in discriminating object size: When they presented participants with objects of slightly different sizes (0.5 mm), participants’ grip apertures during grasping accurately reflected this difference, while the accuracy of perceptual judgements (responses: “small” vs. “large”) was seemingly low at 59% (close to the 50% chance level). This was taken as further evidence that visual information is processed differently for action versus perception in the dorsal versus ventral cortical streams, respectively (Perception-Action Model), with actions assumed to be more veridical than perception. However, grip apertures cannot be directly compared to perceptual accuracy. Meyen et al. (2022, JEP:G) showed that the same underlying information can lead to a clear separation of the means (as in grasping), yet result in poor classification accuracy (as in perception). To solve this issue, continuous measures (grip apertures) can be dichotomized to calculate a corresponding (grasping) classification accuracy, which can then be compared to the perceptual accuracy. Following this idea, we conducted an improved replication of Ganel et al. (2012) with 48 participants and applied this new analysis. We found that grasping classification accuracy was 52.1 ± 0.6%, while perceptual judgement accuracy was much higher at 66.9 ± 1.2%. We also reanalyzed published results from other studies on size discrimination and obtained consistent results (53.1 ± 1.2% in grasping vs. 65.3 ± 1.3% in perceptual judgement). These results raise doubts about the assumption that grasping is more veridical than perception.
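The dichotomization idea can be sketched as follows. This is a minimal Python illustration with simulated, hypothetical aperture values (means, spread, and trial counts are assumptions, not data from the study); the median of the pooled distribution serves here as a simple cut point, not necessarily the exact procedure of Meyen et al. (2022):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated maximum grip apertures (mm) for two objects differing by 0.5 mm.
# The means separate clearly, but trial-to-trial noise is large
# (all values hypothetical, for illustration only).
apertures_small = rng.normal(loc=70.0, scale=3.0, size=200)
apertures_large = rng.normal(loc=70.5, scale=3.0, size=200)

# Dichotomize: classify a trial as "large" if its aperture exceeds the
# median of the pooled distribution (one simple choice of cut point).
cut = np.median(np.concatenate([apertures_small, apertures_large]))
correct = np.sum(apertures_small < cut) + np.sum(apertures_large > cut)
accuracy = correct / (len(apertures_small) + len(apertures_large))

print(f"Grasping classification accuracy: {accuracy:.1%}")
```

Despite the clearly separated means, the resulting trial-level classification accuracy lands only slightly above chance, which is the point of the comparison: once both measures are expressed as accuracies, they can be compared on the same scale.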