Modeling Human Spatiotemporal Prediction Ability with Feedforward and Recurrent Neural Networks.
Tue—Casino_1.811—Poster2—5605
Presented by: Nisa Alo
Motion extrapolation is the ability to predict an object's future location from its past locations along a dynamic trajectory. Behavioral and neural findings show that motion extrapolation occurs across different time scales, from possibly compensating for neural transmission delays within the visual system (milliseconds) to guiding goal-oriented behavior, such as target interception (seconds). Here, we investigate how spatiotemporal predictions at the longer time scales used in human cognitive processing rely on different neural computations, and how accumulated information from an object's past locations supports spatiotemporal prediction. We trained two types of models (feedforward and recurrent neural networks) to predict different trajectory lengths over different input lengths using a novel synthetic dataset. Prediction accuracy for both models increased with increasing input length, yet the recurrent model performed more consistently across input lengths than the feedforward model. Next, we had 21 participants perform a motion extrapolation task, predicting the location of a moving target under occlusion across different occlusion durations. Participants' performance decreased with longer occlusion periods. We then compared human and network model performance on the same task. Both network models underperformed relative to humans. Together, these results suggest that spatiotemporal information integrated over an increasing number of past time points yields more robust spatiotemporal predictions, and that long-range spatiotemporal predictions might benefit from a recurrent memory component. Nevertheless, feedforward and recurrent computations alone failed to replicate motion extrapolation at the level of human participants, indicating that more complex computations are required to match human performance.
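The core manipulation above, prediction accuracy as a function of how many past locations the model observes, can be illustrated with a minimal sketch. The code below is not the authors' dataset or network models; it assumes a simple constant-velocity synthetic trajectory with Gaussian noise and a linear least-squares extrapolator as a stand-in predictor, showing that error over a fixed prediction horizon shrinks as the input length grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trajectory(n_steps, noise=0.05):
    """Hypothetical synthetic 2-D trajectory: constant velocity plus noise.
    The paper's actual dataset is not specified in the abstract."""
    start = rng.uniform(-1, 1, size=2)
    velocity = rng.uniform(-0.1, 0.1, size=2)
    t = np.arange(n_steps)[:, None]
    return start + velocity * t + rng.normal(0, noise, size=(n_steps, 2))

def extrapolate(past, horizon):
    """Fit a line to the observed points (per coordinate) and extend it."""
    t = np.arange(len(past))
    coeffs = np.polyfit(t, past, deg=1)      # shape (2, 2): slope and intercept per axis
    t_future = np.arange(len(past), len(past) + horizon)[:, None]
    return coeffs[0] * t_future + coeffs[1]  # predicted future (x, y) positions

def mean_error(input_len, horizon=5, n_trials=200):
    """Mean Euclidean error between predicted and true future locations."""
    errs = []
    for _ in range(n_trials):
        traj = make_trajectory(input_len + horizon)
        pred = extrapolate(traj[:input_len], horizon)
        errs.append(np.mean(np.linalg.norm(pred - traj[input_len:], axis=1)))
    return float(np.mean(errs))

short_err = mean_error(input_len=3)
long_err = mean_error(input_len=20)
print(short_err, long_err)  # error decreases with more observed past locations
```

Even this crude linear predictor reproduces the qualitative trend reported above: integrating over more past time points stabilizes the velocity estimate and yields more robust extrapolation, which is the effect the feedforward and recurrent networks were compared on.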
Keywords: