Humans adaptively select different learning strategies in different tasks
Mon-H2-Talk 3-3005
Presented by: Pieter Verbeke
The Rescorla-Wagner rule remains the most popular tool for describing how humans quickly adapt to novel tasks and develop task-specific representations. Nevertheless, it cannot fit human learning in tasks that require higher levels of cognitive control. Previous work has proposed several hierarchical extensions of this learning rule; however, it remains unclear when a flat (non-hierarchical, Rescorla-Wagner-like) versus a hierarchical strategy is optimal, or which strategy humans actually implement. To address this question, we applied a nested modelling approach, evaluating multiple models across multiple reinforcement learning tasks both computationally (which approach performs best) and empirically (which approach fits human data best). We considered ten empirical datasets (N = 410) spanning three reinforcement learning tasks of varying complexity (and thus varying control requirements). Our results demonstrate that different tasks are best solved with different learning strategies, and that humans adaptively select the learning strategy that yields the best performance. Specifically, while flat learning fitted human data best in simple tasks, humans employed hierarchically layered models in more complex tasks.
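The flat baseline referred to above is the Rescorla-Wagner delta rule, which nudges a value estimate toward each observed reward. A minimal sketch, in which the learning rate and the feedback sequence are illustrative assumptions rather than parameters from the study:

```python
def rw_update(value, reward, alpha=0.1):
    """One Rescorla-Wagner update: move the value estimate toward the
    observed reward by a fraction alpha of the prediction error."""
    return value + alpha * (reward - value)

# Hypothetical example: learning the value of one stimulus from binary feedback.
value = 0.0
rewards = [1, 1, 0, 1, 1, 1, 0, 1]  # assumed feedback sequence, not study data
for r in rewards:
    value = rw_update(value, r)
```

Hierarchical extensions of the kind compared in the study layer additional learning processes on top of this rule (e.g., learning which task representation to apply), rather than replacing the delta-rule update itself.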
Keywords: Adaptive model selection, Hierarchical learning, Cognitive control, Task representations