Computational Modeling of the Development of Abstract Object Representations
Wed—HZ_12—Talks8—8101
Presented by: Jochen Triesch
What are the origins of abstract knowledge about objects? Infants and toddlers learn about objects quite differently from today's artificial intelligence systems. Here we aim to better understand these processes by developing computational models of how infants and toddlers acquire abstract object representations and the ability to recognize objects and object categories irrespective of viewpoint, distance, and other viewing conditions. To this end, we have developed novel learning approaches for deep neural networks that exploit the temporal and/or multimodal structure of sensory information during extended interactions with objects. For example, we harness head-mounted eye tracking in toddlers and train computational models on toddlers' first-person visual input, demonstrating that strong object representations can result from just minutes of such experience. Furthermore, we highlight the benefits of toddlers' gaze behavior for successful learning. We also consider learning in models receiving computer-rendered visual inputs, where we can precisely control the input statistics. We show how additional linguistic input, even if rare and noisy, promotes the formation of abstract object categories. Furthermore, we demonstrate how our time-based learning approach can lead to the emergence of very abstract concepts such as "kitchen object" or "garden object". Finally, we study the role of behavior and of knowledge about executed manipulation actions (e.g., how an object was turned) and demonstrate how this additional information can further enrich the learned representations. Overall, we elucidate which computational principles appear to underlie the emergence of abstract object representations in infants and toddlers.
Keywords: object perception, computational model, abstract representation
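The abstract does not specify the implementation of the time-based learning approach, but the core idea of learning from temporal structure can be illustrated with a minimal temporal contrastive (InfoNCE-style) objective, in which temporally adjacent frames of a visual stream are treated as positive pairs. All function names and the toy data below are invented for illustration; this is a sketch of the general technique, not the authors' model.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere for cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def time_contrastive_loss(z, temperature=0.1):
    """InfoNCE-style loss over a sequence of frame embeddings z (T x D).

    Each frame t takes its temporal successor t+1 as the positive;
    all other frames in the sequence serve as negatives.
    """
    z = l2_normalize(z)
    sim = z @ z.T / temperature               # pairwise similarities
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    # log-softmax over each row (exp(-inf) = 0 removes the diagonal)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    t = np.arange(len(z) - 1)
    return -log_prob[t, t + 1].mean()         # pull (t, t+1) pairs together

# Toy data: a smoothly varying visual stream (points moving along a circle)
# versus the same frames in shuffled temporal order.
angles = np.linspace(0.0, np.pi, 16)
smooth = np.stack([np.cos(angles), np.sin(angles)], axis=1)
shuffled = smooth[np.random.default_rng(0).permutation(len(smooth))]

loss_smooth = time_contrastive_loss(smooth)
loss_shuffled = time_contrastive_loss(shuffled)
```

Because adjacent frames of the smooth stream are also nearest neighbors in embedding space, the temporal objective assigns it a much lower loss than the shuffled stream, capturing the intuition that slowly varying natural input provides a useful learning signal.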