Engineering Human Gestures
Mon—HZ_7—Talks3—2703
Presented by: Stacy Marsella
Modeling natural, meaningful gestures in embodied virtual agents is crucial to achieving effective, human-like interactions between humans and agents. This talk delves into the challenges of computationally modeling gestures for virtual agents, with a particular focus on how models can capture the intricate relationship between gesture and meaning. Gestures are imbued with communicative intent, emotional expression, and context-specific meaning. To develop virtual agents that are socially expressive and contextually appropriate, it is essential to create models that reflect this complexity.
Drawing from interdisciplinary research in psychology, linguistics, and computer science, this talk will explore how gesture can be modeled to convey meaning and intention. By leveraging computational techniques, the talk will showcase how gestures can be dynamically generated based on an agent’s communicative intent and emotional states. Additionally, we will discuss the challenges in modeling not only individual gestures but also their coordination with verbal communication and the broader social context.
Recent advances in gesture modeling for virtual agents will be presented, illustrating how these systems can be used to simulate human behavior, as well as to study psychological phenomena in controlled environments. This modeling not only improves the realism and effectiveness of virtual agents but also contributes to a deeper understanding of how gestures function in human communication, enabling insights into the cognitive and social processes underlying gesture production.
Keywords: Gestures, Computational Models, Embodied Virtual Agents