Towards affective agent action: Modelling expressive ECA gestures / Hartmann, B.; Mancini, M.; Pelachaud, C. - (2005). (Paper presented at the International Conference on Intelligent User Interfaces, Workshop on Affective Interaction, held in San Diego).
Towards affective agent action: Modelling expressive ECA gestures
Mancini, M.
2005
Abstract
To enable individualized displays of personality and emotion in Embodied Conversational Agents (ECAs), a generic agent architecture is augmented to generate variable idiosyncratic gesturing behaviors. A set of dimensions of expressivity that characterize individual variability is proposed along with a mapping of the identified dimensions onto low-level animation parameters. Gesture synthesis is modified at multiple planning stages. Semantic information about the structure and communicative function of the behaviors is taken into account to guide modifications. The implementation is tested in two evaluation studies with large groups of non-expert users.
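To give a concrete sense of the kind of dimension-to-parameter mapping the abstract describes, the following is a minimal illustrative sketch in Python. The dimension names (spatial extent, temporal extent, fluidity, power), the animation parameter names, and the scaling factors are all assumptions chosen for illustration; they are not the paper's actual mapping.

```python
# Illustrative sketch only, not the paper's implementation: maps hypothetical
# expressivity dimensions, each in [-1, 1], onto low-level animation parameters.
from dataclasses import dataclass

@dataclass
class Expressivity:
    spatial_extent: float = 0.0   # amplitude of the gesture (arm extension)
    temporal_extent: float = 0.0  # speed of the gesture stroke
    fluidity: float = 0.0         # smoothness/continuity between gestures
    power: float = 0.0            # dynamic force of the stroke

def to_animation_params(e: Expressivity) -> dict:
    """Linearly scale each expressivity dimension onto an assumed parameter range."""
    return {
        "stroke_amplitude": 1.0 + 0.5 * e.spatial_extent,   # wider/narrower reach
        "stroke_duration": 1.0 - 0.4 * e.temporal_extent,   # faster => shorter stroke
        "interpolation_tension": 0.5 * (1.0 + e.fluidity),  # smoother motion curves
        "acceleration_gain": 1.0 + 0.8 * e.power,           # sharper or softer onsets
    }

# Example: an expansive but low-power gesturing style.
print(to_animation_params(Expressivity(spatial_extent=0.8, power=-0.5)))
```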