
Implementing Expressive Gesture Synthesis for Embodied Conversational Agents / Hartmann, B.; Mancini, M.; Pelachaud, C. - 3881:(2005), pp. 188-199. (Paper presented at Gesture in Human-Computer Interaction and Simulation, 6th International Gesture Workshop, GW 2005, held at Valoria, France) [10.1007/11678816_22].

Implementing Expressive Gesture Synthesis for Embodied Conversational Agents

Mancini, M.
2005

Abstract

We aim to create an expressive Embodied Conversational Agent (ECA) and address the problem of synthesizing expressive agent gestures. In our previous work, we described the gesture selection process. In this paper, we present a computational model of gesture quality. Once a certain gesture has been chosen for execution, how can we modify it to carry a desired expressive content while retaining its original semantics? We characterize bodily expressivity with a small set of dimensions derived from a review of the psychology literature. We provide a detailed description of the implementation of these dimensions in our animation system, including our gesture modeling language. We also demonstrate animations with different expressivity settings in our existing ECA system. Finally, we describe two user studies we undertook to evaluate the appropriateness of our implementation for each dimension of expressivity, as well as the potential of combining these dimensions to create expressive gestures that reflect communicative intent.
2005
Gesture in Human-Computer Interaction and Simulation, 6th International Gesture Workshop, GW 2005
expressivity; agent; communication
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1528336

Citations
  • PMC: n/a
  • Scopus: 127
  • Web of Science (ISI): 57