
Contextual factors and adaptative multimodal human-computer interaction: Multi-level specification of emotion and expressivity in Embodied Conversational Agents / Lamolle, Myriam; Mancini, Maurizio; Pelachaud, Catherine; Abrilian, Sarkis; Martin, Jean Claude; Devillers, Laurence. - (2005), pp. 225-239. (Paper presented at the 5th International and Interdisciplinary Conference CONTEXT 2005 - Modeling and Using Context, held in Paris, France.)

Contextual factors and adaptative multimodal human-computer interaction: Multi-level specification of emotion and expressivity in Embodied Conversational Agents

MANCINI, MAURIZIO;
2005

Abstract

In this paper we present an Embodied Conversational Agent (ECA) model able to display rich verbal and non-verbal behaviors. The selection of these behaviors should depend not only on factors related to her individuality, such as her culture, her social and professional role, and her personality, but also on a set of contextual variables (such as her interlocutor and the social conversation setting) and other dynamic variables (belief, goal, emotion). We describe the representation scheme and the computational model of behavior expressivity of the Expressive Agent System that we have developed. We explain how the multi-level annotation of a corpus of emotionally rich TV video interviews can provide context-dependent knowledge as input for the specification of the ECA (e.g., which contextual cues and levels of representation are required to enable proper recognition of the emotions). © Springer-Verlag Berlin Heidelberg 2005.
5th International and Interdisciplinary Conference CONTEXT 2005 - Modeling and Using Context
Computer Science (all); Biochemistry; Genetics and Molecular Biology (all); Theoretical Computer Science
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1528227
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: ND (not available)
  • Scopus: 4
  • Web of Science (ISI): 2