
From acoustic cues to an expressive agent / Mancini, Maurizio; Bresin, Roberto; Pelachaud, Catherine. - (2006), pp. 280-291. (Paper presented at the 6th International Gesture Workshop, GW 2005, held on Berder Island, France) [10.1007/11678816_31].

From acoustic cues to an expressive agent

Mancini, Maurizio
2006

Abstract

This work proposes a new way of providing feedback on expressivity in music performance. Starting from studies on expressivity in music performance, we developed a system in which visual feedback is given to the user through a graphical representation of a human face. The first part of the system, previously developed by researchers at KTH Stockholm and at the University of Uppsala, allows the real-time extraction and analysis of acoustic cues from the music performance. The extracted cues are sound level, tempo, articulation, attack time, and spectrum energy. From these cues the system provides a high-level interpretation of the emotional intention of the performer, which is classified into one basic emotion, such as happiness, sadness, or anger. We have implemented an interface between that system and the embodied conversational agent Greta, developed at the University of Rome "La Sapienza" and the University of Paris 8. We model the expressivity of the agent's facial animation with a set of six dimensions that characterize the manner of behavior execution. In this paper we first describe a mapping between the acoustic cues and the expressivity dimensions of the face. We then show how to determine the facial expression corresponding to the emotional intention resulting from the acoustic analysis, using the sound level and tempo characteristics of the music to control the intensity and the temporal variation of muscular activation. © Springer-Verlag Berlin Heidelberg 2006.
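The pipeline the abstract describes (real-time cue extraction, classification into one basic emotion, and a mapping from cues onto the agent's expressivity dimensions) could be sketched as follows. This is a minimal illustration, not the paper's actual model: all thresholds, rules, and the dimension names `spatial_extent`, `temporal_extent`, and `power` are hypothetical placeholders for whatever mapping the paper defines.

```python
# Hypothetical sketch of the cue-to-expression pipeline outlined in the
# abstract. Thresholds and mappings below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AcousticCues:
    sound_level: float   # normalized 0..1 (soft..loud)
    tempo: float         # normalized 0..1 (slow..fast)
    articulation: float  # normalized 0..1 (legato..staccato)


def classify_emotion(cues: AcousticCues) -> str:
    """Toy rule-based classification into one basic emotion."""
    if cues.sound_level > 0.7 and cues.tempo > 0.6 and cues.articulation > 0.5:
        return "anger"      # loud, fast, detached playing
    if cues.sound_level < 0.4 and cues.tempo < 0.4:
        return "sadness"    # soft, slow playing
    return "happiness"      # default for moderate, lively playing


def expressivity_dimensions(cues: AcousticCues) -> dict:
    """Map cues onto a subset of expressivity dimensions: sound level
    drives the intensity of muscular activation, tempo its temporal
    variation, as the abstract sketches."""
    return {
        "spatial_extent": cues.sound_level,      # louder -> wider expression
        "temporal_extent": 1.0 - cues.tempo,     # faster -> shorter movements
        "power": cues.sound_level * cues.articulation,
    }


cues = AcousticCues(sound_level=0.9, tempo=0.8, articulation=0.7)
print(classify_emotion(cues))           # -> anger
print(expressivity_dimensions(cues))
```

In a real system these rules would be replaced by the fuzzy or statistical mapping learned from performance data; the sketch only shows the shape of the interface between the acoustic analysis and the facial-animation layer.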
6th International Gesture Workshop, GW 2005
Computer Science (all); Biochemistry; Genetics and Molecular Biology (all); Theoretical Computer Science
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1528184
Note: the data displayed here have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 5
  • Web of Science: 3