Rhythmic Body Movements of Laughter / Niewiadomski, Radoslaw; Mancini, Maurizio; Ding, Yu; Pelachaud, Catherine; Volpe, Gualtiero. - (2014), pp. 299-306. (Paper presented at the 16th International Conference on Multimodal Interaction, held in Istanbul, Turkey) [10.1145/2663204.2663240].
Rhythmic Body Movements of Laughter
Mancini, Maurizio; Volpe, Gualtiero
2014
Abstract
In this paper we focus on three aspects of multimodal expressions of laughter. First, we propose a procedural method to synthesize rhythmic body movements of laughter based on spectral analysis of laughter episodes. For this purpose, we analyze laughter body motions from motion capture data and reconstruct them with appropriate harmonics. We then reduce the parameter space to two dimensions; these are the inputs of the actual model, which generates a continuum of rhythmic laughter body movements. We also propose a method to integrate the rhythmic body movements generated by our model with other synthesized expressive cues of laughter, such as facial expressions and additional body movements. Finally, we present a real-time human-virtual character interaction scenario in which a virtual character applies our model to respond to a human's laughter in real time.
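The pipeline the abstract outlines (spectral analysis of a motion signal, then reconstruction from a few dominant harmonics) can be sketched as follows. This is purely an illustrative toy, not the paper's implementation: the signal, sample rate, and `reconstruct_with_harmonics` helper are assumptions for the example.

```python
import numpy as np

def reconstruct_with_harmonics(signal, n_harmonics=3):
    """Approximate a periodic motion signal by keeping only its
    n strongest frequency components (illustrative sketch)."""
    spectrum = np.fft.rfft(signal)
    magnitudes = np.abs(spectrum)
    magnitudes[0] = 0.0  # ignore the DC offset when ranking components
    keep = np.argsort(magnitudes)[-n_harmonics:]
    filtered = np.zeros_like(spectrum)
    filtered[keep] = spectrum[keep]
    filtered[0] = spectrum[0]  # restore the mean level of the signal
    return np.fft.irfft(filtered, n=len(signal))

# Synthetic "torso displacement" with a dominant 5 Hz rhythm plus noise,
# standing in for a motion-capture channel of a laughter episode.
rng = np.random.default_rng(0)
fs = 120                      # assumed sample rate (Hz)
t = np.arange(0, 2, 1 / fs)
motion = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 10 * t)
motion += 0.1 * rng.standard_normal(len(t))

approx = reconstruct_with_harmonics(motion, n_harmonics=2)
err = np.sqrt(np.mean((approx - motion) ** 2))
```

Keeping only the dominant harmonics denoises the captured motion and reduces each episode to a handful of spectral parameters, which is the kind of compact description that can then be projected down to a low-dimensional (here, two-parameter) control space.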