Automated Laughter Detection from Full-Body Movements / Niewiadomski, Radoslaw; Mancini, Maurizio; Varni, Giovanna; Volpe, Gualtiero; Camurri, Antonio. - In: IEEE Transactions on Human-Machine Systems. - ISSN 2168-2291. - vol. 46 (2016), pp. 113-123. [DOI: 10.1109/THMS.2015.2480843]
Automated Laughter Detection from Full-Body Movements
Mancini, Maurizio; Volpe, Gualtiero
2016
Abstract
In this paper, we investigate the detection of laughter from the user's nonverbal full-body movement in social and ecological contexts. Eight hundred and one laughter and nonlaughter segments of full-body movement were examined from a corpus of motion capture data of subjects participating in social activities that stimulated laughter. A set of 13 full-body movement features was identified, and corresponding automated extraction algorithms were developed. These features were extracted from the laughter and nonlaughter segments, and the resulting dataset was provided as input to supervised machine learning techniques. Both discriminative (radial basis function-support vector machines, k-nearest neighbor, and random forest) and probabilistic (naive Bayes and logistic regression) classifiers were trained and evaluated. A comparison of automated classification with the ratings of human observers for the same laughter and nonlaughter segments showed that the performance of our approach for automated laughter detection is comparable with that of humans. The highest F-score (0.74) was obtained by the random forest classifier, whereas the F-score obtained by human observers was 0.70. Based on the analysis techniques introduced in the paper, a vision-based system prototype for automated laughter detection was designed and evaluated. Support vector machines (SVMs) and Kohonen's self-organizing maps were used for training, and the highest F-score was obtained with SVM (0.73).
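The evaluation pipeline described in the abstract (per-segment feature vectors fed to supervised classifiers, compared by F-score) can be sketched as follows. This is a minimal illustration using scikit-learn, not the authors' implementation: the feature values below are synthetic stand-ins for the paper's 13 motion-capture features, and the dataset size is reduced for brevity.

```python
# Hypothetical sketch of the classification step: laughter (1) vs.
# non-laughter (0) segments, each described by 13 movement features.
# Synthetic data only; the paper's corpus has 801 real segments.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_segments, n_features = 200, 13

# Binary labels, and feature vectors whose mean shifts with the label
# so the two classes are (imperfectly) separable.
y = rng.integers(0, 2, size=n_segments)
X = rng.normal(size=(n_segments, n_features)) + y[:, None] * 0.8

# Two of the classifier families compared in the paper.
classifiers = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "RBF-SVM": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    # Cross-validated predictions, then a single F-score per classifier.
    pred = cross_val_predict(clf, X, y, cv=5)
    print(f"{name}: F-score = {f1_score(y, pred):.2f}")
```

On this synthetic data the absolute scores are meaningless; the point is the comparison protocol, where each classifier is scored on held-out segments and the F-scores are compared against each other (and, in the paper, against human observers).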