A non-invasive approach for driving virtual talking heads from real facial movements / Fanelli, Gabriele; Fratarcangeli, Marco. - (2007), pp. 117-120. (Paper presented at the 1st International Conference on 3DTV, held in Kos, Greece, May 07-09, 2007) [10.1109/3dtv.2007.4379425].

A non-invasive approach for driving virtual talking heads from real facial movements

FRATARCANGELI, Marco
2007

Abstract

In this paper, we describe a system to accurately control the facial animation of synthetic virtual heads from the movements of a real person. These movements are tracked using Active Appearance Models from videos acquired with a low-cost webcam. The tracked motion is then encoded with the widely used MPEG-4 Facial and Body Animation standard, so each animation frame is expressed by a compact subset of the Facial Animation Parameters (FAPs) defined by the standard. For each FAP, we precompute the corresponding facial configuration of the virtual head through an accurate anatomical simulation. By linearly interpolating, frame by frame, the facial configurations corresponding to the active FAPs, we obtain the animation of the virtual head in a simple and straightforward way.
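The per-frame blending step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each FAP's precomputed facial configuration is stored as a vertex-displacement field relative to the neutral mesh, and that a frame's pose is obtained by linearly combining those displacements weighted by the frame's FAP amplitudes. All names (`blend_frame`, `fap_displacements`, etc.) are hypothetical.

```python
import numpy as np

def blend_frame(neutral, fap_displacements, fap_values):
    """Return deformed vertex positions for one animation frame.

    neutral           : (V, 3) array of neutral-pose vertex positions
    fap_displacements : dict mapping FAP id -> (V, 3) displacement of the
                        mesh at full FAP activation (assumed precomputed,
                        e.g. by an anatomical simulation)
    fap_values        : dict mapping FAP id -> normalized amplitude for
                        this frame (decoded from the tracked motion)
    """
    frame = neutral.copy()
    # Linear blend: add each active FAP's displacement, scaled by its
    # amplitude for this frame.
    for fap_id, amplitude in fap_values.items():
        frame += amplitude * fap_displacements[fap_id]
    return frame

# Toy usage: a 2-vertex "mesh" and two illustrative FAPs.
neutral = np.zeros((2, 3))
disp = {3: np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]),
        5: np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])}
frame = blend_frame(neutral, disp, {3: 0.5, 5: 1.0})
```

Because the blend is linear in the FAP amplitudes, interpolating amplitudes between keyframes and then blending gives the same result as blending at the keyframes and interpolating the meshes, which is what makes the frame-by-frame scheme straightforward.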
2007
1st International Conference on 3DTV
3d motion animation; active appearance models; face tracking; facial animation; inverse compositional algorithm
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/322502
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 5
  • Web of Science: 0