Who's doing what: Joint modeling of names and verbs for simultaneous face and pose annotation / Luo, Jie; Caputo, Barbara; Ferrari, Vittorio. - PRINT. - (2009), pp. 1168-1176. (Paper presented at the 23rd Annual Conference on Neural Information Processing Systems, NIPS 2009, held in Vancouver, BC, Canada, 07-10 December 2009.)
Who's doing what: Joint modeling of names and verbs for simultaneous face and pose annotation
CAPUTO, BARBARA;
2009
Abstract
Given a corpus of news items consisting of images accompanied by text captions, we want to find out "who's doing what", i.e., associate names and action verbs in the captions with the faces and body poses of the persons in the images. We present a joint model for simultaneously solving the image-caption correspondences and learning visual appearance models for the face and pose classes occurring in the corpus. These models can then be used to recognize people and actions in novel images without captions. We demonstrate experimentally that our joint 'face and pose' model solves the correspondence problem better than earlier models covering only the face, and that it can recognize people and actions in new, uncaptioned images.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.