Solving Visual Madlibs with Multiple Cues

TOMMASI, TATIANA
2016

Abstract

This paper focuses on answering fill-in-the-blank style multiple choice questions from the Visual Madlibs dataset. Previous approaches to Visual Question Answering (VQA) have mainly used generic image features from networks trained on the ImageNet dataset, despite the wide scope of questions. In contrast, our approach employs features derived from networks trained for specialized tasks of scene classification, person activity prediction, and person and object attribute prediction. We also present a method for selecting sub-regions of an image that are relevant for evaluating the appropriateness of a putative answer. Visual features are computed both from the whole image and from local regions, while sentences are mapped to a common space using a simple normalized canonical correlation analysis (CCA) model. Our results show a significant improvement over the previous state of the art, and indicate that answering different question types benefits from examining a variety of image cues and carefully choosing informative image sub-regions.
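The abstract describes scoring candidate answers by mapping image features and answer sentences into a common space with a normalized CCA model and comparing them there. The sketch below illustrates only that general idea, under stated assumptions: it uses scikit-learn's standard linear CCA rather than the paper's normalized variant, random placeholder arrays in place of the CNN image descriptors and sentence embeddings, made-up feature dimensions, and a hypothetical helper name (rank_answers). It is not the authors' implementation.

# Minimal sketch of CCA-based multiple-choice answer scoring (assumptions above).
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Placeholder training pairs: (image/region feature, answer-sentence feature).
X_img = rng.standard_normal((500, 128))   # stand-in for CNN image features
Y_txt = rng.standard_normal((500, 300))   # stand-in for sentence embeddings

# Learn linear projections of both modalities into a shared 64-d space.
cca = CCA(n_components=64, max_iter=1000)
cca.fit(X_img, Y_txt)

def rank_answers(image_feat, candidate_feats):
    """Score candidate answers by cosine similarity to the image in CCA space."""
    img_c, cand_c = cca.transform(image_feat[None, :], candidate_feats)
    return cosine_similarity(img_c, cand_c)[0]

# Usage: pick the best of four candidate answers for one test image.
scores = rank_answers(rng.standard_normal(128), rng.standard_normal((4, 300)))
print("best choice:", int(np.argmax(scores)))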
Conference: 27th British Machine Vision Conference, BMVC 2016
Keywords: Computer Science; Computer Vision and Pattern Recognition
Publication type: 04 Publication in conference proceedings::04b Conference paper in volume
Solving Visual Madlibs with Multiple Cues / Tommasi, Tatiana; Mallya, Arun; Plummer, Bryan; Lazebnik, Svetlana; Berg, Alexander C.; Berg, Tamara L. - ELECTRONIC. - (2016), pp. 1-13. (Paper presented at the 27th British Machine Vision Conference, BMVC 2016, held in York, United Kingdom) [10.5244/C.30.77].
Files attached to this record

Tommasi_Solving-Visual-Madlibs_2016.pdf
  Access: open access
  Type: Publisher's version (published version with the publisher's layout)
  License: All rights reserved
  Size: 1.76 MB
  Format: Adobe PDF

Tommasi_Abstract_Solving-Visual-Madlibs_2016.pdf
  Access: open access
  Type: Publisher's version (published version with the publisher's layout)
  License: All rights reserved
  Size: 696.96 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/924126
Citations
  • Scopus: 9