A Deeper Look at Dataset Bias / Tommasi, Tatiana; Patricia, Novi; Caputo, Barbara; Tuytelaars, Tinne. - (2017), pp. 37-55. - Advances in Computer Vision and Pattern Recognition. doi: 10.1007/978-3-319-58347-1_2
A Deeper Look at Dataset Bias
Tommasi, Tatiana; Patricia, Novi; Caputo, Barbara; Tuytelaars, Tinne
2017
Abstract
The presence of a bias in each image data collection has recently attracted a lot of attention in the computer vision community, as it reveals the limited generalization ability of any learning method trained on a specific dataset. At the same time, with the rapid development of deep learning architectures, the activation values of Convolutional Neural Networks (CNNs) are emerging as reliable and robust image descriptors. In this chapter we verify the potential of CNN features when facing the dataset bias problem. To this end, we introduce a large testbed for cross-dataset analysis and discuss the challenges faced in creating two comprehensive experimental setups by aligning twelve existing image databases. We conduct a series of analyses examining how the datasets differ from each other and evaluating the performance of existing debiasing methods under different representations. We draw important lessons on which parts of the dataset bias problem can be considered solved and which open questions still need to be tackled. © Springer International Publishing AG 2017.
| File | Size | Format | |
|---|---|---|---|
| Tommasi_A-Deeper-Look _2017.pdf | 1.52 MB | Adobe PDF | Contact the author |

Access: repository managers only
Type: publisher's version (published version with the publisher's layout)
License: all rights reserved
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.