
Tune It or Don't Use It: Benchmarking Data-Efficient Image Classification

Brigato, L.; Iocchi, L.
2021

Abstract

Data-efficient image classification using deep neural networks in settings where only small amounts of labeled data are available has been an active research area in recent years. However, an objective comparison between published methods is difficult, since existing works use different datasets for evaluation and often compare against untuned baselines with default hyper-parameters. We design a benchmark for data-efficient image classification consisting of six diverse datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). Using this benchmark, we re-evaluate the standard cross-entropy baseline and eight methods for data-efficient deep learning published between 2017 and 2021 at renowned venues. For a fair and realistic comparison, we carefully tune the hyper-parameters of all methods on each dataset. Surprisingly, we find that tuning the learning rate, weight decay, and batch size on a separate validation split results in a highly competitive baseline, which outperforms all but one specialized method and performs on par with the remaining one.
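
As an illustration of the kind of baseline tuning the abstract refers to, the sketch below runs a plain grid search over learning rate, weight decay, and batch size for a cross-entropy classifier and selects the configuration by accuracy on a held-out validation split. This is a minimal sketch in PyTorch, not the authors' code: the grid values, the SGD-with-momentum optimizer, the epoch budget, and the helper names (train_and_validate, tune_baseline) are illustrative assumptions rather than the paper's actual search protocol.

# Illustrative sketch (not the published code): tune a cross-entropy baseline
# by grid search over learning rate, weight decay, and batch size, selecting
# the configuration with the best accuracy on a separate validation split.
import itertools
import torch
from torch import nn
from torch.utils.data import DataLoader

def train_and_validate(model_fn, train_set, val_set, lr, weight_decay,
                       batch_size, epochs=10, device="cpu"):
    """Train a fresh model with one configuration; return validation accuracy."""
    model = model_fn().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9,
                                weight_decay=weight_decay)
    criterion = nn.CrossEntropyLoss()
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=256)

    for _ in range(epochs):
        model.train()
        for inputs, targets in train_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()

    # Evaluate on the validation split only; the test set stays untouched.
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for inputs, targets in val_loader:
            preds = model(inputs.to(device)).argmax(dim=1).cpu()
            correct += (preds == targets).sum().item()
            total += targets.numel()
    return correct / total

def tune_baseline(model_fn, train_set, val_set,
                  lrs=(1e-1, 1e-2, 1e-3),          # assumed grid, for illustration
                  weight_decays=(1e-3, 1e-4, 1e-5),
                  batch_sizes=(16, 32, 64)):
    """Exhaustive grid search; returns the best (lr, weight_decay, batch_size)."""
    best_acc, best_cfg = -1.0, None
    for lr, wd, bs in itertools.product(lrs, weight_decays, batch_sizes):
        acc = train_and_validate(model_fn, train_set, val_set, lr, wd, bs)
        if acc > best_acc:
            best_acc, best_cfg = acc, (lr, wd, bs)
    return best_cfg, best_acc
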
18th IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021
machine learning; image classification; small datasets
04 Conference proceedings publication::04b Conference paper in volume
Tune It or Don't Use It: Benchmarking Data-Efficient Image Classification / Brigato, L.; Barz, B.; Iocchi, L.; Denzler, J.. - (2021), pp. 1071-1080. (Intervento presentato al convegno 18th IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021 tenutosi a Napoli; Italia) [10.1109/ICCVW54120.2021.00125].
Files attached to this item

Brigato_Tune_2021.pdf
Access: archive administrators only (contact the author)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 2.72 MB
Format: Adobe PDF

Brigato_preprint_Tune_2021.pdf
Access: open access
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: Creative Commons
Size: 2.68 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1618087
Citations
  • PMC: not available
  • Scopus: 7
  • Web of Science (ISI): 3