
Robust Place Categorization With Deep Domain Generalization / Mancini, Massimiliano; Rota Bulò, Samuel; Caputo, Barbara; Ricci, Elisa. - In: IEEE ROBOTICS AND AUTOMATION LETTERS. - ISSN 2377-3766. - 3:3(2018), pp. 2093-2100. [10.1109/LRA.2018.2809700]

Robust Place Categorization With Deep Domain Generalization

Massimiliano Mancini (first author); Barbara Caputo (penultimate author); 2018

Abstract

Traditional place categorization approaches in robot vision assume that training and test images have similar visual appearance. Therefore, any seasonal, illumination, or environmental change typically leads to severe degradation in performance. To cope with this problem, recent works have adopted domain adaptation techniques. While effective, these methods assume that some prior information about the scenario where the robot will operate is available at training time. Unfortunately, in many cases, this assumption does not hold, as we often do not know where a robot will be deployed. To overcome this issue, in this paper, we present an approach that aims at learning classification models able to generalize to unseen scenarios. Specifically, we propose a novel deep learning framework for domain generalization. Our method develops from the intuition that, given a set of different classification models associated with known domains (e.g., corresponding to multiple environments, robots), the best model for a new sample in the novel domain can be computed directly at test time by optimally combining the known models. To implement our idea, we exploit recent advances in deep domain adaptation and design a convolutional neural network architecture with novel layers performing a weighted version of batch normalization. Our experiments, conducted on three common datasets for robot place categorization, confirm the validity of our contribution.
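The weighted batch normalization described above can be illustrated with a minimal sketch: per-domain batch-norm statistics are blended with a convex combination of weights before normalizing a feature batch. This is our own simplified illustration, not the paper's exact formulation; the function name, the fixed weights, and the plain linear blending of means and variances are assumptions for clarity (in the paper, the weights are estimated per sample at test time).

```python
import numpy as np

def weighted_batchnorm(x, domain_means, domain_vars, weights, eps=1e-5):
    """Normalize features x with a convex combination of per-domain
    batch-norm statistics (a sketch of weighted batch normalization).

    x:            (N, C) feature batch
    domain_means: (D, C) per-domain mean statistics for D known domains
    domain_vars:  (D, C) per-domain variance statistics
    weights:      (D,) non-negative mixing weights (renormalized to sum to 1)
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # enforce a convex combination
    mean = weights @ domain_means       # (C,) blended mean
    var = weights @ domain_vars         # (C,) blended variance
    return (x - mean) / np.sqrt(var + eps)

# Example: with a one-hot weight vector, the blend reduces to ordinary
# normalization with the statistics of the selected domain.
x = np.array([[1.0, 2.0], [3.0, 4.0]])
means = np.array([[0.0, 0.0], [2.0, 3.0]])
variances = np.array([[1.0, 1.0], [4.0, 4.0]])
out = weighted_batchnorm(x, means, variances, weights=[0.0, 1.0])
```

Intermediate weight vectors interpolate between the known domains' statistics, which is the mechanism the paper exploits to handle samples from an unseen domain.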
Keywords: convolution; feedforward neural nets; generalisation (artificial intelligence); image classification; learning (artificial intelligence); robot vision; deep learning framework; robots; deep domain adaptation; robot place categorization; robust place categorization; deep domain generalization; test images; seasonal illumination; classification models; visual appearance; domain adaptation; convolutional neural network architecture; place categorization; training; data models; computational modeling; adaptation models; visualization; semantics; recognition; visual learning; semantic scene understanding
01 Journal publication::01a Journal article
Files attached to this item

File: Mancini_Robust-Place-Categorization_2018.pdf
Access: open access
Note: https://ieeexplore.ieee.org/document/8302933
Type: Post-print document (version after peer review, accepted for publication)
License: All rights reserved
Size: 1.04 MB
Format: Adobe PDF

File: Mancini_Robust-Place-Categorization_2018.pdf
Access: archive administrators only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 600.97 kB
Format: Adobe PDF
Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1189848
Citations
  • Scopus: 40