
A visual odometry framework robust to motion blur / Pretto, Alberto; Menegatti, Emanuele; Bennewitz, Maren; Burgard, Wolfram; Pagello, Enrico. - PRINT. - (2009), pp. 2250-2257. (Paper presented at the 2009 IEEE International Conference on Robotics and Automation, ICRA '09, held in Kobe, Japan, 12-17 May 2009) [10.1109/ROBOT.2009.5152447].

A visual odometry framework robust to motion blur

PRETTO, ALBERTO;
2009

Abstract

Motion blur is a severe problem in images captured by legged robots and, in particular, by small humanoid robots. Standard feature extraction and tracking approaches typically fail when applied to image sequences strongly affected by motion blur. In this paper, we propose a new feature detection and tracking scheme that is robust even to nonuniform motion blur. Furthermore, we develop a framework for visual odometry based on features extracted from and matched across monocular image sequences. To reliably extract and track the features, we estimate the point spread function (PSF) of the motion blur individually for image patches obtained via a clustering technique, and we only consider highly distinctive features during matching. We present experiments performed on standard datasets corrupted with motion blur and on images taken by a camera mounted on walking small humanoid robots to show the effectiveness of our approach. The experiments demonstrate that our technique is able to reliably extract and match features and to generate a correct visual odometry, even in the presence of strong motion blur effects and without the aid of any inertial measurement sensor. © 2009 IEEE.
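The abstract gives no implementation details for the per-patch PSF estimation it mentions. As a rough, hedged illustration of the general idea (not the paper's actual algorithm; all names and parameters below are my own), the sketch estimates a linear motion-blur direction for a patch from its structure tensor — blur smooths intensities along the motion direction, so gradients are weakest there — and builds the corresponding linear-motion PSF kernel:

```python
import numpy as np

def estimate_blur_direction(patch):
    """Estimate the dominant linear motion-blur direction (radians) of a patch.

    Motion blur attenuates gradients along the motion direction, so the
    structure-tensor eigenvector with the smallest eigenvalue approximates
    the blur direction.
    """
    gy, gx = np.gradient(patch.astype(float))  # axis 0 = rows (y), axis 1 = cols (x)
    # 2x2 structure tensor averaged over the whole patch
    J = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    eigvals, eigvecs = np.linalg.eigh(J)  # eigenvalues in ascending order
    vx, vy = eigvecs[:, 0]                # eigenvector of the smallest eigenvalue
    return np.arctan2(vy, vx)

def linear_motion_psf(length, angle, size=15):
    """Build a normalized linear-motion PSF kernel of given length and angle."""
    psf = np.zeros((size, size))
    c = size // 2
    # Rasterize a line segment of the given length through the kernel center
    for t in np.linspace(-length / 2, length / 2, 4 * size):
        x = int(round(c + t * np.cos(angle)))
        y = int(round(c + t * np.sin(angle)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()
```

The paper additionally clusters patches so that each cluster shares a PSF estimate; the sketch above only shows the per-patch direction estimate and kernel construction.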
2009
2009 IEEE International Conference on Robotics and Automation, ICRA '09
Control and Systems Engineering; Software; Artificial Intelligence; Electrical and Electronic Engineering
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this record
No files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/951423
Warning: the data shown have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 68
  • Web of Science: not available