Multi-modality is a fundamental feature of biological systems, letting them achieve high robustness in perceptual skills while coping with uncertainty. Relatively recent studies have shown that multi-modal learning is a potentially effective add-on to artificial systems, allowing the transfer of information from one modality to another. In this paper we propose a general architecture for jointly learning visual and motion patterns: by means of regression theory we model a mapping between the two sensory modalities, improving the performance of artificial perceptive systems. We present promising results on a case study of grasp classification in a controlled setting and discuss future developments. © 2009 Springer Berlin Heidelberg.
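The cross-modal mapping described above can be illustrated with a minimal sketch. This is our assumption of the general idea, not the paper's exact method: a regularized least-squares (ridge) regression that maps visual feature vectors to motor feature vectors, so that a motor pattern can be predicted from vision alone. All dimensions and data here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 100 paired samples,
# 20-dim visual features, 5-dim motor features.
X_visual = rng.standard_normal((100, 20))
W_true = rng.standard_normal((20, 5))
Y_motor = X_visual @ W_true + 0.01 * rng.standard_normal((100, 5))

# Closed-form ridge solution: W = (X^T X + lam * I)^{-1} X^T Y
lam = 1e-3  # regularization strength
W = np.linalg.solve(
    X_visual.T @ X_visual + lam * np.eye(20),
    X_visual.T @ Y_motor,
)

# Predicted motor pattern for each visual input.
Y_pred = X_visual @ W
```

In a setup like this, the predicted motor features could then feed a downstream classifier (e.g. for grasp type), which is the kind of transfer from one modality to another the abstract refers to.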
Noceti, Nicoletta; Caputo, Barbara; Castellini, Claudio; Baldassarre, Luca; Barla, Annalisa; Rosasco, Lorenzo; Odone, Francesca; Sandini, Giulio. "Towards a theoretical framework for learning multi-modal patterns for embodied agents." In: Proceedings of the 15th International Conference on Image Analysis and Processing (ICIAP 2009), Vietri sul Mare, Italy, 8-11 September 2009, vol. 5716, pp. 239-248. [doi:10.1007/978-3-642-04146-4_27]