Geometric deep learning on graphs and manifolds using mixture model CNNs

Rodolà, Emanuele
2017

Abstract

Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most deep learning research has so far focused on 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been increasing interest in geometric deep learning, which attempts to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with applications in domains such as network analysis, computational social science, and computer graphics. In this paper, we propose a unified framework that generalizes CNN architectures to non-Euclidean domains (graphs and manifolds) and learns local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered particular instances of our framework. We test the proposed method on standard tasks from the realms of image, graph, and 3D shape analysis and show that it consistently outperforms previous approaches.
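For readers unfamiliar with the method named in the title, the mixture-model CNN (MoNet) framework defines convolution-like filters as a set of learnable Gaussian kernels applied to local pseudo-coordinates between a node and its neighbours. The following is a minimal illustrative sketch of such a layer in PyTorch, not the authors' implementation; the class name MoNetLayer, the diagonal-covariance parameterization, and the single per-node linear mixing map are simplifying assumptions made here for brevity.

import torch
import torch.nn as nn


class MoNetLayer(nn.Module):
    """Illustrative mixture-model CNN layer (MoNet-style sketch).

    K learnable Gaussian kernels w_k(u) weight each node's neighbours
    according to edge pseudo-coordinates u; the per-kernel aggregates
    are then mixed by a single linear map.
    """

    def __init__(self, in_dim, out_dim, n_kernels, coord_dim):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_kernels, coord_dim))        # kernel means
        self.inv_sigma = nn.Parameter(torch.ones(n_kernels, coord_dim))  # diagonal 1/sigma
        self.lin = nn.Linear(in_dim * n_kernels, out_dim)
        self.n_kernels = n_kernels

    def forward(self, x, edge_index, pseudo):
        # x: [N, in_dim] node features; edge_index: [2, E] (source, target);
        # pseudo: [E, coord_dim] pseudo-coordinates, one per edge.
        src, dst = edge_index
        # Gaussian weights w_k(u) = exp(-0.5 * ||(u - mu_k) / sigma_k||^2), shape [E, K]
        diff = pseudo.unsqueeze(1) - self.mu.unsqueeze(0)
        w = torch.exp(-0.5 * ((diff * self.inv_sigma) ** 2).sum(dim=-1))
        # Weight each source feature by each kernel: [E, K, in_dim]
        msgs = w.unsqueeze(-1) * x[src].unsqueeze(1)
        # Sum the weighted messages into their target nodes: [N, K, in_dim]
        agg = torch.zeros(x.size(0), self.n_kernels, x.size(1),
                          dtype=x.dtype, device=x.device)
        agg.index_add_(0, dst, msgs)
        return self.lin(agg.flatten(start_dim=1))  # [N, out_dim]


if __name__ == "__main__":
    # Tiny smoke test on a 3-node path graph with 2-D pseudo-coordinates.
    x = torch.randn(3, 4)
    edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
    pseudo = torch.rand(4, 2)
    layer = MoNetLayer(in_dim=4, out_dim=8, n_kernels=5, coord_dim=2)
    print(layer(x, edge_index, pseudo).shape)  # torch.Size([3, 8])

The diagonal covariance keeps the sketch compact; the paper's general formulation admits full covariance matrices, and the choice of pseudo-coordinates (e.g., functions of node degrees on graphs, or local polar coordinates on meshes) is what specializes the framework to a particular non-Euclidean domain.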
2017
30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Computer graphics; Computer vision; Neural networks
04 Publication in conference proceedings::04b Conference paper in volume
Geometric deep learning on graphs and manifolds using mixture model CNNs / Monti, Federico; Boscaini, Davide; Masci, Jonathan; Rodolà, Emanuele; Svoboda, Jan; Bronstein, Michael M. - 2017-January (2017), pp. 5425-5434. (Paper presented at the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, held in Honolulu, USA) [10.1109/CVPR.2017.576].
Files attached to this product

File: Rodola_Geometric_2017.pdf
Access: archive administrators only (contact the author to request a copy)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 1.34 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1229059
Citations
  • PubMed Central: ND
  • Scopus: 1052
  • Web of Science: 913