
An Effective Loss Function for Generating 3D Models from Single 2D Image Without Rendering

Lio P.
2021

Abstract

Differentiable rendering is a very successful technique for single-view 3D reconstruction. Current renderers use pixel-wise losses between a rendered image of the reconstructed 3D object and ground-truth images from matched viewpoints to optimise the parameters of the 3D shape. These models require a rendering step, along with visibility handling and evaluation of the shading model. The main goal of this paper is to demonstrate that we can avoid these steps and still obtain reconstruction results that equal or surpass existing category-specific state-of-the-art methods. First, we use the same CNN architecture for point-cloud shape and pose prediction as Insafutdinov & Dosovitskiy. Second, we propose a novel, effective loss function that evaluates how well the projections of the reconstructed 3D point cloud cover the ground-truth object's silhouette. We then use Poisson Surface Reconstruction to transform the reconstructed point cloud into a 3D mesh. Finally, we perform GAN-based texture mapping on the resulting 3D mesh, producing a textured 3D mesh from a single 2D image. We evaluate our method on several datasets (including ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming other supervised and unsupervised methods and 3D representations in terms of accuracy and training time.
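The paper's exact loss is not reproduced in this record. Purely as an illustration of the silhouette-coverage idea the abstract describes, the following is a minimal, non-differentiable sketch, assuming a pinhole camera matrix `K`, camera-space 3D points, and a binary ground-truth silhouette; all function names here are hypothetical, and the actual method uses a soft, differentiable formulation.

```python
import numpy as np

def project_points(points, K):
    """Pinhole projection of (N, 3) camera-space points to pixel coordinates."""
    uv = points @ K.T                    # (N, 3) homogeneous image coords
    return uv[:, :2] / uv[:, 2:3]        # divide by depth

def silhouette_coverage_loss(points, K, gt_silhouette):
    """Fraction of ground-truth silhouette pixels NOT covered by any
    projected point (0.0 = the projection covers the silhouette perfectly)."""
    h, w = gt_silhouette.shape
    uv = np.round(project_points(points, K)).astype(int)
    # Keep only points that land inside the image bounds.
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    covered = np.zeros((h, w), dtype=bool)
    covered[uv[inside, 1], uv[inside, 0]] = True
    gt = gt_silhouette.astype(bool)
    return 1.0 - (covered & gt).sum() / max(gt.sum(), 1)
```

In this toy version each point hard-covers a single pixel; a trainable variant would splat each projected point as a smooth kernel so the coverage term is differentiable with respect to the point positions and pose.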
2021
17th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2021
3D reconstruction; Single-view 3D reconstruction
04 Publication in conference proceedings::04b Conference paper in volume
An Effective Loss Function for Generating 3D Models from Single 2D Image Without Rendering / Zubic, N.; Lio, P. - 627:(2021), pp. 309-322. (Paper presented at the 17th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2021, held virtually online) [10.1007/978-3-030-79150-6_25].
Files attached to this record

Zubic_preprint_An-Effective_2021.pdf
Access: open access
Note: https://link.springer.com/chapter/10.1007/978-3-030-79150-6_25
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: Creative Commons
Size: 3.22 MB
Format: Adobe PDF

Zubic_An-Effective_2021.pdf
Access: archive administrators only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 5.37 MB
Format: Adobe PDF
Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1720241
Citations
  • PMC: n/a
  • Scopus: 10
  • Web of Science: 9