Adversarially-Guided 3D Shape Deformation via Differentiable Rendering and 2D Supervision

Gevasio, A.; Napoli, C.; Nieszporek, K. In: Proceedings of the 11th Sapienza Yearly Symposium of Technology, Engineering and Mathematics (SYSTEM 2025), Rome, Italy. CEUR Workshop Proceedings, vol. 3992, pp. 67-74 (2025).

Napoli, C.: penultimate author; contribution: supervision.
Year: 2025
Abstract
Recovering 3D geometry from 2D observations is a fundamental challenge in computer vision, with applications in animation, virtual reality, and robotics. Recent advances in differentiable rendering have enabled gradient-based optimization of 3D shapes using only image supervision. In this work, we propose a novel adversarial framework that enhances 3D mesh deformation by integrating a differentiable renderer into a Generative Adversarial Network (GAN). The generator deforms an initial mesh and optimizes textures to match 2D supervision from target images, while the discriminator, featuring dense connections and self-attention, learns to distinguish between real and synthesized renderings. Our method improves upon baseline differentiable renderers both quantitatively and qualitatively, achieving lower Chamfer distance and higher Intersection over Union (IoU) across a variety of object categories. The results demonstrate that adversarial training effectively guides mesh deformation, producing reconstructions that are more accurate and visually consistent with target images.
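
The paper's implementation is not reproduced on this record page. As a rough illustration of the kind of adversarial loop the abstract describes, the following is a minimal PyTorch-style sketch; `deform_net`, `disc`, and `render` are hypothetical stand-ins for the paper's generator, its dense/self-attention discriminator, and a differentiable renderer, not the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins (assumptions, not the paper's code):
# `deform_net` predicts per-vertex offsets, `disc` maps an image to a logit,
# and `render` is any differentiable renderer (verts, faces, textures) -> image.
bce = nn.BCEWithLogitsLoss()

def train_step(deform_net, disc, render, verts0, faces, textures,
               target_img, g_opt, d_opt, lam=0.1):
    # Generator: deform the template mesh and render it differentiably.
    verts = verts0 + deform_net(verts0)       # predicted vertex offsets
    fake = render(verts, faces, textures)     # gradients flow back to the mesh

    # Discriminator update: real target image vs. synthesized rendering.
    d_opt.zero_grad()
    real_logit = disc(target_img)
    fake_logit = disc(fake.detach())          # detach: do not update generator
    d_loss = bce(real_logit, torch.ones_like(real_logit)) + \
             bce(fake_logit, torch.zeros_like(fake_logit))
    d_loss.backward()
    d_opt.step()

    # Generator update: 2D reconstruction loss plus adversarial term.
    g_opt.zero_grad()
    adv_logit = disc(fake)
    g_loss = (fake - target_img).abs().mean() + \
             lam * bce(adv_logit, torch.ones_like(adv_logit))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

The weighting `lam` between the reconstruction and adversarial terms is an illustrative choice; the paper does not state its loss weights here.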
| File | Notes | Size | Format |
|---|---|---|---|
| Gevasio_Adversarially-Guided_2025.pdf | Open access. Publisher's version (publisher layout). License: Creative Commons. Available at https://ceur-ws.org/Vol-3992/p08.pdf | 1.67 MB | Adobe PDF |
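
For reference, the two evaluation metrics named in the abstract, Chamfer distance between sampled surface points and IoU between silhouettes, are standard and could be computed as in this pure-PyTorch sketch (function names and tensor shapes are illustrative assumptions, not taken from the paper):

```python
import torch

def chamfer_distance(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds p1 (N, 3) and p2 (M, 3),
    e.g. points sampled from the predicted and ground-truth mesh surfaces."""
    d = torch.cdist(p1, p2) ** 2          # pairwise squared distances, (N, M)
    # Nearest-neighbour distance in each direction, averaged.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def silhouette_iou(pred: torch.Tensor, target: torch.Tensor,
                   thresh: float = 0.5) -> torch.Tensor:
    """IoU between two (H, W) soft masks, binarized at `thresh`."""
    a, b = pred > thresh, target > thresh
    inter = (a & b).float().sum()
    union = (a | b).float().sum()
    return inter / union.clamp(min=1.0)   # clamp guards against empty masks
```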