Adaptive token selection for scalable point cloud transformers / Baiocchi, Alessandro; Spinelli, Indro; Nicolosi, Alessandro; Scardapane, Simone. - In: NEURAL NETWORKS. - ISSN 0893-6080. - 188:(2025). [10.1016/j.neunet.2025.107477]

Adaptive token selection for scalable point cloud transformers

Alessandro Baiocchi; Indro Spinelli; Simone Scardapane
2025

Abstract

The recent surge in 3D data acquisition has spurred the development of geometric deep learning models for point cloud processing, boosted by the remarkable success of transformers in natural language processing. While point cloud transformers (PTs) have achieved impressive results recently, their quadratic scaling with respect to the point cloud size poses a significant scalability challenge for real-world applications. To address this issue, we propose the Adaptive Point Cloud Transformer (AdaPT), a standard PT model augmented by an adaptive token selection mechanism. AdaPT dynamically reduces the number of tokens during inference, enabling efficient processing of large point clouds. Furthermore, we introduce a budget mechanism to flexibly adjust the computational cost of the model at inference time without the need for retraining or fine-tuning separate models. Our extensive experimental evaluation on point cloud classification tasks demonstrates that AdaPT significantly reduces computational complexity while maintaining competitive accuracy compared to standard PTs. The code for AdaPT is publicly available at https://github.com/ispamm/adaPT.
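The abstract describes selecting a subset of tokens via a Gumbel-Softmax mechanism, with an inference-time budget controlling how many tokens survive. As a rough illustration of that idea (not the paper's actual implementation — all function names, shapes, and the top-k budget rule below are illustrative assumptions), a per-token keep/drop decision can be sampled with Gumbel-Softmax and a fixed budget enforced by retaining the highest-scoring tokens:

```python
import numpy as np

def gumbel_softmax_keep_mask(logits, tau=1.0, rng=None):
    """Sample soft keep/drop probabilities per token via Gumbel-Softmax.

    logits: (N, 2) array of per-token [drop, keep] scores.
    Returns an (N, 2) array of soft probabilities; during training these
    stay differentiable, while at inference one would take the argmax.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise makes the argmax a sample from softmax(logits).
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))  # stable softmax
    return y / y.sum(axis=-1, keepdims=True)

def select_tokens(tokens, keep_prob, budget):
    """Enforce a compute budget: keep the `budget` most likely tokens,
    preserving their original order."""
    idx = np.argsort(-keep_prob)[:budget]
    return tokens[np.sort(idx)]
```

Lowering `tau` sharpens the decisions toward hard keep/drop choices, while the `budget` argument mirrors the paper's claim that the computational cost can be adjusted at inference time without retraining.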
Geometric deep learning; Gumbel-Softmax; Point clouds; Token selection; Transformer
01 Journal publication::01a Journal article
Files attached to this item
Baiocchi_Adaptiv-token_2025.pdf

Open access

Note: https://doi.org/10.1016/j.neunet.2025.107477
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 1.5 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1753810
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 0