The primary goal of the L3DAS23 Signal Processing Grand Challenge at ICASSP 2023 is to promote and support collaborative research on machine learning for 3D audio signal processing, with a specific emphasis on 3D speech enhancement and 3D Sound Event Localization and Detection in Extended Reality applications. As part of our latest competition, we provide a brand-new dataset, which maintains the same general characteristics as the L3DAS21 and L3DAS22 datasets but contains first-order Ambisonics recordings from multiple reverberant simulated environments. Moreover, we begin exploring an audio-visual scenario by providing images of these environments as perceived from the different microphone positions and orientations. We also propose updated baseline models for both tasks, which can now accept audio-image pairs as input, along with a supporting API to replicate our results. Finally, we present the results of the participants. Further details about the challenge are available at www.l3das.com/icassp2023.

Overview of the L3DAS23 challenge on audio-visual extended reality / Marinoni, Christian; Gramaccioni, Riccardo F.; Chen, Changan; Uncini, Aurelio; Comminiello, Danilo. - (2023). (Paper presented at the 48th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2023, held in Rhodes Island, Greece) [10.1109/icassp49357.2023.10433925].

Overview of the L3DAS23 challenge on audio-visual extended reality

Marinoni, Christian; Gramaccioni, Riccardo F.; Chen, Changan; Uncini, Aurelio; Comminiello, Danilo
2023

48th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2023
3D Audio; ambisonics; extended reality; sound event localization and detection; speech enhancement
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this record

Marinoni_Overview_2023.pdf
  Type: Post-print (version after peer review, accepted for publication)
  License: All rights reserved
  Size: 3.22 MB
  Format: Adobe PDF
  Access: restricted (archive administrators only); contact the author

Marinoni_Frontespizio_Overview_2023.pdf
  Note: Title page
  Type: Other attached material
  License: All rights reserved
  Size: 3.64 MB
  Format: Adobe PDF
  Access: restricted (archive administrators only); contact the author

Marinoni_Indice_Overview_2023.pdf
  Note: Table of contents
  Type: Other attached material
  License: All rights reserved
  Size: 136.73 kB
  Format: Adobe PDF
  Access: restricted (archive administrators only); contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1714441