
Neural Musical Instruments through Brain-Computer Interface and Biofeedback / Colafiglio, T.; Lofu, D.; Sorino, P.; Lombardi, A.; Narducci, F.; Di Noia, T. - (2025), pp. 489-494. (33rd Conference on User Modeling, Adaptation and Personalization, UMAP 2025, New York, USA) [10.1145/3708319.3733644].

Neural Musical Instruments through Brain-Computer Interface and Biofeedback

Colafiglio T.: Conceptualization;
Narducci F.: Member of the Collaboration Group
2025

Abstract

In the electronic musical instrument scenario, the current paradigm of sound modification during live performance is predominantly based on external control mechanisms that adjust sound configurations predefined by the performer. However, this approach introduces latency when transitioning between sound configurations. To overcome this limitation, this study introduces a novel application of Brain-Computer Interface (BCI) technology as a control system for musical instruments during live performances. The proposed system classifies mental states of activation and relaxation with a Machine Learning (ML) system that achieves an average accuracy of 0.92. Using the Beta Protocol, the system allows dynamic modulation of sound according to the mental state of the performer. Finally, an explainability analysis was performed to clarify the impact of specific features during the prediction process.
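The paper itself does not include code on this page. As a purely illustrative sketch of the kind of binary activation-vs-relaxation classification the abstract describes, the snippet below trains a classifier on synthetic EEG-style band-power features; the feature design (alpha vs. beta power), the model choice, and the data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for EEG band-power features (alpha, beta).
# In a real BCI pipeline these would be extracted from headset signals;
# here they are sampled from two separable Gaussian clusters.
rng = np.random.default_rng(0)
n = 200
# "Relaxation": higher alpha power; "activation": higher beta power.
relax = np.column_stack([rng.normal(1.0, 0.2, n), rng.normal(0.4, 0.2, n)])
activ = np.column_stack([rng.normal(0.4, 0.2, n), rng.normal(1.0, 0.2, n)])
X = np.vstack([relax, activ])
y = np.array([0] * n + [1] * n)  # 0 = relaxation, 1 = activation

# Cross-validated accuracy of a binary mental-state classifier.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.2f}")
```

On synthetic, well-separated data a high accuracy is expected; the 0.92 reported in the abstract refers to the authors' real EEG data and their own pipeline.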
2025
33rd Conference on User Modeling, Adaptation and Personalization, UMAP 2025
Artificial Intelligence; Brain-Machine Interface; Explainable AI; Neural Instrument
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item
File: Colafiglio_Neural_Musical_2025.pdf
Access: open access
Note: https://dl.acm.org/doi/pdf/10.1145/3708319.3733644
Type: Post-print (version after peer review, accepted for publication)
License: Creative Commons
Size: 1.27 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1755073
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science (ISI): 1