EmoSynth Real Time Emotion-Driven Sound Texture Synthesis via Brain-Computer Interface

Colafiglio, T. (first author); et al.
2024

Abstract

In electroacoustic music composition, particularly in sound synthesis, Deep Learning (DL) provides very effective solutions. However, these architectures are generally highly automated and rely on textual language for human interaction. To improve the relationship between composers and artificial intelligence systems, brain-computer interfaces (BCIs) offer an effective and direct alternative and have led to considerable improvements in this area. The proposed system employs emotion recognition from electroencephalogram (EEG) signals to control four Variational Autoencoders (VAEs) that generate new sound textures. A dataset was acquired with the MUSE2 headset to train four Machine Learning (ML) models capable of classifying human emotions according to Russell's circumplex model. The VAEs were trained to produce different sound variations from an audio dataset into which composers can integrate their own sounds. In addition, a graphical user interface (GUI) was developed to facilitate the real-time generation of sound textures, with the support of an external MIDI controller. The GUI continuously provides visual feedback on the detected emotions and on the activity of the left and right brain hemispheres.
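As a rough illustration of the control flow described in the abstract (not the authors' implementation), the sketch below shows how a valence/arousal estimate produced by an EEG emotion classifier could be mapped onto the four quadrants of Russell's circumplex model to select one of four pre-trained VAE decoders for sound-texture generation. The value ranges, the quadrant-to-VAE assignment, and the decoder interface are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): map valence/arousal to a
# quadrant of Russell's circumplex model and sample one of four VAE decoders.
import numpy as np

# Quadrant indices are an illustrative assumption, not the authors' layout.
QUADRANTS = {
    (True, True): 0,    # high valence, high arousal (e.g. excited)
    (False, True): 1,   # low valence,  high arousal (e.g. tense)
    (False, False): 2,  # low valence,  low arousal  (e.g. sad)
    (True, False): 3,   # high valence, low arousal  (e.g. calm)
}

def select_vae(valence: float, arousal: float) -> int:
    """Return the index (0-3) of the VAE for the detected emotion quadrant.

    `valence` and `arousal` are assumed to lie in [-1, 1].
    """
    return QUADRANTS[(valence >= 0.0, arousal >= 0.0)]

def generate_texture(decoder, latent_dim: int = 16) -> np.ndarray:
    """Sample a latent vector from the VAE prior N(0, I) and decode it.

    `decoder` is a placeholder for any callable mapping a latent vector to
    audio (or a spectrogram); the interface is hypothetical.
    """
    z = np.random.standard_normal(latent_dim)
    return decoder(z)

if __name__ == "__main__":
    # Dummy decoder standing in for a trained VAE: shapes noise with z.
    dummy_decoder = lambda z: np.tanh(np.outer(z, np.ones(512))).ravel()
    vae_index = select_vae(valence=0.4, arousal=-0.7)  # calm quadrant
    texture = generate_texture(dummy_decoder)
    print(f"Selected VAE #{vae_index}, generated {texture.size} samples")
```

In a real-time setting such as the one described, this selection step would be driven continuously by the classifier's output, with the GUI and MIDI controller modulating how the decoded textures are rendered.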
2024
32nd ACM Conference on User Modeling, Adaptation and Personalization, UMAP 2024
Artificial Intelligence; Brain-Machine Interface; Explainable AI; Neural Instrument
04 Conference proceedings publication::04b Conference paper in volume
EmoSynth Real Time Emotion-Driven Sound Texture Synthesis via Brain-Computer Interface / Colafiglio, T.; Lofu, D.; Sorino, P.; Lombardi, A.; Narducci, F.; Festa, F.; Di Noia, T. - (2024), pp. 616-621. (Paper presented at the 32nd ACM Conference on User Modeling, Adaptation and Personalization, UMAP 2024, held in Cagliari, Italy) [10.1145/3631700.3665196].
Files attached to this record

Colafiglio_EmoSynth_2024.pdf
Access: open access
Note: https://doi.org/10.1145/3631700.3665196
Type: publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 982.36 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1727066
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0