Adaptive room acoustic response simulation: a virtual 3D application / Costantini, G.; Casali, D.; Uncini, Aurelio. - Print. - (2003), pp. 667-676. (Paper presented at the 2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718), held in Toulouse, France, 17-19 Sept.) [10.1109/nnsp.2003.1318066].
Adaptive room acoustic response simulation: a virtual 3D application
UNCINI, Aurelio
2003
Abstract
In this paper we propose a method to simulate a 3D acoustic environment in which sound sources are placed at well-defined positions. Our method is oriented toward real-time applications, thanks to the low computational cost of the operations involved. The spatial position that the human brain assigns to a sound is influenced mainly by the differences between the sound signals that reach the listener's ears, which depend on the angle of the sound source with respect to the listener's head. The reverberation effect, on the other hand, depends on the type of environment. All these elements have to be simulated in order to give the illusion that a sound comes from a particular position in a particular environment. To obtain this result, we apply suitable sound processing, which can be separated into two main tasks: reverberation and spatialization. The first is mainly related to the environment itself: it depends on the shape of the environment and on the absorption coefficients of the walls. This is the most computationally intensive component if we want to reproduce it accurately, so we approximate it with an adaptive IIR filter. Spatialization makes the listener perceive the sound as coming from a particular direction. This task, carried out by using head-related transfer functions (HRTFs), has to be applied separately to each sound source.
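As a rough illustration of the two-stage processing chain described in the abstract, the sketch below filters a source signal with a low-order IIR filter standing in for the room response, then convolves the result with a pair of head-related impulse responses to place it at a direction. All signal names, filter coefficients, and impulse responses here are hypothetical placeholders; the paper's adaptive IIR design and measured HRTFs are not reproduced.

```python
# Minimal sketch of reverberation (IIR filter) followed by HRTF spatialization.
# Assumptions: mono source `src`, placeholder IIR coefficients `b`, `a`,
# and placeholder per-ear impulse responses `hrir_left`, `hrir_right`.
import numpy as np
from scipy.signal import lfilter, fftconvolve

fs = 44100                        # sampling rate in Hz
src = np.random.randn(fs)         # 1 s of placeholder source material

# Reverberation stage: the room response is approximated by a low-order
# IIR filter (arbitrary stand-in coefficients, not the adapted ones).
b = np.array([1.0, 0.0, 0.0])
a = np.array([1.0, -0.6, 0.2])
reverberant = lfilter(b, a, src)

# Spatialization stage: convolve the reverberant signal with the impulse
# response of each ear for the chosen direction (crude placeholder HRIRs
# that only mimic an interaural delay and level difference).
hrir_left = np.zeros(128);  hrir_left[0] = 1.0
hrir_right = np.zeros(128); hrir_right[4] = 0.8
left = fftconvolve(reverberant, hrir_left)[:len(src)]
right = fftconvolve(reverberant, hrir_right)[:len(src)]

binaural = np.stack([left, right], axis=1)   # stereo output for headphones
```

In a multi-source scene, the spatialization step would be repeated with a different HRIR pair for each source before summing the binaural outputs, consistent with the abstract's note that the HRTF processing is applied separately to each source.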