derived from measurements or physical modeling that may result in noise spectra in frequency bands. White noise filtered with these spectral features sounds quite similar to, for example, the original jet noise or wind noise. For machines or engines with rotating components or for noise from rolling wheels, a pure-tone synthesis can be added that is controlled by using revolutions-per-minute (rpm) data. This approach was applied by Pieren et al. (2017) for a train simulation and is covered in an excellent overview by Rizzi (2016) for aircraft noise simulation.
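The two ingredients mentioned above, spectrally shaped broadband noise plus rpm-locked tones, can be sketched in a few lines. This is a minimal illustration, not the method of the cited papers; the Gaussian target spectrum, the rpm value, and the mixing levels are all hypothetical choices.

```python
import numpy as np

fs = 44100                      # sample rate in Hz
n = fs                          # one second of audio
rng = np.random.default_rng(0)

# 1) Broadband part: shape white noise with a target magnitude spectrum.
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
# Hypothetical target: noise energy concentrated around 1 kHz (wind-like hiss).
target_mag = np.exp(-((freqs - 1000.0) / 500.0) ** 2)
shaped = np.fft.irfft(np.fft.rfft(white) * target_mag, n)

# 2) Tonal part: a pure tone locked to the rotation rate (rpm data).
rpm = 3000.0
f0 = rpm / 60.0                 # shaft frequency in Hz (3000 rpm -> 50 Hz)
t = np.arange(n) / fs
tone = 0.1 * np.sin(2 * np.pi * f0 * t)

# Mix: normalized shaped noise plus the rotational tone.
signal = shaped / np.max(np.abs(shaped)) + tone
```

In a full synthesizer, the tone frequency and the noise spectrum would be updated continuously from measured rpm data rather than held constant as here.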
Coming back to musical sounds, the next question is how the musical instrument excites the surrounding space to create a sound field for an immersed listener.
Sound Propagation
The primary sound waves radiated from the source propagate through the environment to the listener. In the context of VA, the relationship between the excitation at the source and what is received at the listener is described by an impulse response or, in terms of signal processing, a “filter” transforming sound signals from the source to the listener. It is thus important to treat sound propagation models as filters, always keeping in mind the audio time and frequency resolution of the source signals and the processing time (latency).
Now, let’s return to the marching band in the Mardi Gras parade. Up to this point, we have all the instrument sounds properly recorded in the free field in 3-D (for the whole band, this requires careful consideration of the players’ interaction and synchronization of the instruments) and we have source directivities for the instruments. In the virtual environment, the music sounds will propagate from the source positions as spherical waves with directional weighting into the street canyon, where reflections and scattering will occur until the waves sum up at the listener’s position. The impulse response containing all sound paths connects the source and the receiver. In digital signal processing, the effect of the sound propagation on the source signal is introduced by filtering the source signals with the impulse response.
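Applying the impulse response to a source signal is a convolution. The sketch below uses a toy impulse response (a direct sound plus two reflections at hypothetical delays and amplitudes) standing in for a simulated street-canyon response; the dry signal is placeholder noise rather than a real instrument recording.

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(1)

# Anechoic (free-field) source recording; here, stand-in white noise.
dry = rng.standard_normal(fs)

# Toy impulse response: direct sound plus two delayed, attenuated reflections.
ir = np.zeros(int(0.1 * fs))
ir[0] = 1.0                          # direct path
ir[int(0.023 * fs)] = 0.5            # reflection arriving after 23 ms
ir[int(0.041 * fs)] = 0.3            # reflection arriving after 41 ms

# The listener's signal is the convolution of source signal and impulse response.
wet = np.convolve(dry, ir)
```

Real-time VA systems replace this offline convolution with block-based, low-latency variants (e.g., partitioned convolution in the frequency domain), but the underlying operation is the same.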
Of course, at Mardi Gras, both sources and listeners are likely to move, which violates the time invariance and linearity that are prerequisites for simply filtering the source sound. Slow time variances, however, are usually acceptable if stepwise time-invariant filtering with adaptive filters is applied. “Slow” movement simply means slow compared with the speed of sound. With faster moving sources, such as road or rail vehicles or aircraft, we have to consider the Doppler effect, which causes a frequency shift depending on the speed of the source relative to the medium (normally air). The propagation times can be adapted to the relative movement along the trajectories between the source and the receiver by nonuniform resampling to introduce the Doppler shift (Wefers and Vorländer, 2018).
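The nonuniform-resampling idea can be illustrated with a straight-line flyby: the received signal is the source signal evaluated at the retarded (emission) times, which drift as the propagation delay shrinks. This is a simplified sketch under assumed geometry (source approaching at 20 m/s from 100 m away), not the implementation of the cited paper.

```python
import numpy as np

fs = 44100
c = 343.0                        # speed of sound in air, m/s
v = 20.0                         # source speed toward the listener, m/s
f_src = 1000.0                   # emitted tone frequency, Hz
t = np.arange(fs) / fs           # 1 s of time at the listener

# The retarded (emission) time satisfies the implicit equation
#   t_emit = t - distance(t_emit) / c,  with distance(t_e) = 100 - v * t_e.
# A few fixed-point iterations converge quickly because v << c.
t_emit = t.copy()
for _ in range(5):
    t_emit = t - (100.0 - v * t_emit) / c

# Nonuniform resampling: evaluate the source signal at the retarded times.
received = np.sin(2 * np.pi * f_src * t_emit)

# The received frequency matches the classical Doppler formula f * c / (c - v).
f_obs = f_src * c / (c - v)      # about 1062 Hz for an approaching source
```

For arbitrary recorded signals, the evaluation at `t_emit` is done by interpolating between stored samples instead of evaluating a closed-form sine.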
Thus, the acoustic environment model must include the relevant acoustic features such as reflection, transmission, diffraction, refraction, attenuation, wind speed, and temperature profiles in the atmosphere. In indoor spaces, the primary issues are reflection, transmission, and diffraction. However, for outdoor environments, all effects listed above may be relevant. Well-established models for all of these effects have existed for decades, so what’s the challenge in VA? It is real-time performance for audio signal processing!
Computationally expensive wave models and precomputed filters in look-up tables may solve the task of instantaneous adaptation to scene changes, but a more elegant solution is “real-time simulation” during use of the VA system. As soon as the scene changes, the simulation result is updated within the above-mentioned maximum latency of 50 ms. In the Mardi Gras scene, the band moves, and the listener can move as well and/or turn their head as they look from one instrument to another.
For impulse-response calculation in real time, at present it is only possible to use approximations of “geometrical acoustics” (Savioja and Svensson, 2015). In geometrical acoustics, sound paths are constructed by connecting the source point and the receiver point with a straight line. Curved paths are also possible, as in the case of refraction in the atmosphere or in other layered media. For sound propagation outside the direct line of sight, reflection, scattering, and diffraction may lead to physically consistent paths. In this case, the computer must find those paths by checking their validity. This is as if one tries to hit a billiard ball by bouncing another ball off a side wall (“cushion”).
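One classic way to construct and validate such a reflected path is the image-source method: mirror the source across the reflecting wall, connect the mirror image to the receiver with a straight line, and read off the specular reflection point where that line crosses the wall. The 2-D sketch below uses made-up source and receiver positions and a single wall along y = 0.

```python
import numpy as np

# First-order image-source construction in 2-D with a wall along y = 0.
src = np.array([1.0, 2.0])       # source position (hypothetical)
rcv = np.array([4.0, 1.0])       # receiver position (hypothetical)

# Mirror the source across the wall; the straight line from the image
# to the receiver crosses the wall exactly at the specular reflection point.
img = np.array([src[0], -src[1]])

# Intersection of the image-receiver line with the wall y = 0.
s = img[1] / (img[1] - rcv[1])   # interpolation parameter along the line
hit = img + s * (rcv - img)      # specular reflection point on the wall

# The reflected path length equals the image-to-receiver distance.
path_len = np.linalg.norm(img - rcv)
```

A validity check in a real scene would additionally verify that the reflection point lies on the finite wall surface and that no obstacle blocks either leg of the path.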
Striking balls in various directions and detecting which ones hit the target is called “ray tracing,” a method used in both optics and acoustics. Ray tracing can also handle scattering when the sound interacts with bumpy walls. This means that the ball is “reflected” into odd directions, inconsistent with the specular reflection law (angle of incidence equals angle of reflection), which only holds for smooth walls. In ray-tracing
Spring 2020 | Acoustics Today | 49