Virtual Sounds
If there is no real-world counterpart, it is even more challenging to create a virtual auditory event that exactly anticipates the sound that will be present when the space is actually built.
Another important feature of VR is that the virtual environment is presented in 3-D and the user is embedded in the scene, able to interact with it, move through it, or even change the environment on the fly. In contrast, sound engineering for music and films does not provide interaction. It is this interactivity that sets VR apart.
Accordingly, there are requirements concerning processing speed for the digital signal-processing algorithms used in VR. One (for sufficient audio-frequency bandwidth) is that the sampling rate must be greater than the 40 kHz needed to reproduce the full frequency range of human hearing.
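To make the bandwidth requirement concrete, the Nyquist criterion says the sampling rate must exceed twice the highest frequency to be reproduced. A minimal sketch in Python (the sampling rates tested are common audio values, not figures from the article):

```python
# Nyquist criterion: the sampling rate must exceed twice the
# highest frequency to be reproduced (here, the ~20 kHz upper
# limit of human hearing).
F_MAX_HEARING_HZ = 20_000                 # upper limit of human hearing
NYQUIST_RATE_HZ = 2 * F_MAX_HEARING_HZ    # 40 kHz minimum

for fs in (32_000, 44_100, 48_000):       # common audio sampling rates
    ok = fs > NYQUIST_RATE_HZ
    print(f"fs = {fs / 1000:.1f} kHz -> full audio bandwidth: {ok}")
```

As expected, the standard 44.1- and 48-kHz rates clear the 40-kHz bound, whereas 32 kHz does not.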
The second requirement (for interaction) is quick adaptation to scene changes that may result from (1) movement of the source and/or receiver, (2) changes in the orientation of one or both, and/or (3) changes in the virtual environment, such as a door being opened or closed. Thus, the update rate of the VR environment for simulating a new direction, a change of distance to the sound source, or a new room-acoustic condition must be fast enough to provide consistency between the auditory and visual impressions.
Imagine that your VR experience is that you are entering a building from the street. You would expect the VR system to react in such a way that you would have the sense of being indoors the moment you stepped through the virtual doorway rather than several seconds later. The threshold for perceiving delays in an auditory presentation depends strongly on the type of sound stimulus. Generally, update rates of greater than 20 Hz (or latencies of less than 50 ms) are considered sufficient for obtaining a smooth and realistic sound image.
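The update-rate requirement can likewise be checked with simple arithmetic. The sketch below, again in Python, converts the block size of a hypothetical block-based renderer into latency and update rate and compares them against the 50-ms/20-Hz thresholds quoted above; the sampling rate and block sizes are illustrative assumptions:

```python
# Relate a block-based renderer's block size to its update rate
# and latency; compare against the thresholds from the text:
# > 20 Hz update rate, i.e., < 50 ms latency. Only one block of
# algorithmic delay is assumed (real systems add buffering on top).
SAMPLING_RATE_HZ = 48_000   # assumed renderer sampling rate
MAX_LATENCY_S = 0.050       # 50 ms threshold from the text

for block_size in (256, 512, 1024, 4096):  # samples per processing block
    latency_s = block_size / SAMPLING_RATE_HZ
    update_rate_hz = 1.0 / latency_s
    smooth = latency_s < MAX_LATENCY_S
    print(f"block {block_size:5d}: {1000 * latency_s:6.1f} ms latency, "
          f"{update_rate_hz:7.1f} Hz update rate -> smooth: {smooth}")
```

At 48 kHz, blocks of up to roughly 2,400 samples stay under the 50-ms threshold; the 4,096-sample block (about 85 ms) would make the auditory scene lag visibly behind the visual one.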
So, to have a proper VR environment, we need sound sources and their signals, quick processing of those signals, and 3-D audio. What are the steps for creating sound in virtual environments?
Workflow in Virtual Acoustics
Source Characterization
Virtual acoustics (VA), like music production, starts by recording or synthesizing sound signals (see the article in this issue of Acoustics Today by Hawley et al. on new approaches to music synthesis). For VA, this step must be designed to enable reproduction of the recorded sound in a 3-D space under flexible settings. Examples of musical sounds captured in 3-D are multichannel recordings and simulations (e.g., Rindel et al., 2004). Behler et al. (2012) measured musical instrument directivities that were later optimized and published by Shabtai et al. (2017) for open access (e.g., Figure 2). Bellows and Leishman (2019) studied phoneme-dependent directivities of speech sound, whereas Ackermann et al. (2019) showed the importance of dynamic effects of source movements in musical expression. Thus, source characterization of musical instruments and the human voice is far more than a simple recording. Many factors are required, including the spatial radiation pattern stored in appropriate data formats (e.g., a spherical harmonic modal decomposition), equalization filters between the nominal null direction and other directions, and the sound signal (audio stream) in the null direction sampled in digital-audio quality.
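As an illustration of the spherical harmonic storage format mentioned above, the sketch below fits spherical harmonic coefficients to a directivity sampled at discrete microphone directions by least squares. The 32-direction grid and the cardioid-like toy pattern are made-up placeholders, not data from the cited studies:

```python
import numpy as np
from scipy.special import sph_harm

def sh_matrix(order, azimuth, colatitude):
    """Complex spherical harmonics up to 'order', sampled at the
    given directions; one row per direction, one column per (n, m)."""
    cols = [sph_harm(m, n, azimuth, colatitude)
            for n in range(order + 1)
            for m in range(-n, n + 1)]
    return np.column_stack(cols)

# Hypothetical measurement grid: 32 directions, as from a
# surrounding microphone sphere (cf. Figure 2).
rng = np.random.default_rng(0)
az = rng.uniform(0, 2 * np.pi, 32)        # azimuth angles
col = np.arccos(rng.uniform(-1, 1, 32))   # colatitude angles

# Hypothetical measured directivity at one frequency (one value
# per direction); here a cardioid-like toy pattern.
p = 1.0 + np.cos(col)

# Least-squares fit of the coefficients c such that p ~ Y @ c;
# c is the compact representation stored in the directivity format.
Y = sh_matrix(order=3, azimuth=az, colatitude=col)
c, *_ = np.linalg.lstsq(Y, p, rcond=None)
print("SH coefficients (order 3):", np.round(c, 3))
```

The resulting coefficient vector, computed per frequency band, is what a VA renderer would interpolate when the virtual instrument rotates relative to the listener.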
If we apply the same concept to noise sources such as moving cars, trains, or aircraft, audio signals and directivities are likewise needed. It is obvious that one cannot record the directional sound signals of a flying jet airplane with a surrounding microphone array. But theoretical, computational, or experimental methods can be applied to determine at least an approximated power spectral envelope for the stochastic noise components such as jet noise, tire noise, and wind noise. In this case, excellent and very plausible results can be achieved.
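A common way to turn such an approximated power spectral envelope into an audio stream is to shape white noise in the frequency domain. The sketch below does this with NumPy; the roll-off envelope is a made-up placeholder, not a measured jet-, tire-, or wind-noise spectrum:

```python
import numpy as np

def synth_noise_from_envelope(psd_envelope, n_samples, fs, rng):
    """Shape white Gaussian noise so its power spectrum follows
    the given one-sided envelope (a function of frequency in Hz)."""
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    # Amplitude shaping: multiply by the square root of the PSD.
    spectrum *= np.sqrt(psd_envelope(freqs))
    return np.fft.irfft(spectrum, n=n_samples)

# Placeholder envelope: broadband noise rolling off above 1 kHz
# (a stand-in for, e.g., a fitted jet- or wind-noise spectrum).
envelope = lambda f: 1.0 / (1.0 + (f / 1000.0) ** 2)

rng = np.random.default_rng(0)
noise = synth_noise_from_envelope(envelope, n_samples=48_000,
                                  fs=48_000, rng=rng)
print(noise.shape, noise.std())
```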
Figure 2. Thirty-two-channel surrounding microphone sphere for simultaneous music recording and instrument directivity measurement.