algorithms, this is accounted for by distributing the energy according to the physical properties of the boundary, which are the absorption coefficient (α) and the scattering coefficient (s). Needless to say, this also works if the billiard ball hits several walls sequentially.
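To make this bookkeeping concrete, the snippet below sketches what happens to a single ray at a wall hit. It is a minimal illustration and not part of any particular ray-tracing package; the function name and the random draw deciding between specular and scattered reflection are assumptions made for the example.

```python
import random

def reflect_ray_energy(energy, alpha, s):
    """Attenuate a ray's energy at a wall hit and decide whether the
    reflection is treated as specular or scattered (illustrative only)."""
    # The boundary absorbs the fraction alpha of the incident energy.
    remaining = energy * (1.0 - alpha)
    # Of the remaining energy, the fraction s is redirected diffusely;
    # here a random draw decides which branch this particular ray takes.
    is_scattered = random.random() < s
    return remaining, is_scattered

# Example: a ray with unit energy hits a wall with alpha = 0.3 and s = 0.2.
e, scattered = reflect_ray_energy(1.0, alpha=0.3, s=0.2)
print(e, "scattered" if scattered else "specular")
```

In a full ray tracer, this step is repeated at every wall hit until the ray's energy drops below a threshold or the ray reaches a receiver.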
The sound path can also be constructed from knowledge of the wall position by applying the “image source method.” Here, the exact point of reflection of the ball is marked with the aid of an image source so that this position can be perfectly targeted. This method, however, excludes scattering phenomena. This is a pity because it is known that acoustic scattering is a very strong effect that mostly dominates the sound propagation after a few consecutive wall hits. This is why the image source method has very limited relevance for accurate room-acoustic modeling.
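The geometric core of the image source method is simply mirroring the source position across the wall plane; the straight line from the image source to the receiver then crosses the wall exactly at the specular reflection point. The sketch below illustrates this for a single first-order reflection; the function name and argument layout are chosen for illustration only.

```python
import numpy as np

def image_source(source, wall_point, wall_normal):
    """Mirror a point source across a plane wall to obtain its first-order
    image source (illustrative sketch, not a complete image source model)."""
    source = np.asarray(source, dtype=float)
    n = np.asarray(wall_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Signed distance from the source to the wall plane.
    d = np.dot(source - np.asarray(wall_point, dtype=float), n)
    # Reflect the source through the plane.
    return source - 2.0 * d * n

# Example: source at (1, 2, 1.5) m, wall in the plane x = 0 with normal (1, 0, 0).
print(image_source([1.0, 2.0, 1.5], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
# -> [-1.   2.   1.5]
```

Higher-order reflections follow by mirroring image sources across further walls, after which each candidate path must be checked for visibility.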
The methods used in current practice are hybrid models of image source and ray-tracing algorithms to account for specular reflections and surface scattering, respectively. Good diffraction models are also available, such as the uniform theory of diffraction (UTD) and the exact wedge solution by Biot and Tolstoy (1957) as implemented by Torres et al. (2001). Geometrical acoustics models can thus deliver filter updates in real time (Savioja and Svensson, 2015).
A larger problem, however, is the uncertainty of the input data describing the environment. Boundary impedances of indoor and outdoor spaces, atmospheric conditions, fluctuations of wind, and turbulence are usually larger influences than numerical prediction errors. The main question that remains is whether these uncertainties are audible. Vorländer (2013) showed that uncertainties of absorption coefficients in room acoustics are above the just-noticeable differences (JNDs) of classical room-acoustic perceptual quantities. For decades, research has tried, without sufficient success to date, to reduce the measurement uncertainty of sound absorption coefficients. We have good sound propagation models, but there is a need for more research on acoustic material models and test methods!
Finally, with an appropriate propagation filter, the primary source signal can be converted into a receiver-related signal because it is shaped, delayed, and duplicated by reflections during sound propagation in the environment. Now, we just need to present the virtual acoustic scene properly to a human observer.
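As a rough illustration of what such a propagation filter does, the sketch below collects a few reflection paths as delay-amplitude pairs, builds an impulse response from them, and convolves it with the dry source signal. The data structure and function name are invented for this example; a real auralization filter would additionally include frequency-dependent attenuation and directional encoding.

```python
import numpy as np

def render_receiver_signal(source_signal, reflections, fs=48000):
    """Build a simple propagation filter from delay/amplitude pairs and
    apply it to the dry source signal (illustrative sketch only)."""
    # Impulse response long enough to hold the latest arrival.
    max_delay = max(delay for delay, _ in reflections)
    ir = np.zeros(int(round(max_delay * fs)) + 1)
    for delay, amplitude in reflections:
        # Each propagation path delays, attenuates, and duplicates the signal.
        ir[int(round(delay * fs))] += amplitude
    # The receiver-related signal is the convolution with this filter.
    return np.convolve(source_signal, ir)

# Example: direct sound plus two reflections (delays in seconds, linear gains).
dry = np.random.randn(48000)                        # 1 s of a test signal
paths = [(0.010, 1.0), (0.023, 0.5), (0.041, 0.3)]
wet = render_receiver_signal(dry, paths)
```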
We Live in a Three-Dimensional World
Listener perception in the real world is three-dimensional, and hence the 3-D sensation is present in any auditory event. Sound sources radiate into 3-D space, and so the virtual environment must be a 3-D sound field solver with 3-D sound pressure output signals that can be experienced by the listener. The type of 3-D coding of the sound arriving at the receiver depends on the sound reproduction system that is used. Several systems in professional audio engineering are directly applicable, including binaural technology reproduced over headphones or transaural loudspeakers. Headphones are the most popular solution despite drawbacks concerning the lack of exact headphone calibration when used by different individual listeners. Imperfect headphone calibration may reduce the sensation of “true” 3-D sound, down to the extreme cases of front-back confusion or in-head localization, but very good solutions are available (Lindau and Brinkmann, 2012).
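In code, binaural rendering boils down to convolving a receiver-related signal with a left-ear and a right-ear head-related impulse response (HRIR). The sketch below shows only this core step with hypothetical variable names; headphone equalization, head tracking, and individualized HRTF selection are left out.

```python
import numpy as np

def binaural_render(mono_signal, hrir_left, hrir_right):
    """Convolve a receiver-related mono signal with a pair of head-related
    impulse responses to obtain the two ear signals (illustrative sketch)."""
    left = np.convolve(mono_signal, hrir_left)
    right = np.convolve(mono_signal, hrir_right)
    # Stack into a two-channel buffer suitable for headphone playback.
    return np.stack([left, right], axis=0)
```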
Transaural (stereo) loudspeakers do provide sound pressure signals at the eardrums that are exact copies of the binaural signals in the virtual scene. But to achieve a clean binaural sound for the right and the left ear separately, the crosstalk from the left loudspeaker to the right ear and vice versa needs to be compensated for, a problem that obviously does not occur with headphones.
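Conceptually, this crosstalk compensation is a per-frequency inversion of the 2 × 2 matrix of loudspeaker-to-ear transfer functions. The sketch below shows that plain inversion with assumed array shapes; a practical system would add regularization to keep the inverse well behaved.

```python
import numpy as np

def crosstalk_cancel(binaural_spectra, C):
    """Compute loudspeaker spectra that deliver the desired binaural spectra
    at the ears by inverting the 2 x 2 loudspeaker-to-ear transfer matrix
    in every frequency bin (illustrative sketch, shapes assumed).

    binaural_spectra -- complex array, shape (2, n_bins): left/right ear
    C                -- complex array, shape (n_bins, 2, 2): C[k, ear, speaker]
    """
    speaker_spectra = np.zeros_like(binaural_spectra, dtype=complex)
    for k in range(binaural_spectra.shape[1]):
        # Undo the acoustic mixing so each ear receives only its own signal.
        speaker_spectra[:, k] = np.linalg.inv(C[k]) @ binaural_spectra[:, k]
    return speaker_spectra
```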
Binaural 3-D sound reproduction by headphones must include tracking of the head orientation relative to the direction of the virtual sound; otherwise, the virtual sound would follow the head movements (such as head shaking), which does not happen in real-world listening. Therefore, head orientations must be determined by using head-tracking devices such as gyroscopic or infrared sensors.
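The compensation itself is a coordinate rotation: the tracked head orientation is used to counter-rotate the virtual source direction so that the source stays fixed in the world while the head turns. The sketch below shows this for rotation about the vertical axis only, under an assumed coordinate convention (x forward, y to the left, z up); a full system would use the complete 3-D head orientation.

```python
import numpy as np

def world_to_head(source_direction, head_yaw_deg):
    """Counter-rotate a source direction by the tracked head yaw so the
    virtual source stays fixed in the world while the head turns.
    Assumed convention: x forward, y to the left, z up; yaw only."""
    yaw = np.radians(head_yaw_deg)
    rot = np.array([[np.cos(-yaw), -np.sin(-yaw), 0.0],
                    [np.sin(-yaw),  np.cos(-yaw), 0.0],
                    [0.0,           0.0,          1.0]])
    return rot @ np.asarray(source_direction, dtype=float)

# Example: source straight ahead; the listener turns the head 90 degrees to
# the left, so the source should now appear at the listener's right side.
print(world_to_head([1.0, 0.0, 0.0], head_yaw_deg=90.0))  # approx. [0., -1., 0.]
```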
If there is more than one listener experiencing a 3-D sound, it is necessary to provide not only binaural eardrum signals but also wave fields around the listeners. This is usually done by surrounding the listeners with loudspeaker arrays. The loudspeakers are controlled to simulate wave fronts that carry the information of the virtual sound components concerning amplitude, time of arrival, and direction.
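A strongly simplified version of such loudspeaker control is a delay-and-gain driving function: each loudspeaker plays the source signal delayed by its distance to the virtual source and scaled for spherical spreading. The sketch below uses this simplification with invented names and parameters; practical systems such as wave field synthesis use more elaborate driving functions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def array_driving_signals(source_signal, source_pos, speaker_positions, fs=48000):
    """Delay-and-gain sketch of loudspeaker array control for one virtual
    point source (strongly simplified; names and layout are assumptions)."""
    source_pos = np.asarray(source_pos, dtype=float)
    speaker_positions = np.asarray(speaker_positions, dtype=float)
    distances = np.linalg.norm(speaker_positions - source_pos, axis=1)
    max_delay = int(round(distances.max() / SPEED_OF_SOUND * fs))
    out = np.zeros((len(distances), len(source_signal) + max_delay))
    for i, d in enumerate(distances):
        delay = int(round(d / SPEED_OF_SOUND * fs))   # time of arrival
        gain = 1.0 / max(d, 1e-6)                     # spherical spreading
        # Each loudspeaker emits the delayed, attenuated source signal so
        # that the superposed wavelets approximate the wanted wave front.
        out[i, delay:delay + len(source_signal)] = gain * source_signal
    return out
```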
Ambisonics, which is a surround sound format that produces signals not only on the horizontal plane but also above and below the listener, was introduced by Gerzon (1985). It is a rather popular technology used in 3-D audio recording and