Page 19 - Winter 2011

CLARITY, COCKTAILS, AND CONCERTS: LISTENING IN CONCERT HALLS
David Griesinger
221 Mt. Auburn Street, #107, Cambridge, Massachusetts 02138
How can we decode the complexities of music with only two ears?
The key to understanding musical acoustics lies in the extraordinary ability of the human ear and brain to extract a wealth of precise information from a complex and often chaotic sound field. A human ear has only about 3,500 sound-sensing hair cells, each capable of firing no faster than 1,000 times per second. They are attached to a frequency-sensitive mechanical filter with a selectivity of about one part in five. Yet with this meager data we can tune instruments to one part in a thousand, choose to listen to any one of several simultaneous conversations in a noisy room (the cocktail party effect), or know which instrument played each note in a string quartet.
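The gap between the ear's raw hardware and its performance can be made concrete with a quick calculation. The sketch below (illustrative Python, using only the figures quoted above; the 440 Hz center frequency is an arbitrary example) compares the roughly one-part-in-five bandwidth of the cochlear filter with the one-part-in-a-thousand pitch resolution we achieve when tuning:

```python
# Back-of-envelope comparison of cochlear filter selectivity
# versus achievable pitch discrimination, using the figures
# quoted in the text (illustrative values only).

center_freq = 440.0  # Hz; concert A, chosen as an example

# Mechanical filter selectivity: about one part in five
filter_bandwidth = center_freq / 5       # 88 Hz wide

# Pitch discrimination when tuning: about one part in a thousand
pitch_resolution = center_freq / 1000    # 0.44 Hz

# The brain resolves pitch far more finely than the filter bandwidth
sharpening_factor = filter_bandwidth / pitch_resolution
print(f"Filter bandwidth at {center_freq:.0f} Hz: {filter_bandwidth:.1f} Hz")
print(f"Tuning resolution: {pitch_resolution:.2f} Hz")
print(f"Neural sharpening factor: {sharpening_factor:.0f}x")
```

The two-hundredfold "sharpening factor" is one way of quantifying how much processing the brain stem adds on top of the cochlea's coarse mechanical analysis.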
The ability to separate individual sources from a complex sound field has limits. When noise and reverberation are too strong, the ability to hear more than one conversation or more than one musical line vanishes. We can no longer perceive all the complexity of music, and our attention is more likely to wander. This article uses clues from physics and our perception of music to understand these limits, and how they influence concert hall design.
Sorting sound waves into sound events such as notes and syllables, and then assembling these events into coherent streams with similar pitch, timbre, location, and distance, is the job of specialized organs in the brain stem—the oldest part of our neurology. Much has been learned about the function of these organs from animal studies, and much is still mysterious. But our ability to hear music gives powerful clues to how the mechanisms work. From these clues we can begin to understand the physics of the process: how information about timbre and localization is encoded in sound waves, how the ear and brain extract this information, and how, and to what extent, reflections and noise interfere.
The brain stem works at a subconscious level. The process of sorting sound into many simultaneous foreground and background sound streams that can be assembled into a meaningful image is automatic; we cannot influence it by thought. The streams are passed upward to consciousness fully formed. There they are processed in ways well beyond the scope of this article. But with music and physics as our guide we can start to make sense of the processes going on in the brain’s subconscious realm.
Vision, hearing, and sound streams
Human perception is multi-modal. The brain makes sense of reality by combining information from many senses. At a music performance we hear the sound of an instrument at the direction and distance we see it, regardless of what our ears alone report. Improving the lighting improves the perceived clarity of the sound, and the visual character of a hall changes the perception of sound dramatically. The ticket price nearly always reflects the clarity of vision, not sound, and new halls with spectacular architecture sound wonderful for a while. But while we can hear many things with our eyes, what we do or don't hear with our ears affects us profoundly.
In good acoustic spaces, if we close our eyes for five minutes or more and shake our heads a few times, we can still perceive the pitch, timbre, direction, and distance of multiple sound sources at the same time. Somehow our ears and brain stem have managed to separate a jumble of overlapping sounds into separate streams of information, one for each source. This is the well-known cocktail party effect, vital to our survival as a social species. When it is possible to separate sounds in this way, the brain stem sends multiple streams upward to consciousness. When it is not possible, we hear the whole ensemble as a single mixed stream of sound, and are unable to localize or identify the timbre of individual voices. For speech the result is babble. Music is more forgiving: we hear pitches, harmony, loudness, and dynamics—but information about who played what, from where, and with what timbre, is lost.
The difference is not subtle. When we can separate voices, the higher brain is able to process the sound with far greater attention and interest. Imagine being at a party so crowded that you cannot hear what anyone is saying, and contrast that with being able to listen to any one of several conversations at will. The first situation causes the brain to tune out; the second demands attention.
Direct sound—The key to localization and timbre
Sound in open air decreases in level by six decibels for each doubling of distance. People sitting close to the source hear a much louder sound than people sitting farther away. One fundamental purpose of a hall is to catch sound that does not go directly to the listeners and re-direct it toward them. This makes the sound louder and more uniform. In halls, very few of the reflections that re-direct the sound are first-order (i.e., have bounced off only one surface before being heard), and the first-order reflections are always individually weaker than the sound that travels directly to the listener. But there are a great many reflections of higher order, and they combine chaotically to create what we call reverberation. In a typical concert hall seat, the combination of reflections and reverberation contains at least ten times more energy than the direct sound, and most of that energy comes from high-order reflections. This creates the
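The six-decibel rule and the ten-to-one energy ratio above translate into simple arithmetic. The sketch below (illustrative Python; the 32 m listener distance is a hypothetical example, while the 6 dB rule and the 10:1 reverberant-to-direct energy ratio are the figures from the text) shows how direct-sound level falls with distance and what a 10:1 energy ratio means in decibels:

```python
import math

def level_drop_db(distance, ref_distance=1.0):
    """Free-field level change relative to ref_distance.

    20*log10 of the distance ratio gives about -6 dB per
    doubling of distance, as quoted in the text.
    """
    return -20.0 * math.log10(distance / ref_distance)

# A listener at 32 m hears the direct sound about 30 dB weaker
# than one at 1 m (five doublings of distance).
print(level_drop_db(32.0))  # approx. -30.1 dB

# If the reverberant energy at a seat is 10x the direct energy
# (the typical concert-hall case described above), the
# direct-to-reverberant ratio in decibels is:
d_to_r_db = 10.0 * math.log10(1.0 / 10.0)
print(d_to_r_db)  # -10.0 dB
```

The -10 dB direct-to-reverberant ratio is one compact way of stating how thoroughly reverberation dominates the sound field at a typical seat, which is why the audibility of the comparatively weak direct sound matters so much for localization and timbre.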