Music and the Brain
(often with secondary accents later on for meters containing four or more beats). This tendency in humans to accentuate metrical beats is so strong that EEG recordings have shown an enhancement of the beat frequency associated with the
“first” beat of an imagined meter, even with an unaccented isochronous beat stimulus (Nozaradan et al., 2011).
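The frequency-tagging logic behind that EEG finding can be sketched numerically: a steady-state response concentrates spectral energy at the stimulation frequency, and an imagined binary meter adds a peak at half the beat rate. Below is a minimal simulation, not Nozaradan et al.'s actual analysis; the sampling rate, frequencies, and signal amplitudes are illustrative assumptions.

```python
import numpy as np

fs = 250.0            # sampling rate in Hz (hypothetical)
beat_f = 2.4          # isochronous beat frequency (Hz), illustrative
meter_f = beat_f / 2  # imagined binary meter: accent on every other beat
t = np.arange(0, 60, 1 / fs)  # 60 s of simulated signal

# Simulated EEG-like signal: strong response at the beat frequency, a
# smaller meter-related component, and broadband noise (amplitudes assumed).
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * beat_f * t)
       + 0.4 * np.sin(2 * np.pi * meter_f * t)
       + 0.5 * rng.standard_normal(t.size))

# Frequency tagging: amplitude spectrum, then read off the tagged bins.
spec = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Spectral amplitude at the bin nearest frequency f (Hz)."""
    return spec[np.argmin(np.abs(freqs - f))]
```

In this toy signal, `amp_at(beat_f)` dominates, and `amp_at(meter_f)` stands well above the noise floor even though no physical accent exists at that frequency, which is the signature the frequency-tagging approach looks for.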
Harmony and Melody
Whereas a melody is a sequence of pitches presented over time, harmony refers to the simultaneous presentation of pitches over time. Music theorists and scientists alike have attempted to define a space that represents how we conceptualize tonal harmony. Insight came from subjective human ratings in the probe-tone experiment, in which a melodic context is presented and followed by a tone, and the subjects' task is to rate how well that final tone fits the preceding melody. These probe-tone profiles matched the relative importance of pitches within a tonal context as dictated by the principles of music theory. Applying dimensionality-reduction algorithms to these probe-tone data yields an empirically derived tonal space. Based on mathematical modeling of the empirical data, Krumhansl and Kessler (1982) found that the best geometric solution of tonal space is in the shape of a torus (Figure 3). The toroidal representation is effective at capturing the close relative distances between neighboring keys within the circle of fifths from music theory. It also captures the observation from the probe-tone ratings that parallel keys (e.g., C major and C minor) are more distant from each other than relative keys (e.g., C major and A minor).
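The probe-tone profiles are concrete enough to sketch in code. One well-known use of them, the Krumhansl-Schmuckler key-finding idea, correlates a passage's pitch-class distribution against the Krumhansl-Kessler (1982) profiles rotated to all 24 keys; the example passage and function below are illustrative, not from the article.

```python
import numpy as np

# Krumhansl-Kessler (1982) probe-tone profiles, rooted on C.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def best_key(pc_weights):
    """Correlate a 12-bin pitch-class distribution against the profiles
    rotated to every major and minor key; return the best-matching key."""
    scores = {}
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            r = np.corrcoef(pc_weights, np.roll(profile, tonic))[0, 1]
            scores[f"{NAMES[tonic]} {mode}"] = r
    return max(scores, key=scores.get)

# Hypothetical pitch-class counts for a passage using the C major scale,
# with the tonic and dominant emphasized:
c_major_passage = np.array([4, 0, 2, 0, 2, 2, 0, 3, 0, 2, 0, 1], float)
print(best_key(c_major_passage))  # → C major
```

The same correlation scores, computed between keys rather than between a passage and a key, are the inter-key distances that the dimensionality-reduction step compresses into the toroidal map.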
Figure 3. The torus is a good approximation of our mental representation of Western music. Left: two-dimensional tonal space. Uppercase letters, major keys; lowercase letters, minor keys. A chord progression in A major, shown in this example, elicits activity near the A major region while suppressing activity near its dissimilar keys such as E-flat major and D-sharp minor. Right: three-dimensional tonal space in the shape of a torus. This can be derived by wrapping the two-dimensional tonal space in the left-to-right direction (black arrows) and then wrapping the resulting tube again in a circular direction (orange arrow).
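The two wrapping steps in the caption amount to the standard parametrization of a torus. As a rough sketch (the radii and the unit-square coordinates are arbitrary choices, not values from the article):

```python
import numpy as np

def to_torus(x, y, R=2.0, r=1.0):
    """Map a point (x, y) on the periodic 2-D tonal sheet, both coordinates
    in [0, 1), onto a torus in 3-D. x wraps around the tube's circular
    cross-section (first wrap), y wraps the tube around the central axis
    (second wrap). R and r are the major and minor radii (arbitrary here).
    """
    theta = 2 * np.pi * x   # around the tube
    phi = 2 * np.pi * y     # around the central axis
    X = (R + r * np.cos(theta)) * np.cos(phi)
    Y = (R + r * np.cos(theta)) * np.sin(phi)
    Z = r * np.sin(theta)
    return X, Y, Z

# Opposite edges of the flat sheet map to the same 3-D point, so keys that
# sit at the "edges" of the flat map are in fact neighbors on the torus:
a = to_torus(0.0, 0.25)
b = to_torus(1.0, 0.25)   # x wraps around: same point as a
```

This edge-gluing is what lets the toroidal map represent key distances without the artificial boundaries of the flat layout.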
Given that the torus describes our mental representation of Western tonal music, it was possible to compose continuously modulating melodies and harmonies that smoothly navigate the surface of the torus. Janata et al. (2002) traced brain activity in a functional MRI study as participants listened to these continuously modulating melodies. Results showed that a region of the frontal lobe, the ventromedial prefrontal cortex (vmPFC), was consistently responsive to modulating melodies; crucially, contiguous voxels in the vmPFC were active
as the melody changed keys to contiguous parts of the torus, suggesting that the brain was tracking tonal movement in these regions.
Although the continuous perception of harmony is important, the violation of harmonic expectation has also lent insight into how the brain processes harmony. When listeners are presented with unexpected chords within a chord progression, ERP studies show an early right anterior negativity (ERAN): a negative waveform peaking at approximately 200 ms after the onset of the unexpected chord over the right frontal portion of the brain (Koelsch et al., 2000). The ERAN indexes our expectation that music follows a known syntactic structure. Interestingly, even new music that we learn rapidly within the course of an hour can elicit the ERAN, suggesting that the neural generators of the ERAN can flexibly and rapidly learn to integrate new sounds and sound patterns given their statistical context in the environment (Loui et al., 2009).
The brain structures that generate the ERAN are in the left inferior frontal gyrus (IFG), which is also sensitive to linguistic syntax (Levitin and Menon, 2003). Patients with IFG lesions show behavioral deficits in processing musical structure that are coupled with a diminished or altered ERAN (Sammler et al., 2011). These results suggest that areas of the brain once thought to be language-specific, such as the left IFG, in fact process syntactic structure in music as well. This lends credence to
the idea that music and language processing interact in the brain specifically for the processing of syntactic structure, as articulated by Patel’s (2010) Shared Syntactic Integration Resource Hypothesis (SSIRH).
Prediction and Reward
The musical features reviewed above are acoustic devices that ultimately provide the groundwork for us to make predictions about events in the immediate future. We expect sounds to occur on the beat or at even subdivisions of it, and accents to
34 | Acoustics Today | Winter 2019