Foreign Accent
does not. Different languages allow different sequences. Some languages, such as Japanese, are extraordinarily restrictive, traditionally allowing only words with a single consonant at the beginning (with some consonants requiring an additional vowel-like articulation, such as 'ky' /kj/) and no consonants at the end except, possibly, a nasal segment such as 'm' or 'n'. This differs from English, which also allows consonants to combine into some fairly complex clusters, both at the beginning, as in the word "spline" beginning with /spl/, and at the end, as in the word "sixths" ending with /ksθs/. Even with these allowances, though, English is not all that extreme in its proliferation of consonant clusters. Languages from around the globe, such as Georgian (a language of the Caucasus Mountains), Berber (a language of
North Africa), and Nuxálk (a language of the North American Pacific Northwest) regularly exhibit much larger consonant sequences, allowing extraordinarily long runs of six consonants, e.g., Georgian /prt͡skvna/ (meaning "peeling"), or even more, such as the commonly cited 12-segment Nuxálk word /xɬpʼχʷɬtʰɬpʰɬːskʷʰtsʼ/ (meaning "He had had in his possession a bunchberry plant"), which is made up entirely of consonants.

As can be readily imagined, these sequencing differences create many problems for speakers of a language like Japanese learning to speak a language such as English (or an English speaker learning Nuxálk!). Not only do the various speech actions have to become coordinated with one another but the combinations of actions themselves also begin to take on aspects of the productions that characterize the combination as a whole. These sequencing difficulties are well researched. Excellent examples come from a wide range of studies on the particular problems created for learners whose native language strongly restricts consonants following a vowel, such as Japanese, Korean, and Italian (a restriction reflected in the fact that the word "Italian" could not appear in Italian without a final o).

In these cases, it is common for speakers of various languages to initially simply fail to produce the consonant with enough of an acoustic signature for listeners to hear it. Speakers with more experience tend to go in the other direction, noticed by Tajima et al. (1997), of overarticulating the consonant, perhaps to ensure its being heard. The outcome is that native listeners perceive an additional vowel following the consonant, as was found pervasively in Tajima et al.'s study. As that study suggests, these sorts of overproductions can actually be particularly harmful for intelligibility. Although one might get most of the consonant and vowel identities correct, the addition of perceived vowels can push a listener to add various endings to the word to account for the additional vowel or to add short function words such as "the" or "in." This, in turn, can often force the misperceived word to be interpreted as the wrong grammatical category, for example, hearing a noun as a verb, which will then force a completely different reading of the overall sentence to make sense of the location of the verb where a noun was intended.

Applying the Criteria:
What People Do with Accented Speech

All of this listener sensitivity to variance in the language is generally bad news if you are trying to communicate like a native speaker. There is, however, some good news for learners in the SLA research on accent perception. Various studies show that although listeners often pick up on deviations from native patterns, they can, and often do, accommodate to someone they think is a nonnative speaker.

An interesting demonstration of this point comes from a study by Munro and Derwing (2001), who were interested in understanding a very pervasive overall timing difference between nonnative and native speech: native speech is typically spoken faster than nonnative speech. To study this, they developed a technique of digitally compressing or stretching various portions of speech recordings to produce artificially faster or slower speech. They then used the artificially manipulated speech to determine how people react to the fact that nonnative speakers are typically slower than native speakers.

They presented the speech to listeners and asked them to evaluate the modified speech for nativeness. They found not only that nonnative speakers are slower but also that native listeners evaluate the slower speech as less native than faster speech. This is similar to the studies just reviewed: apparently, all aspects of nonnative speech can cause listeners to identify it as nonnative, even something as global as the overall speed of production.

However, there is a twist in Munro and Derwing's (2001) results. They also found that speech that is unnaturally fast was evaluated as less native, so listeners had a preferred speed of production somewhere in the middle. From these data, then, they could estimate an optimal rate of speed for making the speech most native-like, and this estimate had an interesting property. While being faster than the natural
14 | Acoustics Today | Summer 2018
