Results 1 - 20 of 33
1.
J Speech Lang Hear Res ; 66(9): 3399-3412, 2023 09 13.
Article in English | MEDLINE | ID: mdl-37672785

ABSTRACT

PURPOSE: The aim of this study was to develop and validate a large Korean sentence set with varying degrees of semantic predictability that can be used for testing speech recognition and lexical processing.

METHOD: Sentences differing in the degree of final-word predictability (predictable, neutral, and anomalous) were created with words selected to be suitable for both native and nonnative speakers of Korean. Semantic predictability was evaluated through a series of cloze tests in which native (n = 56) and nonnative (n = 19) speakers of Korean participated. This study also used a computer language model to evaluate final-word predictability, a novel approach adopted to reduce the human effort of validating a large number of sentences; it produced results comparable to those of the cloze tests. In a speech recognition task, the sentences were presented to native (n = 23) and nonnative (n = 21) speakers of Korean in speech-shaped noise at two noise levels.

RESULTS: The speech-in-noise experiment demonstrated that the intelligibility of the sentences was similar to that of related English corpora. That is, intelligibility differed significantly across the semantic conditions, and the sentences had the right degree of difficulty for assessing intelligibility differences as a function of noise level and language experience.

CONCLUSIONS: This corpus (1,021 sentences in total) adds to the target languages available in speech research and will allow researchers to investigate a range of issues in speech perception in Korean.

SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.24045582
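The language-model validation step can be approximated by scoring the conditional probability of each final word given its sentence frame. Below is a minimal sketch of that idea using the Hugging Face transformers library; the model name is an illustrative assumption (the abstract does not specify which model was used), and the token alignment between frame and full sentence is assumed to hold.

# Minimal sketch: estimate final-word predictability with a causal
# language model. The model name is an illustrative assumption; the
# study does not specify which model it used. Assumes the frame's
# tokenization is a prefix of the full sentence's tokenization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "skt/kogpt2-base-v2"  # hypothetical choice of a Korean LM
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def final_word_logprob(frame: str, final_word: str) -> float:
    """Summed log-probability of final_word's tokens given the frame."""
    frame_ids = tokenizer(frame, return_tensors="pt").input_ids
    full_ids = tokenizer(frame + " " + final_word, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    total = 0.0
    for t in range(frame_ids.shape[1], full_ids.shape[1]):
        # Position t-1 predicts the token at position t.
        total += log_probs[0, t - 1, full_ids[0, t]].item()
    return total

Under this scheme, predictable, neutral, and anomalous completions of the same frame should be ordered by score, which is how the model-based ratings can be checked against the cloze results.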


Subject(s)
Semantics , Speech Perception , Humans , Speech , Language , Republic of Korea
2.
JASA Express Lett ; 3(1): 015202, 2023 01.
Article in English | MEDLINE | ID: mdl-36725541

ABSTRACT

Japanese adults and Spanish-Catalan children received auditory phonetic training for English vowels using a novel paradigm: a version of the common children's card game Concentration. Individuals played a computer-based game in which they turned over pairs of cards to match spoken words drawn from sets of vowel minimal pairs. The training was effective for adults, improving vowel recognition even though the game did not explicitly require identification. Children likewise improved over time on the memory card game, but not on the generalisation task used here. This gamified training method can serve as a platform for examining development and perceptual learning.


Subject(s)
Speech Perception , Humans , Adult , Child , Language , Learning , Phonetics , Generalization, Psychological
3.
J Acoust Soc Am ; 148(1): 253, 2020 07.
Article in English | MEDLINE | ID: mdl-32752786

ABSTRACT

The present study investigated how single-talker and babble maskers affect auditory and lexical processing during native (L1) and non-native (L2) speech recognition. Electroencephalogram (EEG) recordings were made while L1 and L2 (Korean) English speakers listened to sentences in the presence of single-talker and babble maskers that were colocated or spatially separated from the target. The predictability of the sentences was manipulated to measure lexical-semantic processing (N400), and selective auditory processing of the target was assessed using neural tracking measures. The results demonstrate that intelligible single-talker maskers cause listeners to attend more to the semantic content of the targets (i.e., greater context-related N400 changes) than when targets are in babble, and that listeners track the acoustics of the target less accurately with single-talker maskers. L1 and L2 listeners both modulated their processing in this way, although L2 listeners had more difficulty with the materials overall (i.e., lower behavioral accuracy, less context-related N400 variation, more listening effort). The results demonstrate that auditory and lexical processing can be simultaneously assessed within a naturalistic speech listening task, and listeners can adjust lexical processing to more strongly track the meaning of a sentence in order to help ignore competing lexical content.
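Neural tracking of a target talker, as used here, is often quantified as a lagged correlation between the EEG and the speech amplitude envelope. The sketch below illustrates that general approach (Hilbert envelope plus peak lagged Pearson correlation); it is a generic illustration under an assumed lag range, not the paper's exact pipeline.

# Sketch: neural tracking as the peak lagged correlation between an EEG
# channel and the speech amplitude envelope. Generic illustration only;
# not the study's exact analysis pipeline.
import numpy as np
from scipy.signal import hilbert, resample

def speech_envelope(speech: np.ndarray, n_out: int) -> np.ndarray:
    """Amplitude envelope via the Hilbert transform, resampled to the EEG rate."""
    return resample(np.abs(hilbert(speech)), n_out)

def tracking_score(eeg: np.ndarray, speech: np.ndarray, max_lag: int = 50) -> float:
    """Peak |Pearson r| between EEG and envelope over lags 0..max_lag samples."""
    env = speech_envelope(speech, len(eeg))
    best = 0.0
    for lag in range(max_lag + 1):
        n = len(eeg) - lag
        r = np.corrcoef(eeg[lag:lag + n], env[:n])[0, 1]
        best = max(best, abs(r))
    return best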


Subject(s)
Speech Perception , Speech , Electroencephalography , Evoked Potentials , Female , Humans , Language , Male , Perceptual Masking
4.
Behav Res Methods ; 52(2): 561-571, 2020 04.
Article in English | MEDLINE | ID: mdl-31012064

ABSTRACT

Research into non-native (L2) speech perception has increased the need for specialized experimental materials. The Non-Native Speech Recognition (NNSR) sentences are a new large-scale set of speech recognition materials for research with L2 speakers of English at CEFR level B1 (North, Ortega, & Sheehan, 2010) and above. The set comprises 439 triplets of sentences in three related conditions: semantically predictable, neutral, and anomalous. The sentences were created by combining a strongly or weakly contextually constrained sentence frame with a congruent or anomalous final keyword, and they were matched on a number of factors during development, to maintain consistency across conditions. This article describes the development process of the NNSR sentences, along with results of speech-in-noise intelligibility testing for L2 and native English speakers. Suggestions for the sentences' application in a range of investigations and experimental designs are also discussed.


Subject(s)
Speech Perception , Adolescent , Female , Humans , Language , Male , Speech , Young Adult
5.
Sci Rep ; 9(1): 19592, 2019 12 20.
Article in English | MEDLINE | ID: mdl-31862999

ABSTRACT

This study measured infants' neural responses to spectral changes between all pairs of a set of English vowels. In contrast to previous methods that only allow the assessment of a few phonetic contrasts, we present a new method that allows us to assess changes in spectral sensitivity across the entire vowel space and to create two-dimensional perceptual maps of infants' vowel development. Infants aged four to eleven months were played long series of concatenated vowels, and the neural response to each vowel change was assessed using the Acoustic Change Complex (ACC) from EEG recordings. The results demonstrated that the youngest infants' responses more closely reflected the acoustic differences between the vowel pairs, with greater weight on first-formant variation. Older infants' responses were less acoustically driven, apparently reflecting selective increases in sensitivity for phonetically similar vowels. The results suggest that phonetic development may involve a perceptual warping for confusable vowels rather than uniform learning, as well as an overall increasing sensitivity to higher-frequency acoustic information.
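A two-dimensional perceptual map of this kind can be built by treating the neural response to each vowel change as a pairwise dissimilarity and embedding the vowels with multidimensional scaling. The sketch below uses scikit-learn with placeholder data; the vowel set and the choice of MDS are illustrative assumptions rather than the authors' procedure.

# Sketch: embed vowels in two dimensions from pairwise neural-response
# dissimilarities using multidimensional scaling. The vowel set and the
# random placeholder data are illustrative, not the study's data.
import numpy as np
from sklearn.manifold import MDS

vowels = ["i", "e", "a", "o", "u"]            # hypothetical subset
rng = np.random.default_rng(0)
acc = rng.random((5, 5))                      # placeholder ACC amplitudes
dissim = (acc + acc.T) / 2                    # symmetrise across change direction
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for vowel, (x, y) in zip(vowels, coords):
    print(f"{vowel}: ({x:+.2f}, {y:+.2f})")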


Subject(s)
Phonetics , Speech Acoustics , Speech Perception/physiology , Electroencephalography , Female , Humans , Infant , Language , Learning , Male , Sound Spectrography , Speech Discrimination Tests , Verbal Learning
6.
J Speech Lang Hear Res ; 62(7): 2213-2226, 2019 07 15.
Article in English | MEDLINE | ID: mdl-31251681

ABSTRACT

Purpose: The intelligibility of an accent strongly depends on the specific talker-listener pairing. To explore the causes of this phenomenon, we investigated the relationship between acoustic-phonetic similarity and accent intelligibility across native (1st language) and nonnative (2nd language) talker-listener pairings. We also used online measures to observe processing differences in quiet.

Method: English (n = 16) and Spanish (n = 16) listeners heard Standard Southern British English, Glaswegian English, and Spanish-accented English in a speech recognition task (in quiet and noise) and an electroencephalogram task (quiet only) designed to assess phonological and lexical processing. Stimuli were drawn from the nonnative speech recognition sentences (Stringer & Iverson, 2019). The acoustic-phonetic similarity between listeners' accents and the 3 accents was calculated using the ACCDIST metric (Huckvale, 2004, 2007).

Results: Talker-listener pairing had a clear influence on accent intelligibility. This was linked to the phonetic similarity of the talkers and the listeners, but similarity could not account for all findings. The influence of talker-listener pairing on lexical processing was less clear; the N400 effect was mostly robust to accent mismatches, with some relationship to intelligibility.

Conclusion: These findings suggest that the influence of talker-listener pairing on intelligibility may be partly attributable to accent similarity in addition to accent familiarity. Online measures also show that differences in talker-listener accents can disrupt processing in quiet even where accents are highly intelligible.
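ACCDIST characterises a speaker's accent by the table of distances among that speaker's own vowel realisations and compares two speakers by correlating their tables, which factors out overall vocal-tract differences. The sketch below is a simplified version based on mean formant vectors; the published metric uses richer spectral features.

# Sketch: simplified ACCDIST-style accent comparison. Each speaker is
# summarised by the distances among their own vowel means; two speakers
# are compared by correlating those distance tables. Simplified to mean
# (F1, F2) vectors; the published metric uses richer spectral features.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def accent_similarity(formants_a: dict, formants_b: dict) -> float:
    """Each dict maps a vowel label to that speaker's mean (F1, F2) vector."""
    shared = sorted(set(formants_a) & set(formants_b))
    table_a = pdist(np.array([formants_a[v] for v in shared]))
    table_b = pdist(np.array([formants_b[v] for v in shared]))
    r, _ = pearsonr(table_a, table_b)
    return r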


Subject(s)
Language , Noise , Speech Intelligibility/physiology , Adult , England , Female , Humans , Male , Phonetics , Semantics , Signal-To-Noise Ratio , Spain , Speech Acoustics , Young Adult
7.
Neuropsychologia ; 122: 105-115, 2019 01.
Article in English | MEDLINE | ID: mdl-30414799

ABSTRACT

Dyslexia is characterized by poor reading skills, yet often also by difficulties in second-language learning. The differences between native- and second-language speech processing, and the establishment of new brain representations for a spoken second language in dyslexia, are not, however, well understood. We used recordings of the mismatch negativity component of the event-related potential to determine possible differences between the activation of long-term memory representations for spoken native- and second-language word forms in Finnish-speaking 9-11-year-old children with or without dyslexia who were studying English as their second language in school. In addition, we sought to investigate whether the bottleneck of dyslexic readers' second-language learning lies at the level of word representations or of smaller units, and whether the amplitude of the mismatch negativity correlates with native-language literacy and related skills. We found that the activation of brain representations for familiar second-language words, but not for second-language speech sounds or native-language words, was weaker in children with dyslexia than in typical readers. Source localization revealed that dyslexia was associated with weak activation of the right temporal cortex, which has previously been linked with word-form learning. Importantly, the amplitude of the mismatch negativity for familiar second-language words correlated with native-language literacy and rapid naming scores, suggesting a close link between second-language processing and these skills.
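The mismatch negativity itself is computed by subtracting the averaged response to standard stimuli from the averaged response to deviants and summarising the difference wave within a post-stimulus window. A minimal sketch follows; the window bounds and single-channel simplification are assumptions, not the authors' parameters.

# Sketch: mismatch negativity as the mean of the deviant-minus-standard
# difference wave in a post-stimulus window. The window and the
# single-channel simplification are illustrative assumptions.
import numpy as np

def mmn_amplitude(standard_erp: np.ndarray, deviant_erp: np.ndarray,
                  times: np.ndarray, t_min: float = 0.15,
                  t_max: float = 0.25) -> float:
    """ERPs are single-channel averaged waveforms; times are in seconds."""
    difference = deviant_erp - standard_erp
    window = (times >= t_min) & (times <= t_max)
    return float(difference[window].mean())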


Subject(s)
Brain/physiopathology , Dyslexia/physiopathology , Literacy , Multilingualism , Reading , Speech Perception/physiology , Child , Dyslexia/psychology , Electroencephalography , Evoked Potentials , Female , Humans , Learning/physiology , Male , Phonetics , Sound Spectrography
8.
Cognition ; 179: 163-170, 2018 10.
Article in English | MEDLINE | ID: mdl-29957515

ABSTRACT

Speech communication in a non-native language (L2) can feel effortful, and the present study suggests that this effort affects both auditory and lexical processing. Electroencephalography (EEG) recordings were made from native English (L1) and Korean (L2) listeners while they listened to English sentences spoken with two accents (English and Korean) in the presence of a distracting talker. Neural entrainment (i.e., phase locking between the EEG recording and the speech amplitude envelope) was measured for target and distractor talkers. L2 listeners had relatively greater entrainment for target talkers than did L1 listeners, likely because their difficulty with L2 speech recognition caused them to focus more attention on the speech signal. The N400 was measured for the final word in each sentence, and L2 listeners had greater lexical processing in high-predictability sentences than did L1 listeners. L1 listeners had greater target-talker entrainment when listening to the more difficult L2 accent than to their own L1 accent, and similarly had larger N400 responses for the L2 accent. It thus appears that the increased effort of L2 listeners, and of L1 listeners understanding L2 speech, modulates auditory and lexical processing during speech recognition. This may provide a mechanism to compensate for perceptual challenges under adverse conditions.
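Phase locking between the EEG and the speech envelope can be quantified as a phase-locking value: bandpass both signals, extract instantaneous phase with the Hilbert transform, and take the magnitude of the mean phase-difference vector. A generic sketch follows; the band edges and filter order are assumptions rather than the study's settings.

# Sketch: phase-locking value between an EEG channel and the speech
# amplitude envelope in a low-frequency band. The band edges and filter
# order are assumptions, not the study's settings.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x: np.ndarray, fs: float, lo: float = 1.0,
             hi: float = 8.0) -> np.ndarray:
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_locking_value(eeg: np.ndarray, envelope: np.ndarray,
                        fs: float) -> float:
    """Inputs must be equal length and sampled at the same rate fs."""
    phase_eeg = np.angle(hilbert(bandpass(eeg, fs)))
    phase_env = np.angle(hilbert(bandpass(envelope, fs)))
    return float(np.abs(np.mean(np.exp(1j * (phase_eeg - phase_env)))))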


Subject(s)
Brain/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials , Evoked Potentials, Auditory , Female , Humans , Linguistics , Male , Multilingualism , Phonetics , Recognition, Psychology/physiology , Young Adult
9.
J Acoust Soc Am ; 139(4): 1799, 2016 04.
Article in English | MEDLINE | ID: mdl-27106328

ABSTRACT

Cross-language differences in speech perception have traditionally been linked to phonological categories, but it has become increasingly clear that language experience has effects beginning at early stages of perception, which blurs the accepted distinctions between general and speech-specific processing. The present experiments explored this distinction by playing English and Japanese speakers stimuli that manipulated the acoustic form of English /r/ and /l/, in order to determine how acoustically natural and phonologically identifiable a stimulus must be for cross-language discrimination differences to emerge. Discrimination differences were found for stimuli that did not sound subjectively like speech or like /r/ and /l/, but overall they were strongly linked to phonological categorization. The results thus support the view that phonological categories are an important source of cross-language differences, but also show that these differences can extend to stimuli that do not clearly sound like speech.


Subject(s)
Discrimination, Psychological , Phonetics , Speech Acoustics , Speech Perception , Acoustic Stimulation , Acoustics , Adolescent , Adult , Audiometry, Speech , Humans , Middle Aged , Sound Spectrography , Young Adult
10.
Q J Exp Psychol (Hove) ; 68(10): 2022-40, 2015.
Article in English | MEDLINE | ID: mdl-25607721

ABSTRACT

Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such "quantized" views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
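The shift from continuous to categorical storage can be modelled as a two-component mixture: each response is drawn either from a Gaussian centred on the true target (continuous storage) or from a Gaussian centred on the nearest category prototype (categorical storage), with the mixing weight fitted by maximum likelihood. The one-dimensional sketch below illustrates that idea; it is not the authors' model, and the fixed sigma and prototype set are assumptions.

# Sketch: fit the weight of categorical versus continuous recall as a
# two-component Gaussian mixture in one dimension. Illustration only;
# the fixed sigma and the prototype set are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def fit_categorical_weight(responses, targets, prototypes, sigma=0.5):
    """responses/targets: recalled and true stimulus values per trial;
    prototypes: assumed category-centre values on the same scale."""
    responses = np.asarray(responses, dtype=float)
    targets = np.asarray(targets, dtype=float)
    prototypes = np.asarray(prototypes, dtype=float)
    nearest = prototypes[np.abs(prototypes[:, None] - targets).argmin(axis=0)]

    def neg_log_lik(w):
        lik = ((1 - w) * norm.pdf(responses, loc=targets, scale=sigma)
               + w * norm.pdf(responses, loc=nearest, scale=sigma))
        return -np.sum(np.log(lik + 1e-12))

    return minimize_scalar(neg_log_lik, bounds=(0.0, 1.0), method="bounded").x

A weight near 0 indicates purely continuous recall around the target, while a weight approaching 1 indicates responses clustering on category prototypes, mirroring the load-dependent shift the abstract describes.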


Subject(s)
Memory, Short-Term/physiology , Sound , Speech Perception/physiology , Acoustic Stimulation , Adult , Algorithms , Female , Humans , Male , Mental Recall/physiology , Models, Psychological , Phonetics
11.
J Neurophysiol ; 112(4): 792-801, 2014 Aug 15.
Article in English | MEDLINE | ID: mdl-24805076

ABSTRACT

Research on mammals predicts that the anterior striatum is a central component of human motor learning. However, because vocalizations in most mammals are innate, much of the neurobiology of human vocal learning has been inferred from studies on songbirds. Essential for song learning is a pathway, the homolog of mammalian cortical-basal ganglia "loops," which includes the avian striatum. The present functional magnetic resonance imaging (fMRI) study investigated adult human vocal learning, a skill that persists throughout life, albeit imperfectly given that late-acquired languages are spoken with an accent. Monolingual adult participants were scanned while repeating novel non-native words. After training on the pronunciation of half the words for 1 wk, participants underwent a second scan. During scanning there was no external feedback on performance. Activity declined sharply in left and right anterior striatum, both within and between scanning sessions, and this change was independent of training and performance. This indicates that adult speakers rapidly adapt to the novel articulatory movements, possibly by using motor sequences from their native speech to approximate those required for the novel speech sounds. Improved accuracy correlated only with activity in motor-sensory perisylvian cortex. We propose that future studies on vocal learning, using different behavioral and pharmacological manipulations, will provide insights into adult striatal plasticity and its potential for modification in both educational and clinical contexts.


Subject(s)
Corpus Striatum/physiology , Language , Learning/physiology , Adult , Brain Mapping , Feedback, Physiological , Female , Humans , Linguistics , Magnetic Resonance Imaging , Male
12.
Hum Brain Mapp ; 35(5): 1930-43, 2014 May.
Article in English | MEDLINE | ID: mdl-23723184

ABSTRACT

Modern neuroimaging techniques have advanced our understanding of the distributed anatomy of speech production, beyond that inferred from clinico-pathological correlations. However, much remains unknown about functional interactions between anatomically distinct components of this speech production network. One reason for this is the need to separate spatially overlapping neural signals supporting diverse cortical functions. We took three separate human functional magnetic resonance imaging (fMRI) datasets (two speech production, one "rest"). In each we decomposed the neural activity within the left posterior perisylvian speech region into discrete components. This decomposition robustly identified two overlapping spatio-temporal components, one centered on the left posterior superior temporal gyrus (pSTG), the other on the adjacent ventral anterior parietal lobe (vAPL). The pSTG was functionally connected with bilateral superior temporal and inferior frontal regions, whereas the vAPL was connected with other parietal regions, lateral and medial. Surprisingly, the components displayed spatial anti-correlation, in which the negative functional connectivity of each component overlapped with the other component's positive functional connectivity, suggesting that these two systems operate separately and possibly in competition. The speech tasks reliably modulated activity in both pSTG and vAPL, suggesting they are involved in speech production, but their activity patterns dissociate in response to different speech demands. These components were also identified in subjects at "rest" and not engaged in overt speech production. These findings indicate that the neural architecture underlying speech production involves parallel distinct components that converge within posterior perisylvian cortex, explaining, in part, why this region is so important for speech production.
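Separating spatially overlapping signals into discrete spatio-temporal components is commonly done with a decomposition such as independent component analysis. The sketch below assumes temporal ICA over a time-by-voxel matrix as an illustration; the abstract does not name the exact algorithm used.

# Sketch: decompose a time x voxel matrix from a region of interest into
# spatio-temporal components. ICA is assumed here as the decomposition;
# the abstract does not name the exact algorithm used.
import numpy as np
from sklearn.decomposition import FastICA

def decompose_region(bold: np.ndarray, n_components: int = 2):
    """bold: (n_timepoints, n_voxels) array, already detrended/centred."""
    ica = FastICA(n_components=n_components, random_state=0)
    timecourses = ica.fit_transform(bold)    # (n_timepoints, n_components)
    spatial_maps = ica.mixing_.T             # (n_components, n_voxels)
    return timecourses, spatial_maps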


Subject(s)
Brain Mapping , Parietal Lobe/physiology , Speech/physiology , Acoustic Stimulation , Adult , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted , Language , Magnetic Resonance Imaging , Male , Middle Aged , Oxygen/blood , Parietal Lobe/blood supply , Young Adult
13.
Brain ; 136(Pt 6): 1901-12, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23715097

ABSTRACT

In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and in age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory, which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates of connection strengths: a negative correlation between phonemic perception and an interhemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and a positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes and rely more on right-hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.


Subject(s)
Acoustic Stimulation/methods , Aphasia/physiopathology , Auditory Cortex/physiology , Auditory Perception/physiology , Functional Laterality/physiology , Stroke/physiopathology , Adult , Aged , Aged, 80 and over , Aphasia/epidemiology , Female , Humans , Magnetoencephalography/methods , Male , Middle Aged , Stroke/epidemiology
14.
Brain Res ; 1470: 52-8, 2012 Aug 27.
Article in English | MEDLINE | ID: mdl-22771705

ABSTRACT

The finding that hyperarticulation of vowel sounds occurs in certain speech registers (e.g., infant- and foreigner-directed speech) suggests that hyperarticulation may have a didactic function, facilitating the acquisition of new phonetic categories by language learners. This study tested whether hyperarticulation of vowels elicits larger phonetic change responses, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP), in native and non-native speakers of English. Data from 11 native English-speaking and 10 native Greek-speaking participants showed that Greek speakers in general had smaller MMNs than English speakers, confirming previous studies demonstrating the sensitivity of the MMN to language background. As for the effect of hyperarticulation, hyperarticulated stimuli elicited larger MMNs in both language groups, suggesting that vowel space expansion does elicit larger pre-attentive phonetic change responses. Interestingly, Greek native speakers showed some P3a activity that was not present in the English native speakers, raising the possibility that additional attentional switch mechanisms are activated in non-native speakers compared to native speakers. These results give general support to models of speech learning such as Kuhl's Native Language Magnet enhanced (NLM-e) theory.


Subject(s)
Brain/physiology , Contingent Negative Variation/physiology , Evoked Potentials, Auditory/physiology , Phonetics , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Electroencephalography , Female , Humans , Male , Multilingualism , Psychoacoustics , Young Adult
15.
J Acoust Soc Am ; 130(5): EL297-303, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22088031

ABSTRACT

This study examined the perceptual specialization for native-language speech sounds, by comparing native Hindi and English speakers in their perception of a graded set of English /w/-/v/ stimuli that varied in similarity to natural speech. The results demonstrated that language experience does not affect general auditory processes for these types of sounds; there were strong cross-language differences for speech stimuli, and none for stimuli that were nonspeech. However, the cross-language differences extended into a gray area of speech-like stimuli that were difficult to classify, suggesting that the specialization occurred in phonetic processing prior to categorization.


Subject(s)
Multilingualism , Phonetics , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Audiometry, Speech , Cues , Humans , Young Adult
16.
J Acoust Soc Am ; 130(3): 1653-62, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21895102

ABSTRACT

Previous work has shown that the intelligibility of speech in noise is degraded if the speaker and listener differ in accent, in particular when there is a disparity between native (L1) and nonnative (L2) accents. This study investigated how this talker-listener interaction is modulated by L2 experience and accent similarity. L1 Southern British English listeners, L1 French listeners with varying L2 English experience, and French-English bilinguals were tested on the recognition of English sentences spoken with a range of accents (French, Korean, Northern Irish, and Southern British English) and mixed into speech-shaped noise. The results demonstrated clear interactions of accent and experience, with the least experienced French speakers being most accurate with French-accented English, but more experienced listeners being most accurate with the L1 Southern British English accent. An acoustic similarity metric was applied to the speech productions of the talkers and the listeners, and significant correlations were obtained between accent similarity and sentence intelligibility for pairs of individuals. Overall, the results suggest that L2 experience affects talker-listener accent interactions, altering both the intelligibility of different accents and the selectivity of accent processing.
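The speech-shaped noise masker can be generated by imposing the long-term average magnitude spectrum of the speech material on random-phase noise. A minimal FFT-based sketch of that general technique follows; it is not necessarily the authors' exact procedure.

# Sketch: generate speech-shaped noise by giving random-phase noise the
# long-term average magnitude spectrum of a speech recording. General
# technique only; not necessarily the authors' exact procedure.
import numpy as np

def speech_shaped_noise(speech: np.ndarray, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    magnitude = np.abs(np.fft.rfft(speech))       # long-term spectral shape
    phase = np.exp(2j * np.pi * rng.random(magnitude.shape))
    noise = np.fft.irfft(magnitude * phase, n=len(speech))
    # Match the RMS level of the original speech.
    noise *= np.sqrt(np.mean(speech ** 2) / np.mean(noise ** 2))
    return noise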


Subject(s)
Multilingualism , Noise/adverse effects , Perceptual Masking , Recognition, Psychology , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Audiometry, Speech , Female , Humans , Male , Middle Aged , Young Adult
17.
J Acoust Soc Am ; 128(3): 1357-65, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20815470

ABSTRACT

Previous work has shown that accents affect speech recognition accuracy in noise, with intelligibility being modulated by the similarity between the talkers' and listeners' accents, particularly when they have different L1s. The present study examined the contribution of prosody to recognizing native (L1) and non-native (L2) speech in noise, and how this is affected by the listener's L2 experience. A group of monolingual English listeners and two groups of French listeners with varying L2 English experience were presented with English sentences produced by L1 and L2 (French) speakers. The stimuli were digitally processed to exchange the pitch and segment durations between recordings of the same sentences produced by different speakers (e.g., imposing French-accented prosody onto recordings made by English speakers). The results revealed that English listeners were more accurate at recognizing L1 English with English prosody, the inexperienced French listeners were more accurate at recognizing French-accented speech with French prosody, and the experienced French listeners varied in the cues that they used depending on the noise level, showing more flexibility of processing. The use of prosodic cues in noise thus appears to be modulated by language experience and to vary according to listening context.
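Pitch transplantation of this kind can be scripted with Praat's Manipulation objects, accessible from Python through the parselmouth library. The sketch below outlines replacing one recording's pitch contour with another's (durations would be exchanged analogously via a DurationTier); the file names are hypothetical, and this is an outline of the general technique rather than the authors' processing chain.

# Sketch: impose the pitch contour of one recording onto another using
# Praat Manipulation objects via parselmouth. Outline of the general
# technique; file names are hypothetical.
import parselmouth
from parselmouth.praat import call

donor = parselmouth.Sound("french_accented.wav")      # hypothetical files
recipient = parselmouth.Sound("native_english.wav")

# Extract the donor's pitch tier (time step 0.01 s, pitch range 75-600 Hz).
donor_manip = call(donor, "To Manipulation", 0.01, 75, 600)
pitch_tier = call(donor_manip, "Extract pitch tier")

# Replace the recipient's pitch tier and resynthesise.
recip_manip = call(recipient, "To Manipulation", 0.01, 75, 600)
call([pitch_tier, recip_manip], "Replace pitch tier")
hybrid = call(recip_manip, "Get resynthesis (overlap-add)")
hybrid.save("english_with_french_prosody.wav", "WAV")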


Subject(s)
Multilingualism , Noise , Perceptual Masking , Recognition, Psychology , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Audiometry, Speech , Cues , Female , Humans , Male , Pitch Perception , Time Factors , Young Adult
18.
J Am Pharm Assoc (2003) ; 50(3): 379-83, 2010.
Article in English | MEDLINE | ID: mdl-20452912

ABSTRACT

OBJECTIVE: To evaluate expanded pharmacy services designed to improve medication therapy management for hospice care in rural Minnesota.

METHODS: Deidentified data were obtained from records kept by the study pharmacy as part of its normal operations. In-depth interviews were conducted with key personnel from the pharmacy and from each hospice care organization to identify overall themes based on their experiences. Descriptive analysis was conducted to summarize the findings. Information gleaned from the interviews was documented and themes were identified; these themes provide insight for those who may wish to adopt this program for their patient populations.

RESULTS: At initial enrollment into hospice care, 85% of the patients received at least one recommendation related to their medication therapy. During patients' enrollment in hospice care, the most common types of problems addressed through pharmacist consults were symptom control (65%), followed by dosage form (15%), medication management (12%), and adverse effect control (8%).

CONCLUSION: Implementation and evaluation of this program showed that the structures and processes used were sound and could be transferred to other patient populations. Outcomes from the program were favorable from practitioner, organization, and patient care perspectives.


Subject(s)
Cooperative Behavior , Hospice Care/organization & administration , Medication Therapy Management/organization & administration , Rural Health Services/organization & administration , Drug Utilization , Humans , Minnesota , Patient Care Team/organization & administration
19.
J Cogn Neurosci ; 22(6): 1319-32, 2010 Jun.
Article in English | MEDLINE | ID: mdl-19445609

ABSTRACT

Foreign-language learning is a prime example of a task that entails perceptual learning. The correct comprehension of foreign-language speech requires the correct recognition of speech sounds. The most difficult speech-sound contrasts for foreign-language learners often are the ones that have multiple phonetic cues, especially if the cues are weighted differently in the foreign and native languages. The present study aimed to determine whether non-native-like cue weighting could be changed by using phonetic training. Before the training, we compared the use of spectral and duration cues of English /i/ and /I/ vowels (e.g., beat vs. bit) between native Finnish and English speakers. In Finnish, duration is used phonologically to separate short and long phonemes, and therefore Finns were expected to weight duration cues more than native English speakers. The cross-linguistic differences and training effects were investigated with behavioral and electrophysiological methods, in particular by measuring the MMN brain response that has been used to probe long-term memory representations for speech sounds. The behavioral results suggested that before the training, the Finns indeed relied more on duration in vowel recognition than the native English speakers did. After the training, however, the Finns were able to use the spectral cues of the vowels more reliably than before. Accordingly, the MMN brain responses revealed that the training had enhanced the Finns' ability to preattentively process the spectral cues of the English vowels. This suggests that as a result of training, plastic changes had occurred in the weighting of phonetic cues at early processing stages in the cortex.
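Relative cue weighting of this kind is often estimated by regressing listeners' identification responses on the standardized spectral and duration values of the stimuli, with the fitted coefficients indexing the weights. A generic analysis sketch follows; it is not the authors' procedure.

# Sketch: estimate relative spectral versus duration cue weights from
# /i/-/I/ identification responses with logistic regression. Generic
# analysis sketch, not the authors' procedure; standardizing the cues
# makes the fitted coefficients comparable as weights.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def relative_spectral_weight(spectral, duration, responses) -> float:
    """responses: 1 for /i/ judgements, 0 for /I/; one value per trial."""
    X = StandardScaler().fit_transform(np.column_stack([spectral, duration]))
    model = LogisticRegression().fit(X, np.asarray(responses))
    w_spec, w_dur = np.abs(model.coef_[0])
    return w_spec / (w_spec + w_dur)

A relative spectral weight near 1 indicates reliance on vowel quality, while a value near 0 indicates reliance on duration, so training-related change can be read directly off this index.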


Subject(s)
Cerebral Cortex/physiology , Language , Learning/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Adolescent , Adult , Analysis of Variance , Brain Mapping , Cues , Electroencephalography , Female , Humans , Male , Multilingualism , Signal Processing, Computer-Assisted
20.
J Acoust Soc Am ; 126(2): 866-77, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19640051

ABSTRACT

This study investigated whether individuals with small and large native-language (L1) vowel inventories learn second-language (L2) vowel systems differently, in order to better understand how L1 categories interfere with new vowel learning. Listener groups whose L1 was Spanish (5 vowels) or German (18 vowels) were given five sessions of high-variability auditory training for English vowels, after having been matched on pre-test English vowel identification accuracy. Listeners were tested before and after training in terms of their identification accuracy for English vowels, the assimilation of these vowels into their L1 vowel categories, and their best exemplars for English (i.e., a perceptual vowel space map). The results demonstrated that the Germans improved more than the Spanish speakers, despite the Germans' more crowded L1 vowel space. A subsequent experiment demonstrated that the Spanish listeners were able to improve as much as the German group after an additional ten sessions of training, and that both groups retained this learning. The findings suggest that a larger vowel category inventory may facilitate new learning, and they support the hypothesis that auditory training improves identification by making the application of existing categories to L2 phonemes more automatic and efficient.


Subject(s)
Language , Learning , Multilingualism , Phonetics , Adult , Analysis of Variance , Cluster Analysis , Humans , Language Tests , Recognition, Psychology , Retention, Psychology , Speech , Speech Perception , Time Factors , Young Adult