Results 1 - 20 of 58
1.
J Acoust Soc Am ; 107(5 Pt 1): 2697-703, 2000 May.
Article in English | MEDLINE | ID: mdl-10830391

ABSTRACT

This study examined neurophysiologic correlates of the perception of native and nonnative phonetic categories. Behavioral and electrophysiologic responses were obtained from Hindi and English listeners in response to a stimulus continuum of naturally produced bilabial CV stimuli that differed in VOT from -90 to 0 ms. These speech sounds constitute phonemically relevant categories in Hindi but not in English. As expected, the native Hindi listeners identified the stimuli as belonging to two distinct phonetic categories (/ba/ and /pa/) and were easily able to discriminate a stimulus pair across these categories. On the other hand, English listeners discriminated the same stimulus pair at a chance level. In the electrophysiologic experiment, N1 and MMN cortical evoked potentials (considered neurophysiologic indices of stimulus processing) were measured. The changes in N1 latency, which reflected the duration of pre-voicing across the stimulus continuum, were not significantly different for Hindi and English listeners. On the other hand, in response to the /ba/-/pa/ stimulus contrast, a robust MMN was seen only in Hindi listeners and not in English listeners. These results suggest that neurophysiologic levels of stimulus processing reflected by the MMN and N1 are differentially altered by linguistic experience.
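The MMN referred to in this and several later abstracts is conventionally computed as a deviant-minus-standard difference wave from an oddball sequence. A minimal sketch of that computation follows; the epoch counts, sampling, and variable names are hypothetical and are not the authors' analysis pipeline.

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs):
    """Conventional MMN estimate: average deviant ERP minus average standard ERP.

    Each input is an array of shape (n_trials, n_samples) of baseline-corrected
    EEG epochs time-locked to stimulus onset.  The MMN appears as a negative
    deflection in the difference wave roughly 150-250 ms after the deviance.
    """
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

# Hypothetical oddball run: 400 standard (/ba/) and 80 deviant (/pa/) epochs,
# 256 samples each (placeholder noise stands in for recorded EEG).
rng = np.random.default_rng(0)
standards = rng.normal(size=(400, 256))
deviants = rng.normal(size=(80, 256))
difference_wave = mismatch_negativity(standards, deviants)
```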


Subject(s)
Cerebral Cortex/physiology; Evoked Potentials/physiology; Language; Speech Perception/physiology; Cross-Cultural Comparison; Humans; Phonetics; Speech Discrimination Tests
3.
Ann Otol Rhinol Laryngol Suppl ; 185: 67-8, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11141010

ABSTRACT

To assess whether more channels are needed to understand speech in noise than in quiet, we processed speech in a manner similar to that of spectral peak-like cochlear implant processors and presented it at a +2-dB signal-to-noise ratio to normal-hearing listeners for identification. The number of analysis filters varied from 8 to 16, and the number of maximum channel amplitudes selected in each cycle varied from 2 to 16. The results show that more channels are needed to understand speech in noise than in quiet, and that high levels of speech understanding can be achieved with 12 channels. Selecting more than 12 channel amplitudes out of 16 channels did not yield significant improvements in recognition performance.
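The "spectral peak"-style processing described here selects, on each update cycle, the largest channel amplitudes from the full set of analysis filters. A minimal sketch of that n-of-m selection step; the filter outputs and the 6-of-16 choice below are illustrative only.

```python
import numpy as np

def select_n_of_m(channel_amplitudes, n):
    """Spectral-peak style selection for one update cycle: keep the n largest
    of the m channel amplitudes and zero the remaining channels."""
    amps = np.asarray(channel_amplitudes, dtype=float)
    keep = np.argsort(amps)[-n:]          # indices of the n largest channels
    selected = np.zeros_like(amps)
    selected[keep] = amps[keep]
    return selected

# Hypothetical cycle: 16 analysis filters, 6 maxima retained.
cycle = np.abs(np.random.default_rng(1).normal(size=16))
print(select_n_of_m(cycle, 6))
```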


Subject(s)
Cochlear Implants; Hearing/physiology; Speech Perception; Adult; Equipment Design; Humans; Middle Aged; Noise; Signal Processing, Computer-Assisted
4.
Ear Hear ; 21(6): 590-6, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11132785

ABSTRACT

OBJECTIVE: The aims of this study were 1) to determine the number of channels of stimulation needed by normal-hearing adults and children to achieve a high level of word recognition and 2) to compare the performance of normal-hearing children and adults listening to speech processed into 6 to 20 channels of stimulation with the performance of children who use the Nucleus 22 cochlear implant. DESIGN: In Experiment 1, the words from the Multisyllabic Lexical Neighborhood Test (MLNT) were processed into 6 to 20 channels and output as the sum of sine waves at the center frequency of the analysis bands. The signals were presented to normal-hearing adults and children for identification. In Experiment 2, the wideband recordings of the MLNT words were presented to early-implanted and late-implanted children who used the Nucleus 22 cochlear implant. RESULTS: Experiment 1: Normal-hearing children needed more channels of stimulation than adults to recognize words. Ten channels allowed 99% correct word recognition for adults; 12 channels allowed 92% correct word recognition for children. Experiment 2: The average level of intelligibility for both early- and late-implanted children was equivalent to that found for normal-hearing adults listening to four to six channels of stimulation. The best intelligibility for implanted children was equivalent to that found for normal-hearing adults listening to six channels of stimulation. The distribution of scores for early- and late-implanted children differed. Nineteen percent of the late-implanted children achieved scores below that allowed by a 6-channel processor. None of the early-implanted children fell into this category. CONCLUSIONS: The average implanted child must deal with a signal that is significantly degraded. This is likely to prolong the period of language acquisition. The period could be significantly shortened if implants were able to deliver at least eight functional channels of stimulation. Twelve functional channels of stimulation would provide signals near the intelligibility of wideband signals in quiet.
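Sine-wave resynthesis of this kind requires a center frequency for each analysis band. A small sketch, assuming contiguous, logarithmically spaced bands over a nominal 300-5500 Hz speech range; the band edges and spacing are assumptions, as the abstract does not give them.

```python
import numpy as np

def log_spaced_band_centers(n_channels, lo_hz=300.0, hi_hz=5500.0):
    """Geometric center frequencies of n_channels contiguous, log-spaced
    analysis bands spanning lo_hz to hi_hz."""
    edges = np.geomspace(lo_hz, hi_hz, n_channels + 1)
    return np.sqrt(edges[:-1] * edges[1:])   # geometric mean of each band's edges

# Center frequencies for 6- and 20-channel processors.
print(np.round(log_spaced_band_centers(6)))
print(np.round(log_spaced_band_centers(20)))
```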


Subject(s)
Cochlear Implants; Deafness/rehabilitation; Speech Perception/physiology; Adult; Child, Preschool; Deafness/physiopathology; Equipment Design; Humans; Speech Intelligibility
5.
J Acoust Soc Am ; 108(6): 3030-5, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11144595

ABSTRACT

Auditory evoked potential (AEP) correlates of the neural representation of stimuli along a /ga/-/ka/ and a /ba/-/pa/ continuum were examined to determine whether the voice-onset time (VOT)-related change in the N1 onset response from a single- to a double-peaked component is a reliable indicator of the perception of voiced and voiceless sounds. Behavioral identification results from ten subjects revealed a mean category boundary at a VOT of 46 ms for the /ga/-/ka/ continuum and at a VOT of 27.5 ms for the /ba/-/pa/ continuum. In the same subjects, electrophysiologic recordings revealed that a single N1 component was seen for stimuli with VOTs of 30 ms and less, and two components (N1' and N1) were seen for stimuli with VOTs of 40 ms and more for both continua. That is, the change in N1 morphology (from single- to double-peaked) coincided with the change in perception from voiced to voiceless for stimuli from the /ba/-/pa/ continuum, but not for stimuli from the /ga/-/ka/ continuum. The results of this study show that N1 morphology does not reliably predict phonetic identification of stimuli varying in VOT. These findings also suggest that the previously reported appearance of a "double-peak" onset response in aggregate recordings from the auditory cortex does not indicate a cortical correlate of the perception of voicelessness.


Subject(s)
Attention/physiology; Auditory Cortex/physiology; Evoked Potentials, Auditory/physiology; Phonetics; Speech Acoustics; Adult; Electroencephalography; Female; Humans; Male; Sound Spectrography
6.
J Speech Lang Hear Res ; 43(4): 989-96, 2000 Aug.
Article in English | MEDLINE | ID: mdl-11386484

ABSTRACT

Listeners judged the dissimilarity of pairs of synthesized nasal voices that varied on 3 dimensions. Separate nonmetric multidimensional scaling (MDS) solutions were calculated for each listener and the group. Similar 3-dimensional solutions were derived for the group and each of the listeners, with the group MDS solution accounting for 83% of the total variance in listeners' judgments. Dimension 1 ("Nasality") accounted for 54% of the variance, Dimension 2 ("Loudness") for 18% of the variance, and Dimension 3 ("Pitch") for 11% of the variance. The 3 dimensions were significantly and positively correlated with objective measures of nasalization, intensity, and fundamental frequency. The results of this experiment are discussed in relation to other MDS studies of voice perception, and there is a discussion of methodological issues for future research.
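The group analysis here rests on nonmetric multidimensional scaling of pairwise dissimilarity judgments, with the resulting dimensions interpreted by correlating them against objective acoustic measures. A minimal sketch of that workflow using scikit-learn; the dissimilarity values, stimulus count, and acoustic measure below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical data: a symmetric matrix of pairwise dissimilarity ratings
# for 12 synthesized voices (values and stimulus count are illustrative).
rng = np.random.default_rng(2)
d = rng.uniform(1, 7, size=(12, 12))
dissim = (d + d.T) / 2.0
np.fill_diagonal(dissim, 0.0)

# Nonmetric (ordinal) MDS into 3 dimensions, as in the group solution.
mds = MDS(n_components=3, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)          # (12, 3) stimulus configuration

# Interpreting a dimension: correlate it with an objective acoustic measure,
# e.g. a (hypothetical) per-stimulus nasalization index.
nasalization = rng.uniform(0.0, 1.0, size=12)
r = np.corrcoef(coords[:, 0], nasalization)[0, 1]
print(round(r, 2))
```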


Subject(s)
Voice Quality; Humans; Judgment; Phonetics; Pilot Projects; Random Allocation; Reproducibility of Results; Speech Perception; Speech, Alaryngeal
7.
J Acoust Soc Am ; 106(2): 1078-83, 1999 Aug.
Article in English | MEDLINE | ID: mdl-10462812

ABSTRACT

The goal of this study was to examine the neural encoding of voice-onset time distinctions that indicate the phonetic categories /da/ and /ta/ for human listeners. Cortical auditory evoked potentials (CAEP) were measured in conjunction with behavioral perception of a /da/-/ta/ continuum. Sixteen subjects participated in identification and discrimination experiments. A sharp category boundary was revealed between /da/ and /ta/ around the same location for all listeners. Subjects' discrimination of a VOT change of equal magnitude was significantly more accurate across the /da/-/ta/ categories than within the /ta/ category. Neurophysiologic correlates of VOT encoding were investigated using the N1 CAEP, which reflects sensory encoding of stimulus features, and the MMN CAEP, which reflects sensory discrimination. The MMN elicited by the across-category pair was larger and more robust than the MMN that occurred in response to the within-category pair. Distinct changes in N1 morphology were related to VOT encoding. For stimuli that were behaviorally identified as /da/, a single negativity (N1) was apparent; however, for stimuli identified as /ta/, two distinct negativities (N1 and N1') were apparent. Thus the enhanced MMN responses and the discontinuity in N1 morphology observed in the region of the /da/-/ta/ phonetic boundary appear to provide neurophysiologic correlates of categorical perception for VOT.


Subject(s)
Cerebral Cortex/physiology; Evoked Potentials, Auditory/physiology; Speech Perception/physiology; Voice/physiology; Adult; Electrophysiology; Female; Humans; Male; Phonetics; Time Factors
8.
J Speech Lang Hear Res ; 42(1): 42-55, 1999 Feb.
Article in English | MEDLINE | ID: mdl-10025542

ABSTRACT

Several authors have evaluated consonant-to-vowel ratio (CVR) enhancement as a means to improve speech recognition in listeners with hearing impairment, with the intention of incorporating this approach into emerging amplification technology. Unfortunately, most previous studies have enhanced CVRs by increasing consonant energy, thus possibly confounding CVR effects with consonant audibility. In this study, we held consonant audibility constant by reducing vowel transition and steady-state energy rather than increasing consonant energy. Performance-by-intensity (PI) functions were obtained for recognition of voiceless stop consonants (/p/, /t/, /k/) presented in isolation (burst and aspiration digitally separated from the vowel) and for consonant-vowel syllables, with re-addition of the vowel /a/. There were three CVR conditions: normal CVR, vowel reduction by 6 dB, and vowel reduction by 12 dB. Testing was conducted in broadband noise fixed at 70 dB SPL and at 85 dB SPL. Six adults with sensorineural hearing impairment and two adults with normal hearing served as listeners. Results indicated that CVR enhancement did not improve identification performance when consonant audibility was held constant, except at the higher noise level for one listener with hearing impairment. The re-addition of the vowel energy to the isolated consonant did, however, produce large and significant improvements in phoneme identification.
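The manipulation described here lowers the vowel level by a fixed number of decibels while leaving the consonant unchanged, which raises the consonant-to-vowel ratio without altering consonant audibility. A small sketch of that scaling and of computing the CVR from segment RMS levels; the segment lengths and amplitudes are hypothetical.

```python
import numpy as np

def rms_db(x):
    """RMS level of a signal segment in dB (arbitrary reference)."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))))

def attenuate(x, reduction_db):
    """Scale a segment down by reduction_db decibels (6 dB is roughly a factor of 2)."""
    return x * 10.0 ** (-reduction_db / 20.0)

# Hypothetical /ta/ token split into consonant (burst + aspiration) and vowel.
rng = np.random.default_rng(3)
consonant = rng.normal(0.0, 0.05, 800)
vowel = rng.normal(0.0, 0.20, 4000)

for reduction in (0, 6, 12):
    cvr_db = rms_db(consonant) - rms_db(attenuate(vowel, reduction))
    print(reduction, round(cvr_db, 1))      # CVR rises as the vowel is reduced
```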


Subject(s)
Hearing Loss, Sensorineural; Speech Perception/physiology; Adult; Female; Hearing Aids; Hearing Loss, Sensorineural/therapy; Humans; Male; Middle Aged; Phonetics; Sound Spectrography; Time Factors
9.
J Acoust Soc Am ; 104(6): 3583-5, 1998 Dec.
Article in English | MEDLINE | ID: mdl-9857516

ABSTRACT

Sentences were processed through simulations of cochlear-implant signal processors with 6, 8, 12, 16, and 20 channels and were presented to normal-hearing listeners at +2 dB S/N and at -2 dB S/N. The signal-processing operations included bandpass filtering, rectification, and smoothing of the signal in each band, estimation of the rms energy of the signal in each band (computed every 4 ms), and generation of sinusoids with frequencies equal to the center frequencies of the bands and amplitudes equal to the rms levels in each band. The sinusoids were summed and presented to listeners for identification. At issue was the number of channels necessary to reach maximum performance on tests of sentence understanding. At +2 dB S/N, the performance maximum was reached with 12 channels of stimulation. At -2 dB S/N, the performance maximum was reached with 20 channels of stimulation. These results, in combination with the outcome that in quiet, asymptotic performance is reached with five channels of stimulation, demonstrate that more channels are needed in noise than in quiet to reach a high level of sentence understanding and that, as the S/N becomes poorer, more channels are needed to achieve a given level of performance.
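The abstract spells out the processing chain: a bandpass filter bank, rectification and smoothing of each band, an rms estimate every 4 ms, and resynthesis as summed sinusoids at the band center frequencies. A minimal sketch of that chain in Python/SciPy; the filter orders, band edges, and smoothing cutoff are assumptions for illustration, not the authors' exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def sine_vocoder(speech, fs, n_channels, lo=300.0, hi=5500.0,
                 frame_ms=4.0, smooth_hz=400.0):
    """Sine-wave vocoder sketch: bandpass filter bank, rectification and
    low-pass smoothing of each band, per-frame RMS (every 4 ms), and
    resynthesis as summed sinusoids at the band center frequencies."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    frame = int(round(fs * frame_ms / 1000.0))
    n = len(speech)
    t = np.arange(n) / fs
    out = np.zeros(n)
    smooth = butter(2, smooth_hz, btype="low", fs=fs, output="sos")

    for k in range(n_channels):
        band_sos = butter(4, [edges[k], edges[k + 1]], btype="bandpass",
                          fs=fs, output="sos")
        band = sosfilt(band_sos, speech)              # 1. bandpass filtering
        env = sosfilt(smooth, np.abs(band))           # 2. rectify and smooth
        # 3. RMS of the smoothed band signal, updated every `frame` samples
        amp = np.zeros(n)
        for i in range(0, n, frame):
            amp[i:i + frame] = np.sqrt(np.mean(env[i:i + frame] ** 2))
        # 4. sinusoidal carrier at the band center frequency, scaled by RMS
        fc = np.sqrt(edges[k] * edges[k + 1])
        out += amp * np.sin(2 * np.pi * fc * t)
    return out

# Hypothetical input: 1 s of noise standing in for a sentence at 16 kHz.
fs = 16000
speech = np.random.default_rng(4).normal(size=fs)
processed = sine_vocoder(speech, fs, n_channels=12)
```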


Subject(s)
Cochlear Implants; Hearing/physiology; Noise/adverse effects; Speech Perception/physiology; Acoustic Stimulation/instrumentation; Adult; Equipment Design; Humans; Middle Aged
10.
Ear Hear ; 19(6): 481-4, 1998 Dec.
Article in English | MEDLINE | ID: mdl-9867296

ABSTRACT

OBJECTIVE: To compare the recognition of vowels and sentences in noise by cochlear implant patients using a 6-channel, continuous interleaved sampling (CIS) processor and by normal-hearing subjects listening to speech processed in the manner of the implant processor and output as six amplitude-modulated sine waves. DESIGN: Subjects, 11 normal-hearing listeners and 7 cochlear implant patients, were presented natural vowels produced by men, women, and girls in /hVd/ context and sentences from the Hearing In Noise Test (HINT) lists at +15, +10, and +5 dB signal-to-noise ratio (SNR) for identification. Stimuli for the normal-hearing subjects were preprocessed through a simulation of a 6-channel implant processor and were output as the sum of sinusoids at the center frequencies of the analysis filters. RESULTS: For the multitalker vowels, four of the seven patients achieved scores within +/-1 standard deviation of the mean for normal-hearing listeners at +15 and +10 dB SNR. At +5 dB SNR, three patients achieved scores within +/-1 standard deviation of the mean for the normal-hearing listeners. For the HINT sentences, four of seven patients achieved scores within +/-1 standard deviation of the mean for the normal-hearing listeners at +15 dB and at +10 dB SNR, and two achieved scores within that range at +5 dB SNR. CONCLUSION: Our results extend the range of stimulus conditions, from quiet to modest amounts of noise, in which the CIS strategy allows the best performing patients to extract most, if not all, of the information available to normal-hearing subjects listening to speech processed into six channels.
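Presenting stimuli at +15, +10, and +5 dB SNR amounts to scaling the masker relative to the speech so that their RMS levels differ by the target amount. A small sketch of that mixing step; the signals below are placeholders.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the masker so the speech-to-noise ratio of the mixture is snr_db
    (levels taken as RMS over the full signals)."""
    gain = np.sqrt(np.mean(speech ** 2) / np.mean(noise ** 2)) / 10.0 ** (snr_db / 20.0)
    return speech + gain * noise

# Hypothetical stimuli: the same sentence mixed at +15, +10, and +5 dB SNR.
rng = np.random.default_rng(5)
sentence, masker = rng.normal(size=16000), rng.normal(size=16000)
stimuli = {snr: mix_at_snr(sentence, masker, snr) for snr in (15, 10, 5)}
```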


Subject(s)
Cochlear Implantation; Deafness/surgery; Hearing/physiology; Noise/adverse effects; Speech Perception/physiology; Adult; Aged; Female; Humans; Infant; Male
11.
J Acoust Soc Am ; 104(1): 511-7, 1998 Jul.
Article in English | MEDLINE | ID: mdl-9670542

ABSTRACT

The goals of this study were (i) to assess the replicability of the "perceptual magnet effect" [Iverson and Kuhl, J. Acoust. Soc. Am. 97(1), 553-561 (1995)] and (ii) to investigate neurophysiologic processes underlying the perceptual magnet effect by using the mismatch negativity (MMN) auditory evoked potential. A stimulus continuum from /i/ to /e/ was synthesized by varying F1 and F2 in equal mel steps. Ten adult subjects identified and rated the goodness of the stimuli. Results revealed that the prototype was the stimulus with the lowest F1 and highest F2 values and that the nonprototype stimulus was close to the category boundary. Subjects discriminated stimulus pairs differing in equal mel steps. The results indicated that discrimination accuracy was not significantly different in the prototype and the nonprototype condition. That is, no perceptual magnet effect was observed. The MMN evoked potential (a preattentive, neurophysiologic index of auditory discrimination) revealed that, despite equal mel differences between the stimulus pairs, the MMN was largest for the prototype pair (i.e., the pair that had the lowest F1 and highest F2 values). Therefore, the MMN appears to be sensitive to within-category acoustic differences. Taken together, the behavioral and electrophysiologic results indicate that discrimination of stimulus pairs near a prototype is based on the auditory structure of the stimulus pairs.
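Synthesizing the continuum "in equal mel steps" means interpolating F1 and F2 on the mel scale rather than in hertz. A minimal sketch, assuming the common 2595*log10(1 + f/700) mel formula and illustrative /i/ and /e/ formant endpoints; the abstract specifies neither.

```python
import numpy as np

def hz_to_mel(f_hz):
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

def formant_continuum(start_hz, end_hz, n_steps):
    """Formant values spaced in equal mel steps between two endpoints."""
    mels = np.linspace(hz_to_mel(start_hz), hz_to_mel(end_hz), n_steps)
    return mel_to_hz(mels)

# Hypothetical 13-step /i/-to-/e/ continuum: F1 rises while F2 falls.
f1_values = formant_continuum(270.0, 530.0, 13)
f2_values = formant_continuum(2290.0, 1840.0, 13)
```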


Subject(s)
Evoked Potentials, Auditory; Speech Perception/physiology; Speech/physiology; Adult; Female; Humans; Male; Phonetics; Speech Acoustics; Time Factors
12.
Ear Hear ; 19(2): 162-6, 1998 Apr.
Article in English | MEDLINE | ID: mdl-9562538

ABSTRACT

OBJECTIVE: To compare the vowel and consonant identification ability of cochlear implant patients using a 6-channel continuous interleaved sampling (CIS) processor and of normal-hearing subjects using simulations of processors with two to nine channels. DESIGN: Subjects, 10 normal-hearing listeners and seven cochlear implant patients, were presented synthetic vowels in /bVt/ context, natural vowels produced by men, women, and girls in /hVd/ context, and consonants in /aCa/ context for identification. Stimuli for the normal-hearing subjects were pre-processed through simulations of implant processors with two to nine channels and were output as the sum of sinusoids at the center frequencies of the analysis filters. RESULTS: Five implant patients' scores fell within the range of normal performance with a 6-channel processor when the patients were tested with synthetic vowels. Four patients' scores fell within the range of normal with a 6-channel processor when the patients were tested with multitalker vowels. Five patients' scores fell within the range of normal for a 6-channel processor for the consonant feature "place of articulation." CONCLUSION: Signal processing technology for cochlear implants has matured sufficiently to allow some patients who use CIS processors and a small number of monopolar electrodes to achieve scores on tests of speech identification that are within the range of scores established by normal-hearing subjects listening to speech processed through a small number of channels.


Subject(s)
Cochlear Implantation; Deafness/surgery; Hearing/physiology; Speech Perception; Adult; Aged; Female; Humans; Male; Middle Aged; Phonetics
13.
J Acoust Soc Am ; 103(2): 1141-9, 1998 Feb.
Article in English | MEDLINE | ID: mdl-9479767

ABSTRACT

Five patients who used a six-channel, continuous interleaved sampling (CIS) cochlear implant were presented, in two experiments, with vowels from a large sample of men, women, boys, and girls for identification. At issue in the first experiment was whether vowels from one speaker group, i.e., men, were more identifiable than vowels from other speaker groups. At issue in the second experiment was the role of the fifth and sixth channels in the identification of vowels from the different speaker groups. It was found in experiment 1 that (i) the vowels produced by men were easier to identify than vowels produced by any of the other speaker groups, (ii) vowels from women and boys were more difficult to identify than vowels from men but less difficult than vowels from girls, and (iii) vowels from girls were more difficult to identify than vowels from all other groups. In experiment 2, removal of channels 5 and 6 from the processor impaired the identification of vowels produced by women, boys, and girls but did not impair the identification of vowels produced by men. The results of experiment 1 demonstrate that scores on tests of vowels produced by men overestimate the ability of patients to recognize vowels in the broader context of multi-talker communication. The results of experiment 2 demonstrate that channels 5 and 6 become more important for vowel recognition as the second formants of the speakers increase in frequency.


Subject(s)
Cochlear Implantation; Deafness/rehabilitation; Speech Perception; Adult; Age Factors; Aged; Child; Child, Preschool; Female; Humans; Male; Middle Aged; Sex Factors
14.
Am J Otol ; 18(6 Suppl): S113-4, 1997 Nov.
Article in English | MEDLINE | ID: mdl-9391623

ABSTRACT

OBJECTIVE: One goal was to determine for normal-hearing listeners the number of channels of stimulation necessary to achieve a high level of speech understanding. The second goal was to determine whether patients with a six-channel cochlear implant could achieve the same level of speech understanding as normal-hearing subjects listening to speech processed through six channels. METHODS: Speech signals were processed, for normal-hearing listeners, either in the manner of cochlear-implant processors with 2-9 fixed channels, or in the manner of a processor which picked, on each update cycle, 6 of 16 channels. RESULTS: For the most difficult test material eight fixed channels were necessary to achieve the level of performance achieved with the "n of m" processor. Some cochlear implant patients with a six-channel continuous interleaved sampling processor achieved the same level of performance as normal-hearing subjects listening to speech via six channels. CONCLUSIONS: A signal processor for cochlear implants with eight channels should produce the same level of intelligibility as a processor with many more channels. Processors using continuous interleaved sampling technology can provide a signal which results in the same level of speech understanding as normal, acoustic stimulation.


Subject(s)
Cochlear Implantation; Deafness/surgery; Hearing/physiology; Speech Perception; Electric Stimulation/instrumentation; Humans
15.
J Acoust Soc Am ; 102(5 Pt 1): 2993-6, 1997 Nov.
Article in English | MEDLINE | ID: mdl-9373986

ABSTRACT

Normally hearing listeners were presented with vowels, consonants, and sentences for identification through an acoustic simulation of a five-channel cochlear implant with electrodes separated by 4 mm (as in the Ineraid implant). The aim of the experiment was to simulate the effect of depth of electrode insertion on identification accuracy. Insertion depth was simulated by outputting sine waves from each channel of the processor at a frequency determined by the cochlear place of electrodes inserted 22-25 mm into the cochlea. The results indicate that simulated insertion depth had a significant effect on performance. Performance at 22- and 23-mm simulated insertion depths was always poorer than normal, and performance at 25 mm simulated insertion depth was, most generally, the same as normal. It is inferred from these results that, if insertion depth could be unconfounded from other coexisting factors in implant patients, then insertion depth would be found to affect speech identification performance significantly.
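Simulating insertion depth requires mapping each electrode's cochlear place to a characteristic frequency. The abstract does not name the map it used; a common choice is the Greenwood place-frequency function, sketched below under the assumptions of a 35-mm cochlear duct and the 4-mm electrode spacing stated above.

```python
def greenwood_frequency_hz(distance_from_apex_mm):
    """Greenwood (1990) place-frequency map for the human cochlea:
    f = 165.4 * (10 ** (0.06 * x) - 0.88), with x in mm from the apex."""
    return 165.4 * (10.0 ** (0.06 * distance_from_apex_mm) - 0.88)

COCHLEA_LENGTH_MM = 35.0        # assumed length of the basilar membrane

def electrode_place_frequencies(insertion_depth_mm, n_electrodes=5, spacing_mm=4.0):
    """Characteristic frequencies at n_electrodes contacts spaced spacing_mm apart,
    with the most apical contact insertion_depth_mm from the cochlear base."""
    depths = [insertion_depth_mm - k * spacing_mm for k in range(n_electrodes)]
    return [greenwood_frequency_hz(COCHLEA_LENGTH_MM - d) for d in depths]

# Simulated insertion depths of 22-25 mm shift every carrier frequency basally.
for depth_mm in (22, 23, 24, 25):
    print(depth_mm, [round(f) for f in electrode_place_frequencies(depth_mm)])
```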


Subject(s)
Cochlear Implantation; Speech Perception; Electrodes; Female; Humans; Speech Discrimination Tests
16.
J Acoust Soc Am ; 102(1): 581-7, 1997 Jul.
Article in English | MEDLINE | ID: mdl-9228819

ABSTRACT

Vowel recognition was assessed for eight cochlear implant patients who use the Ineraid's six-electrode array. Recognition was tested in three conditions: with the Ineraid after years of experience; with a CIS processor at fitting of the processor; and with the CIS processor after 1 month's experience. At the time of fitting of the CIS processor, vowel recognition was not superior to that with the Ineraid. Recognition improved significantly over the period of a month. At 1 month, performance was significantly better with the CIS processor than with the Ineraid. This outcome is interpreted to mean that remapping of the vowel space is necessary following fitting with the CIS processor and that some of the remapping occurs over a time period of days or weeks, rather than hours. Vowel errors at 1 month could be accounted for by two mechanisms. One is that patients attended to low-frequency channels at the expense of high-frequency channels, or could not use information in high-frequency channels. The second is that, for diphthongs, patients could not detect frequency change over the course of the utterance.


Subject(s)
Speech Perception; Voice Quality; Adult; Aged; Cochlear Implants; Deafness/rehabilitation; Humans; Middle Aged
17.
Ear Hear ; 18(2): 147-55, 1997 Apr.
Article in English | MEDLINE | ID: mdl-9099564

ABSTRACT

OBJECTIVE: To assess changes in speech intelligibility as a function of signal processing strategy and as a function of time for one of the first two Ineraid patients in the United States fitted with a continuous interleaved sampling (CIS) signal processor. DESIGN: In Experiment 1, the patient was fitted with a CIS processor and measures of speech intelligibility were taken over a period of 4 mo. These data were compared with data collected with the Ineraid. In Experiment 2, three new signal processing strategies were tested. Measures of speech intelligibility were taken at fitting and after a week's use of the processor. In Experiment 3, the number of channels in the processor was reduced to 5, 4, and 3. Each processor was tested at fitting and after a week's use of the processor. RESULTS: In Experiment 1, immediately on fitting, the CIS processor produced better speech intelligibility for consonants, vowels, and the CID sentences than did the Ineraid. Performance improved over periods ranging from 1 to 4 mo depending on the test material. In Experiment 2, two processors produced significantly better speech intelligibility than did the other processors. Most generally, performance dropped slightly when a new processor was fitted and then improved over the course of a week. All of the processors produced better speech intelligibility than did the Ineraid. In Experiment 3, five channels allowed levels of performance similar to those allowed by six channels. The effect of four and three channels varied as a function of test material. Four CIS channels allowed better performance than did the four analogue channels of the Ineraid. CONCLUSIONS: We conclude 1) that CIS processors can provide much better speech intelligibility than can the analogue processor of the Ineraid; 2) that many CIS strategies, not just one, will produce better speech intelligibility than will the Ineraid; 3) that for this patient, five channels can allow as high a level of word intelligibility as can six channels; 4) that when the number of CIS and analogue channels is equated (at four), the CIS strategy provides better speech intelligibility than does the Ineraid; and 5) that speech intelligibility with CIS processors improves over periods as short as a week and as long as several months after fitting of the processor.


Subject(s)
Correction of Hearing Impairment; Equipment Design; Hearing Aids; Speech Perception; Aged; Humans; Male
18.
J Acoust Soc Am ; 102(4): 2403-11, 1997 Oct.
Article in English | MEDLINE | ID: mdl-9348698

ABSTRACT

Vowels, consonants, and sentences were processed through software emulations of cochlear-implant signal processors with 2-9 output channels. The signals were then presented, as either the sum of sine waves at the center frequencies of the channels or as the sum of noise bands the width of the channels, to normal-hearing listeners for identification. The results indicate, as previous investigations have suggested, that high levels of speech understanding can be obtained using signal processors with a small number of channels. The number of channels needed for high levels of performance varied with the nature of the test material. For the most difficult material--vowels produced by men, women, and girls--no statistically significant differences in performance were observed when the number of channels was increased beyond 8. For the least difficult material--sentences--no statistically significant differences in performance were observed when the number of channels was increased beyond 5. The nature of the output signal, noise bands or sine waves, made only a small difference in performance. The mechanism mediating the high levels of speech recognition achieved with only a few channels of stimulation may be the same one that mediates the recognition of signals produced by speakers with a high fundamental frequency, i.e., the levels of adjacent channels are used to determine the frequency of the input signal. The results of an experiment in which frequency information was altered but temporal information was not altered indicate that vowel recognition is based on information in the frequency domain even when the number of channels of stimulation is small.
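The noise-band condition differs from the sine-wave condition only in the carrier: each channel envelope modulates band-limited noise spanning the analysis band rather than a tone at its center frequency. A brief sketch of generating such a carrier; the band edges, filter order, and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def noise_band_carrier(band_lo_hz, band_hi_hz, n_samples, fs, rng=None):
    """Carrier for one channel of a noise-band vocoder: white noise band-pass
    filtered to the width of the analysis band.  In the sine-wave condition the
    carrier would instead be a tone at the band's center frequency."""
    if rng is None:
        rng = np.random.default_rng()
    sos = butter(4, [band_lo_hz, band_hi_hz], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, rng.normal(size=n_samples))

# Hypothetical lowest channel of a multi-channel processor at 16 kHz; the channel
# envelope (however derived) would multiply this carrier before summation.
fs = 16000
carrier = noise_band_carrier(300.0, 600.0, fs, fs)
```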


Subject(s)
Cochlear Implants; Noise; Software; Speech Perception/physiology; Adult; Female; Humans; Male; Middle Aged; Phonetics
19.
J Acoust Soc Am ; 100(6): 3825-30, 1996 Dec.
Article in English | MEDLINE | ID: mdl-8969483

ABSTRACT

The perceptual salience of relative spectral change [Lahiri et al., J. Acoust. Soc. Am. 76, 391-404 (1984)] and formant transitions as cues to labial and alveolar/dental place of articulation was assessed in a conflicting cue paradigm. The prototype stimuli were produced by two English speakers. The stimuli with conflicting cues to place of articulation were created by altering the spectra of the signals so that the change in spectral energy from signal onset to voicing onset specified one place of articulation while the formant transitions specified the other place of articulation. Listeners' identification of these stimuli was determined principally by the information from formant transitions. This outcome provides no support for the view that the relative spectral change is a significant perceptual cue to stop consonant place of articulation.


Subject(s)
Lip/physiology; Speech/physiology; Humans; Phonetics; Speech Perception/physiology; Speech Production Measurement
20.
Ear Hear ; 17(4): 308-13, 1996 Aug.
Article in English | MEDLINE | ID: mdl-8862968

ABSTRACT

OBJECTIVE: In Experiment 1 the objective was to determine whether patients who have been implanted with the Ineraid electrode array perform better on tests of consonant identification when signals are processed through a continuous interleaved sampling (CIS) processor than when signals are processed through an analogue (Ineraid) processor. In Experiment 2 the objective was to determine, for patients using the CIS strategy, whether identification accuracy for stop consonant place of articulation could be improved by enhancing differences in the patterns of the signal processor channel outputs. DESIGN: In Experiment 1, 16 consonants were presented in VCV format for identification. In Experiment 1, the CIS patients evidenced difficulty in identifying /p t k/. Therefore, in Experiment 2 the voiceless stop consonants were presented in two stimulus conditions. In one, the stimuli were unfiltered. In the other, the stimuli were individually filtered so as to enhance the differences in channel outputs for /p/, /t/, and /k/. RESULTS: In Experiment 1 the patients performed better with CIS processors than with analogue processors. In Experiment 2 the "enhanced" stimuli were identified with better accuracy than were the unfiltered stimuli. CONCLUSIONS: We confirm that Ineraid patients achieve higher scores on tests of consonant identification when using a CIS processor than when using an analogue processor. Errors in identification of stop consonant place of articulation, when using a CIS processor, are due to the similarity in the patterns of the processor's channel outputs. By showing that consonant intelligibility can be improved by filtering, we show that we have not reached the limit of speech understanding that can be supported by the population of neural elements remaining in our patients' auditory systems.


Subject(s)
Phonetics; Speech Perception; Adult; Aged; Equipment Design; Hearing Aids; Humans; Middle Aged