Results 1 - 20 of 89
1.
Ear Hear ; 26(4): 389-408, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16079634

ABSTRACT

OBJECTIVE: To determine the effects of length of cochlear implant use and other demographic factors on the development of sustained visual attention in prelingually deaf children and to examine the relations between performance on a test of sustained visual attention and audiological outcome measures in this population. DESIGN: A retrospective analysis of data collected before cochlear implantation and over several years after implantation. Two groups of prelingually deaf children, one >6 years old (N = 41) and one <6 years old (N = 47) at testing, were given an age-appropriate Continuous Performance Task (CPT). In both groups, children monitored visually presented numbers for several minutes and responded whenever a designated number appeared. Hit rate, false alarm rate, and signal detection parameters were dependent measures of sustained visual attention. We tested for effects of a number of patient variables on CPT performance. Multiple regression analyses were conducted to determine if CPT scores were related to performance on several audiological outcome measures. RESULTS: In both groups of children, mean CPT performance was low compared with published norms for normal-hearing children, and performance improved as a function of length of cochlear implant use and chronological age. The improvement in performance was manifested as an increase in hit rate and perceptual sensitivity over time. In the younger age group, a greater number of active electrodes predicted better CPT performance. Results from regression analyses indicated a relationship between CPT response criterion and receptive language in the younger age group. However, we failed to uncover any other relations between CPT performance and speech and language outcome measures. CONCLUSIONS: Our findings suggest that cochlear implantation in prelingually deaf children leads to improved performance on a test of sustained visual processing of numbers over 2 or more years of cochlear implant use. 
In preschool-age children who use cochlear implants, individuals who are more conservative responders on the CPT show higher receptive language scores than do individuals with more impulsive response patterns. Theoretical accounts of these findings are discussed, including cross-modal reorganization of visual attention and enhanced phonological encoding of visually presented numbers.
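The signal detection parameters mentioned in this abstract (hit rate, false alarm rate, perceptual sensitivity, and response criterion) can be illustrated with a short sketch. The formulas below are the standard Gaussian equal-variance indices d' and c, which is the conventional analysis for CPT data; the article's exact computation is not given in the abstract, so treat this as an illustration rather than the authors' method:

```python
from statistics import NormalDist

def sdt_indices(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Standard signal-detection indices from a CPT session:
    d' (perceptual sensitivity) and c (response criterion).
    Rates are clamped away from 0 and 1 so the z-transform is defined."""
    clamp = lambda p: min(max(p, 0.01), 0.99)
    z = NormalDist().inv_cdf
    zh, zf = z(clamp(hit_rate)), z(clamp(fa_rate))
    d_prime = zh - zf            # higher = better target/non-target separation
    criterion = -(zh + zf) / 2   # positive = conservative, negative = impulsive
    return d_prime, criterion

# Example: a conservative responder with good sensitivity
d, c = sdt_indices(hit_rate=0.80, fa_rate=0.10)
```

Under this convention, the "more conservative responders" associated with higher receptive language scores are children with positive criterion values (fewer false alarms at the cost of some misses).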


Subject(s)
Attention , Cochlear Implants , Speech Perception/physiology , Visual Perception , Age Factors , Analysis of Variance , Child , Child, Preschool , Cochlear Implantation , Deafness/rehabilitation , Female , Follow-Up Studies , Humans , Lipreading , Male , Retrospective Studies , Task Performance and Analysis
2.
Laryngoscope ; 115(4): 595-600, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15805866

ABSTRACT

OBJECTIVES/HYPOTHESIS: Individual speech and language outcomes of deaf children with cochlear implants (CIs) are quite varied. Individual differences in underlying cognitive functions may explain some of this variance. The current study investigated whether behavioral inhibition skills of deaf children were related to performance on a range of audiologic outcome measures. DESIGN: Retrospective analysis of longitudinal data collected from prelingually and profoundly deaf children who used CIs. METHODS: Behavioral inhibition skills were measured using a visual response delay task that did not require hearing. Speech and language measures were obtained from behavioral tests administered at 1-year intervals of CI use. RESULTS: Female subjects showed higher response delay scores than males. Performance increased with length of CI use. Younger children showed greater improvement in performance as a function of device use than older children. No other subject variable had a significant effect on response delay score. A series of multiple regression analyses revealed several significant relations between delay task performance and open set word recognition, vocabulary, receptive language, and expressive language scores. CONCLUSIONS: The present results suggest that CI experience affects visual information processing skills of prelingually deaf children. Furthermore, the observed pattern of relations suggests that speech and language processing skills are closely related to the development of response delay skills in prelingually deaf children with CIs. These relations may reflect underlying verbal encoding skills, subvocal rehearsal skills, and verbally mediated self-regulatory skills. Clinically, visual response delay tasks may be useful in assessing behavioral and cognitive development in deaf children after implantation.


Subject(s)
Child Behavior/classification , Cochlear Implants , Deafness/surgery , Inhibition, Psychological , Age Factors , Child , Child Development/physiology , Child Language , Child, Preschool , Deafness/psychology , Female , Follow-Up Studies , Humans , Longitudinal Studies , Male , Retrospective Studies , Sex Factors , Speech/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Treatment Outcome , Vocabulary
3.
Ear Hear ; 22(5): 395-411, 2001 Oct.
Article in English | MEDLINE | ID: mdl-11605947

ABSTRACT

OBJECTIVE: The purpose of this study was to examine working memory for sequences of auditory and visual stimuli in prelingually deafened pediatric cochlear implant users with at least 4 yr of device experience. DESIGN: Two groups of 8- and 9-yr-old children, 45 normal-hearing and 45 hearing-impaired users of cochlear implants, completed a novel working memory task requiring memory for sequences of either visual-spatial cues or visual-spatial cues paired with auditory signals. In each sequence, colored response buttons were illuminated either with or without simultaneous auditory presentation of verbal labels (color-names or digit-names). The child was required to reproduce each sequence by pressing the appropriate buttons on the response box. Sequence length was varied and a measure of memory span corresponding to the longest list length correctly reproduced under each set of presentation conditions was recorded. Additional children completed a modified task that eliminated the visual-spatial light cues but that still required reproduction of auditory color-name sequences using the same response box. Data from 37 pediatric cochlear implant users were collected using this modified task. RESULTS: The cochlear implant group obtained shorter span scores on average than the normal-hearing group, regardless of presentation format. The normal-hearing children also demonstrated a larger "redundancy gain" than children in the cochlear implant group; that is, the normal-hearing group displayed better memory for auditory-plus-lights sequences than for the lights-only sequences. Although the children with cochlear implants did not use the auditory signals as effectively as normal-hearing children when visual-spatial cues were also available, their performance on the modified memory task using only auditory cues showed that some of the children were capable of encoding auditory-only sequences at a level comparable with normal-hearing children.
CONCLUSIONS: The finding of smaller redundancy gains from the addition of auditory cues to visual-spatial sequences in the cochlear implant group as compared with the normal-hearing group demonstrates differences in encoding or rehearsal strategies between these two groups of children. Differences in memory span between the two groups even on a visual-spatial memory task suggest that atypical working memory development irrespective of input modality may be present in this clinical population.


Subject(s)
Cochlear Implantation , Deafness/therapy , Memory/physiology , Space Perception/physiology , Vocabulary , Audiometry, Pure-Tone , Child , Female , Humans , Language Tests , Male
4.
Ear Hear ; 22(5): 412-9, 2001 Oct.
Article in English | MEDLINE | ID: mdl-11605948

ABSTRACT

OBJECTIVE: The purpose of this case study was to investigate multimodal perceptual coherence in speech perception in an exceptionally good postlingually deafened cochlear implant user. His ability to perceive sinewave replicas of spoken sentences, and the extent to which he integrated sensory information from multimodal sources, were compared with a group of adult normal-hearing listeners to determine the contribution of natural auditory quality in the use of electrocochlear stimulation. DESIGN: The patient, "Mr. S," transcribed sinewave sentences of natural speech under audio-only (AO), visual-only (VO), and audio-visual (A+V) conditions. His performance was compared with the data collected from 25 normal-hearing adults. RESULTS: Although normal-hearing participants performed better than Mr. S for AO sentences (65% versus 53% syllables correct), Mr. S was superior for VO sentences (43% versus 18%). For A+V sentences, Mr. S's performance was comparable with the normal-hearing group (90% versus 86%). An estimate of the amount of visual enhancement, R, obtained from seeing the talker's face showed that Mr. S derived a larger gain from the additional visual information than the normal-hearing controls (78% versus 59%). CONCLUSIONS: The findings from this case study of an exceptionally good cochlear implant user suggest that he is perceiving the sinewave sentences on the basis of coherent variation from multimodal sensory inputs, and not on the basis of lipreading ability alone. Electrocochlear stimulation is evidently useful in multimodal contexts because it preserves dynamic speech-like variation, despite the absence of speech-like auditory qualities.
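The visual enhancement measure R reported here is, by convention, the fraction of the headroom above audio-only performance that the audiovisual condition actually captures. The abstract does not spell out the formula, so the normalization below is an assumption (the Sumby-and-Pollack-style gain measure); plugging in the abstract's rounded scores approximately reproduces the reported values:

```python
def visual_enhancement(audio_only: float, audio_visual: float) -> float:
    """Relative gain R from seeing the talker's face: the proportion of the
    room for improvement above audio-only that is realized audiovisually.
    Conventional normalization; the article may use unrounded scores."""
    return (audio_visual - audio_only) / (1.0 - audio_only)

r_mr_s = visual_enhancement(0.53, 0.90)      # ~0.79, vs. 78% reported
r_controls = visual_enhancement(0.65, 0.86)  # ~0.60, vs. 59% reported
```

The small discrepancies against the reported 78% and 59% are consistent with the published figures being computed from unrounded syllable scores.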


Subject(s)
Cochlear Implantation , Deafness/therapy , Speech Perception/physiology , Visual Perception/physiology , Adult , Humans , Male
5.
Ear Hear ; 22(3): 236-51, 2001 Jun.
Article in English | MEDLINE | ID: mdl-11409859

ABSTRACT

OBJECTIVE: Although there has been a great deal of recent empirical work and new theoretical interest in audiovisual speech perception in both normal-hearing and hearing-impaired adults, relatively little is known about the development of these abilities and skills in deaf children with cochlear implants. This study examined how prelingually deafened children combine visual information available in the talker's face with auditory speech cues provided by their cochlear implants to enhance spoken language comprehension. DESIGN: Twenty-seven hearing-impaired children who use cochlear implants identified spoken sentences presented under auditory-alone and audiovisual conditions. Five additional measures of spoken word recognition performance were used to assess auditory-alone speech perception skills. A measure of speech intelligibility was also obtained to assess the speech production abilities of these children. RESULTS: A measure of audiovisual gain, "Ra," was computed using sentence recognition scores in auditory-alone and audiovisual conditions. Another measure of audiovisual gain, "Rv," was computed using scores in visual-alone and audiovisual conditions. The results indicated that children who were better at recognizing isolated spoken words through listening alone were also better at combining the complementary sensory information about speech articulation available under audiovisual stimulation. In addition, we found that children who received more benefit from audiovisual presentation also produced more intelligible speech, suggesting a close link between speech perception and production and a common underlying linguistic basis for audiovisual enhancement effects. Finally, an examination of the distribution of children enrolled in Oral Communication (OC) and Total Communication (TC) indicated that OC children tended to score higher on measures of audiovisual gain, spoken word recognition, and speech intelligibility. 
CONCLUSIONS: The relationships observed between auditory-alone speech perception, audiovisual benefit, and speech intelligibility indicate that these abilities are not based on independent language skills, but instead reflect a common source of linguistic knowledge, used in both perception and production, that is based on the dynamic, articulatory motions of the vocal tract. The effects of communication mode demonstrate the important contribution of early sensory experience to perceptual development, specifically, language acquisition and the use of phonological processing skills. Intervention and treatment programs that aim to increase receptive and productive spoken language skills, therefore, may wish to emphasize the inherent cross-correlations that exist between auditory and visual sources of information in speech perception.


Subject(s)
Cochlear Implantation , Deafness/therapy , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Child, Preschool , Cues , Humans , Infant , Time Factors
6.
J Acoust Soc Am ; 109(5 Pt 1): 2135-45, 2001 May.
Article in English | MEDLINE | ID: mdl-11386565

ABSTRACT

Cochlear implant (CI) users differ in their ability to perceive and recognize speech sounds. Two possible reasons for such individual differences may lie in their ability to discriminate formant frequencies or to adapt to the spectrally shifted information presented by cochlear implants, a basalward shift related to the implant's depth of insertion in the cochlea. In the present study, we examined these two alternatives using a method-of-adjustment (MOA) procedure with 330 synthetic vowel stimuli varying in F1 and F2 that were arranged in a two-dimensional grid. Subjects were asked to label the synthetic stimuli that matched ten monophthongal vowels in visually presented words. Subjects then provided goodness ratings for the stimuli they had chosen. The subjects' responses to all ten vowels were used to construct individual perceptual "vowel spaces." If CI users fail to adapt completely to the basalward spectral shift, then the formant frequencies of their vowel categories should be shifted lower in both F1 and F2. However, with one exception, no systematic shifts were observed in the vowel spaces of CI users. Instead, the vowel spaces differed from one another in the relative size of their vowel categories. The results suggest that differences in formant frequency discrimination may account for the individual differences in vowel perception observed in cochlear implant users.


Subject(s)
Adaptation, Physiological/physiology , Cochlea/physiopathology , Deafness/physiopathology , Deafness/rehabilitation , Space Perception/physiology , Speech Perception/physiology , Adolescent , Adult , Aged , Female , Humans , Male , Middle Aged , Phonetics , Speech Discrimination Tests
7.
Ear Hear ; 21(1): 70-8, 2000 Feb.
Article in English | MEDLINE | ID: mdl-10708075

ABSTRACT

Over the past few years, there has been increased interest in studying some of the cognitive factors that affect speech perception performance of cochlear implant patients. In this paper, I provide a brief theoretical overview of the fundamental assumptions of the information-processing approach to cognition and discuss the role of perception, learning, and memory in speech perception and spoken language processing. The information-processing framework provides researchers and clinicians with a new way to understand the time-course of perceptual and cognitive development and the relations between perception and production of spoken language. Directions for future research using this approach are discussed, including the study of individual differences, the prediction of success with a cochlear implant from a set of cognitive measures, and the development of new intervention strategies.


Subject(s)
Cochlear Implants , Cognition , Speech Perception/physiology , Child , Humans , Language Development , Learning/physiology , Memory/physiology
10.
Ann Otol Rhinol Laryngol Suppl ; 185: 68-70, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11141011

ABSTRACT

On the basis of the good predictions for phonemes correct, we conclude that closed-set feature identification may successfully predict phoneme identification in an open-set word recognition task. For word recognition, however, the PCM model underpredicted observed performance, and the addition of a mental lexicon (ie, the SPAMR model) was needed for a good match to data averaged across 7 adults with CIs. The predictions for words correct improved with the addition of a lexicon, providing support for the hypothesis that lexical information is used in open-set spoken word recognition by CI users. The perception of words more complex than CNCs is also likely to require lexical knowledge (Frisch et al, this supplement, pp 60-62). In the future, we will use the performance of individual CI users on psychophysical tasks to generate predicted vowel and consonant confusion matrices to be used to predict open-set spoken word recognition.


Subject(s)
Cochlear Implants , Speech Perception , Adult , Humans , Models, Theoretical
12.
Ear Hear ; 21(6): 578-89, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11132784

ABSTRACT

OBJECTIVE: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. DESIGN: A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. RESULTS: Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. CONCLUSIONS: Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. 
Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing.
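The first of the two simulated lexical-access strategies (commit to phoneme decisions early, then search the lexicon for the best-matching candidate) can be sketched minimally. Everything below is hypothetical scaffolding: the toy confusion map and four-word lexicon stand in for the feature-identification scores and the Phonetically Balanced Kindergarten / Lexical Neighborhood Test vocabularies the study actually used, and the second strategy (deferring phoneme decisions until lexical access) is not shown:

```python
import random

def noisy_phonemes(word, accuracy):
    """Simulate early, independent phoneme decisions: each segment is
    identified correctly with probability `accuracy`, otherwise replaced
    by a confusable segment (toy confusion map, for illustration only)."""
    confusions = {"b": "p", "p": "b", "t": "d", "d": "t", "a": "e", "e": "a"}
    return [ph if random.random() < accuracy else confusions.get(ph, ph)
            for ph in word]

def best_match(decided, lexicon):
    """Lexical search: return the lexicon entry with the fewest segment
    mismatches against the already-decided phoneme string."""
    return min(lexicon,
               key=lambda w: sum(a != b for a, b in zip(decided, w))
                             + abs(len(decided) - len(w)))

# Hypothetical toy lexicon; a dense neighborhood like this is exactly
# where early phoneme errors are most likely to select the wrong word.
lexicon = [list("bat"), list("pat"), list("bad"), list("pad")]
random.seed(1)
guess = best_match(noisy_phonemes(list("bat"), accuracy=0.9), lexicon)
```

Running this many times at lower `accuracy` shows why the delayed-decision variant can do better: once a segment decision is wrong, the best-match search has no way to recover the evidence that was discarded.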


Subject(s)
Cochlear Implants , Hearing Loss, Sensorineural/rehabilitation , Speech Perception/physiology , Case-Control Studies , Child , Computer Simulation , Hearing Loss, Sensorineural/physiopathology , Humans , Psycholinguistics
13.
Psychol Sci ; 11(2): 153-8, 2000 Mar.
Article in English | MEDLINE | ID: mdl-11273423

ABSTRACT

Although cochlear implants improve the ability of profoundly deaf children to understand speech, critics claim that the published literature does not document even a single case of a child who has developed a linguistic system based on input from an implant. Thus, it is of clinical and scientific importance to determine whether cochlear implants facilitate the development of English language skills. The English language skills of prelingually deaf children with cochlear implants were measured before and after implantation. We found that the rate of language development after implantation exceeded that expected from unimplanted deaf children (p < .001) and was similar to that of children with normal hearing. Despite a large amount of individual variability, the best performers in the implanted group seem to be developing an oral linguistic system based largely on auditory input obtained from a cochlear implant.


Subject(s)
Cochlear Implants , Deafness/rehabilitation , Language Development Disorders/rehabilitation , Child , Child, Preschool , Female , Humans , Infant , Language Tests , Male , Prognosis , Speech Perception
14.
J Acoust Soc Am ; 106(4 Pt 1): 2074-85, 1999 Oct.
Article in English | MEDLINE | ID: mdl-10530030

ABSTRACT

In order to gain insight into the interplay between the talker-, listener-, and item-related factors that influence speech perception, a large multi-talker database of digitally recorded spoken words was developed, and was then submitted to intelligibility tests with multiple listeners. Ten talkers produced two lists of words at three speaking rates. One list contained lexically "easy" words (words with few phonetically similar sounding "neighbors" with which they could be confused), and the other list contained lexically "hard" words (words with many phonetically similar sounding "neighbors"). An analysis of the intelligibility data obtained with native speakers of English (experiment 1) showed a strong effect of lexical similarity. Easy words had higher intelligibility scores than hard words. A strong effect of speaking rate was also found whereby slow and medium rate words had higher intelligibility scores than fast rate words. Finally, a relationship was also observed between the various stimulus factors whereby the perceptual difficulties imposed by one factor, such as a hard word spoken at a fast rate, could be overcome by the advantage gained through the listener's experience and familiarity with the speech of a particular talker. In experiment 2, the investigation was extended to another listener population, namely, non-native listeners. Results showed that the ability to take advantage of surface phonetic information, such as a consistent talker across items, is a perceptual skill that transfers easily from first to second language perception. However, non-native listeners had particular difficulty with lexically hard words even when familiarity with the items was controlled, suggesting that non-native word recognition may be compromised when fine phonetic discrimination at the segmental level is required. 
Taken together, the results of this study provide insight into the signal-dependent and signal-independent factors that influence spoken language processing in native and non-native listeners.


Subject(s)
Language , Speech Perception/physiology , Vocabulary , Adult , Female , Humans , Male , Phonetics
15.
Am J Otol ; 20(5): 596-601, 1999 Sep.
Article in English | MEDLINE | ID: mdl-10503581

ABSTRACT

OBJECTIVE: The purpose of this study was to determine whether similar cortical regions are activated by speech signals in profoundly deaf patients who have received a multichannel cochlear implant (CI) or auditory brain stem implant (ABI) as in normal-hearing subjects. STUDY DESIGN: Positron emission tomography (PET) studies were performed using a variety of discrete stimulus conditions. Images obtained were superimposed on standard anatomic magnetic resonance imaging (MRI) for the CI subjects. The PET images were superimposed on the ABI subject's own MRI. SETTING: Academic, tertiary referral center. PATIENTS: Five subjects who have received a multichannel CI and one who had received an ABI. INTERVENTION: Multichannel CI and ABI. MAIN OUTCOME MEASURE: PET images. RESULTS: Similar cortical regions are activated by speech stimuli in subjects who have received an auditory prosthesis. CONCLUSIONS: Neuroimaging provides a new approach to the study of speech processing in CI and ABI subjects.


Subject(s)
Auditory Cortex/surgery , Brain Stem/surgery , Cochlear Implants , Deafness/diagnostic imaging , Deafness/surgery , Electrodes, Implanted , Tomography, Emission-Computed , Acoustic Stimulation , Adult , Case-Control Studies , Evoked Potentials, Auditory, Brain Stem , Female , Humans , Male , Middle Aged , Speech Perception
16.
Percept Psychophys ; 61(5): 977-85, 1999 Jul.
Article in English | MEDLINE | ID: mdl-10499009

ABSTRACT

Previous work from our laboratories has shown that monolingual Japanese adults who were given intensive high-variability perceptual training improved in both perception and production of English /r/-/l/ minimal pairs. In this study, we extended those findings by investigating the long-term retention of learning in both perception and production of this difficult non-native contrast. Results showed that 3 months after completion of the perceptual training procedure, the Japanese trainees maintained their improved levels of performance of the perceptual identification task. Furthermore, perceptual evaluations by native American English listeners of the Japanese trainees' pretest, posttest, and 3-month follow-up speech productions showed that the trainees retained their long-term improvements in the general quality, identifiability, and overall intelligibility of their English /r/-/l/ word productions. Taken together, the results provide further support for the efficacy of high-variability laboratory speech sound training procedures, and suggest an optimistic outlook for the application of such procedures for a wide range of "special populations."


Subject(s)
Language , Learning/physiology , Retention, Psychology/physiology , Speech Perception/physiology , Teaching , Adult , Female , Humans , Male , Phonetics , Speech Production Measurement
17.
Ear Hear ; 20(4): 363-71, 1999 Aug.
Article in English | MEDLINE | ID: mdl-10466571

ABSTRACT

OBJECTIVE: The Phonetically Balanced Kindergarten (PBK) Test (Haskins, Reference Note 2) has been used for almost 50 yr to assess spoken word recognition performance in children with hearing impairments. The test originally consisted of four lists of 50 words, but only three of the lists (lists 1, 3, and 4) were considered "equivalent" enough to be used clinically with children. Our goal was to determine if the lexical properties of the different PBK lists could explain any differences between the three "equivalent" lists and the fourth PBK list (List 2) that has not been used in clinical testing. DESIGN: Word frequency and lexical neighborhood frequency and density measures were obtained from a computerized database for all of the words on the four lists from the PBK Test as well as the words from a single PB-50 (Egan, 1948) word list. RESULTS: The words in the "easy" PBK list (List 2) were of higher frequency than the words in the three "equivalent" lists. Moreover, the lexical neighborhoods of the words on the "easy" list contained fewer phonetically similar words than the neighborhoods of the words on the other three "equivalent" lists. CONCLUSIONS: It is important for researchers to consider word frequency and lexical neighborhood frequency and density when constructing word lists for testing speech perception. The results of this computational analysis of the PBK Test provide additional support for the proposal that spoken words are recognized "relationally" in the context of other phonetically similar words in the lexicon. Implications of using open-set word recognition tests with children with hearing impairments are discussed with regard to the specific vocabulary and information processing demands of the PBK Test.


Subject(s)
Speech Discrimination Tests , Child, Preschool , Hearing Disorders/diagnosis , Humans
18.
Brain Lang ; 68(1-2): 306-11, 1999.
Article in English | MEDLINE | ID: mdl-10433774

ABSTRACT

Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed.
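Neighborhood density as defined in this abstract is directly computable: a word's neighbors are conventionally the lexicon entries reachable by a single phoneme substitution, addition, or deletion. The sketch below uses orthography as a stand-in for phonemic transcription (an assumption for readability; real counts use transcriptions from a computerized lexicon):

```python
def one_phoneme_apart(w1, w2):
    """True if w2 can be reached from w1 by a single phoneme substitution,
    addition, or deletion (the standard one-step neighborhood rule)."""
    if len(w1) == len(w2):
        return sum(a != b for a, b in zip(w1, w2)) == 1
    if abs(len(w1) - len(w2)) == 1:
        shorter, longer = sorted((w1, w2), key=len)
        # Try deleting each segment of the longer word in turn.
        for i in range(len(longer)):
            if longer[:i] + longer[i + 1:] == shorter:
                return True
    return False

def neighborhood_density(word, lexicon):
    """Number of lexicon entries phonologically similar to `word`."""
    return sum(one_phoneme_apart(word, w) for w in lexicon if w != word)

# Toy lexicon: "cat" sits in a dense neighborhood here
# ("bat", "hat", "cut", "cast", "at"), while "dog" is not a neighbor.
toy = ["cat", "bat", "hat", "cut", "cast", "at", "dog"]
dense = neighborhood_density("cat", toy)
```

Words like the toy "cat" (many neighbors) illustrate the lexical-level competition effect the abstract describes, while a high-probability but sparse nonsense string would show the opposite, sublexical facilitation.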


Subject(s)
Cognition/physiology , Speech , Vocabulary , Humans , Phonetics
19.
Hear Res ; 132(1-2): 34-42, 1999 Jun.
Article in English | MEDLINE | ID: mdl-10392545

ABSTRACT

Functional neuroimaging with positron emission tomography (PET) was used to compare the brain activation patterns of normal-hearing (NH) with postlingually deaf, cochlear-implant (CI) subjects listening to speech and nonspeech signals. The speech stimuli were derived from test batteries for assessing speech-perception performance of hearing-impaired subjects with different sensory aids. Subjects were scanned while passively listening to monaural (right ear) stimuli in five conditions: Silent Baseline, Word, Sentence, Time-reversed Sentence, and Multitalker Babble. Both groups showed bilateral activation in superior and middle temporal gyri to speech and backward speech. However, group differences were observed in the Sentence compared to Silence condition. CI subjects showed more activated foci in right temporal regions, where lateralized mechanisms for prosodic (pitch) processing have been well established; NH subjects showed a focus in the left inferior frontal gyrus (Brodmann's area 47), where semantic processing has been implicated. Multitalker Babble activated auditory temporal regions in the CI group only. Whereas NH listeners probably habituated to this multitalker babble, the CI listeners may be using a perceptual strategy that emphasizes 'coarse' coding to perceive this stimulus globally as speechlike. The group differences provide the first neuroimaging evidence suggesting that postlingually deaf CI and NH subjects may engage differing perceptual processing strategies under certain speech conditions.


Subject(s)
Brain/diagnostic imaging , Brain/physiology , Cochlear Implants , Hearing/physiology , Phonetics , Speech Perception/physiology , Tomography, Emission-Computed , Adult , Cerebrovascular Circulation/physiology , Female , Humans , Male , Pilot Projects , Reference Values
20.
Percept Psychophys ; 61(2): 206-19, 1999 Feb.
Article in English | MEDLINE | ID: mdl-10089756

ABSTRACT

This study investigated the encoding of the surface form of spoken words using a continuous recognition memory task. The purpose was to compare and contrast three sources of stimulus variability--talker, speaking rate, and overall amplitude--to determine the extent to which each source of variability is retained in episodic memory. In Experiment 1, listeners judged whether each word in a list of spoken words was "old" (had occurred previously in the list) or "new." Listeners were more accurate at recognizing a word as old if it was repeated by the same talker and at the same speaking rate; however, there was no recognition advantage for words repeated at the same overall amplitude. In Experiment 2, listeners were first asked to judge whether each word was old or new, as before, and then they had to explicitly judge whether it was repeated by the same talker, at the same rate, or at the same amplitude. On the first task, listeners again showed an advantage in recognition memory for words repeated by the same talker and at same speaking rate, but no advantage occurred for the amplitude condition. However, in all three conditions, listeners were able to explicitly detect whether an old word was repeated by the same talker, at the same rate, or at the same amplitude. These data suggest that although information about all three properties of spoken words is encoded and retained in memory, each source of stimulus variation differs in the extent to which it affects episodic memory for spoken words.


Subject(s)
Loudness Perception , Mental Recall , Speech Perception , Verbal Behavior , Adult , Association Learning , Attention , Female , Humans , Male , Psychoacoustics , Reaction Time