1.
Brain Res Cogn Brain Res ; 6(2): 121-34, 1997 Oct.
Article in English | MEDLINE | ID: mdl-9450605

ABSTRACT

We investigated two aspects of lexical organization in normal adults employing behavioral and electrophysiological indices of semantic priming, namely: (1) Is there evidence for differential processing of nouns and verbs? (2) Is there evidence for separate systems for processing of orthographic and phonologic representations of words? Reaction time (RT), N400 amplitude and latency were used to examine the effect of semantic priming on lexical access of auditorily and visually presented nouns and verbs. We found that the temporal patterns of primed RTs and N400 latencies differed for nouns and verbs, indicating a functional difference in processing. However, the absence of topographic differences in N400 between nouns and verbs did not support anatomically distinct representations of these word classes. By contrast, a modality-specific topography at N400, in addition to RT and N400 amplitude differences between auditory and visual conditions, supported the proposed separation of the orthographic and phonologic representations of words. The implications of the findings for general theories of lexical organization are discussed.


Subject(s)
Evoked Potentials, Auditory/physiology, Evoked Potentials, Visual/physiology, Language, Mental Processes/physiology, Reaction Time/physiology, Acoustic Stimulation, Adult, Female, Humans, Male, Middle Aged, Photic Stimulation
2.
J Acoust Soc Am ; 96(4): 2101-7, 1994 Oct.
Article in English | MEDLINE | ID: mdl-7963024

ABSTRACT

Two experiments measured listeners' abilities to detect facial expression in unfamiliar speech in normal and whisper registers. Acoustic differences between speech produced with neutral or marked facial expression were also assessed. Experiment 1 showed that in a forced-choice identification task, listeners could accurately select frowned speech as such, and neutral speech as happier sounding than frowned speech in the same speakers. Listeners were able to judge frowning in the same speakers' whispered speech. Relative to neutral speech, frowning lowers formant frequencies and increases syllable duration. In both registers, judgments of frowning and its relative happiness were significantly poorer for lip-rounded vowels, suggesting that listeners may recover lip protrusion in making judgments. Experiment 2 replicated the finding [V. Tartter, Percept. Psychophys. 27, 24-27 (1980)] that listeners can select speech produced with a smile as happier sounding than neutral speech in normal register, and extended the findings to whisper register. Relative to neutral, smiling increased second formant frequency. Results are discussed with respect to nonverbal auditory emotion prototypes and with respect to the direct realist theory of speech perception.


Subject(s)
Emotions, Smiling, Speech Perception, Female, Humans, Male, Phonetics, Speech Acoustics
4.
J Acoust Soc Am ; 92(3): 1269-83, 1992 Sep.
Article in English | MEDLINE | ID: mdl-1401515

ABSTRACT

Vowel perception strategies were assessed for two "average" and one "star" single-channel 3M/House and three "average" and one "star" Nucleus 22-channel cochlear implant patients and six normal-hearing control subjects. All subjects were tested by computer with real and synthetic speech versions of [symbol: see text], presented randomly. Duration, fundamental frequency, and first, second, and third formant frequency cues to the vowels were systematically manipulated. Results showed high accuracy for the normal-hearing subjects in all conditions but that of the first formant alone. "Average" single-channel patients classified only real speech [hVd] syllables differently from synthetic steady-state syllables. The "star" single-channel patient identified the vowels at much better than chance levels, with a results pattern suggesting effective use of first formant and duration information. Both "star" and "average" Nucleus users showed similar response patterns, performing better than chance in most conditions, and identifying the vowels using duration and some frequency information from all three formants.


Subject(s)
Cochlear Implants, Deafness/rehabilitation, Phonetics, Speech Perception, Adolescent, Adult, Child, Child, Preschool, Female, Follow-Up Studies, Humans, Male, Middle Aged, Prosthesis Design, Reference Values, Sound Spectrography, Speech Discrimination Tests
5.
J Acoust Soc Am ; 92(3): 1310-23, 1992 Sep.
Article in English | MEDLINE | ID: mdl-1401518

ABSTRACT

The speech of a postlingually deafened preadolescent was recorded and analyzed while a single-electrode cochlear implant (3M/House) was in operation, on two occasions after it failed (1 day and 18 days) and on three occasions after stimulation of a multichannel cochlear implant (Nucleus 22) (1 day, 6 months, and 1 year). Listeners judged 3M/House tokens to be the most normal until the subject had one year's experience with the Nucleus device. Spectrograms showed less aspiration, better formant definition and longer final frication and closure duration post-Nucleus stimulation (6 MO. NUCLEUS and 1 YEAR NUCLEUS) relative to the 3M/House and no auditory feedback conditions. Acoustic measurements after loss of auditory feedback (1 DAY FAIL and 18 DAYS FAIL) indicated a constriction of vowel space. Appropriately higher fundamental frequency for stressed than unstressed syllables, an expansion of vowel space and improvement in some aspects of production of voicing, manner and place of articulation were noted one year post-Nucleus stimulation. Loss of auditory feedback results are related to the literature on the effects of postlingual deafness on speech. Nucleus and 3M/House effects on speech are discussed in terms of speech production studies of single-electrode and multichannel patients.


Subject(s)
Cochlear Implants, Deafness/rehabilitation, Speech Acoustics, Speech Production Measurement, Child, Feedback, Female, Humans, Prosthesis Design, Signal Processing, Computer-Assisted/instrumentation, Sound Spectrography/instrumentation, Speech Discrimination Tests, Speech Intelligibility, Voice Quality
6.
Ear Hear ; 13(3): 195-9, 1992 Jun.
Article in English | MEDLINE | ID: mdl-1397760

ABSTRACT

The ability to remove cochlear implants from children and subsequently reimplant a more complex device in the same ear was the concern of this single case study. A postlinguistically deafened child, J.L., received a single-channel cochlear implant 1 yr after contracting meningitis and suffering a profound bilateral sensorineural hearing loss. After 3 yr of successful implant use, J.L. suffered an internal coil failure. She was then explanted and reimplanted with a multichannel cochlear implant in the same ear. This case report details her speech perception skills with her single-channel cochlear implant, a vibrotactile aid, and a multichannel cochlear implant. Results from auditory perceptual measures suggest that the explantation/reimplantation process was technically feasible with no adverse effects on J.L.'s ability to utilize a more sophisticated device and to exceed her previous performance levels.


Subject(s)
Auditory Perception, Cochlear Implants, Hearing Loss, Sensorineural/rehabilitation, Acoustic Stimulation, Child, Child, Preschool, Female, Humans, Male, Sound, Speech Acoustics, Speech Discrimination Tests
7.
Percept Psychophys ; 49(4): 365-72, 1991 Apr.
Article in English | MEDLINE | ID: mdl-2030934

ABSTRACT

In the present experiments, the effect of whisper register on speech perception was measured. We assessed listeners' abilities to identify 10 vowels in [hVd] context pronounced by 3 male and 3 female speakers in normal and whisper registers. Results showed 82% average identification accuracy in whisper mode, approximately a 10% falloff in identification accuracy from normally phonated speech. In both modes, significant confusions of [o] for [a] occurred, with some additional significant confusions occurring in whisper mode among vowels adjacent in F1/F2 space. We also assessed listeners' abilities to match whispered syllables with normally phonated ones by the same speaker. Each trial contained the matching syllable and two foils whispered by speakers of the same sex as the speaker of the target. Identification performance was significantly better than chance across subjects, speakers, and vowels, with no listener achieving better than 96% performance. Acoustic analyses measured potential cues to speaker identity independent of register.


Subject(s)
Discrimination, Psychological, Phonetics, Speech Perception, Voice, Adult, Female, Humans, Male, Phonation
8.
J Acoust Soc Am ; 86(6): 2113-21, 1989 Dec.
Article in English | MEDLINE | ID: mdl-2600301

ABSTRACT

The speech of a profoundly postlingually deafened teenager was recorded before, immediately after, 3 months after, and 1 year after electrical stimulation with a Nucleus multichannel cochlear implant. Listener tests of target words revealed significant improvement in overall quality over the year. Spectrograms showed less aspiration and better definition of the lower formants. Acoustic measurements indicated immediate change in F0 and gradual changes in syllable duration and some aspects of voicing and manner of articulation. Vowel space shrank steadily over the year, with both first- and second-formant frequencies dropping. Prestimulation results are discussed relative to the literature on the speech of the congenitally hearing impaired. Effects of multichannel electrical stimulation on speech are compared with studies of single-electrode stimulation.


Subject(s)
Cochlear Implants, Deafness/physiopathology, Speech Disorders/physiopathology, Adolescent, Auditory Threshold, Electric Stimulation, Humans
9.
J Acoust Soc Am ; 86(5): 1678-83, 1989 Nov.
Article in English | MEDLINE | ID: mdl-2808917

ABSTRACT

Whispering is a common, natural way of reducing speech perceptibility, but whether and how whispering affects consonant identification and the acoustic features presumed important for it in normal speech perception are unknown. In this experiment, untrained listeners identified 18 different whispered initial consonants significantly better than chance in nonsense syllables. The phonetic features of place and manner of articulation and, to a lesser extent, voicing, were correctly identified. Confusion matrix and acoustic analyses indicated preservation of resonance characteristics for place and manner of articulation and suggested the use of burst, aspiration, or frication duration and intensity, and/or first-formant cutback for voicing decisions.


Subject(s)
Speech Perception/physiology, Adult, Humans
10.
J Acoust Soc Am ; 76(6): 1652-63, 1984 Dec.
Article in English | MEDLINE | ID: mdl-6520303

ABSTRACT

Three selective adaptation experiments were conducted to investigate whether intervocalic stops are perceived as the end of the preceding syllable or as the beginning of the following one. The pattern of adaptation effects (and just as importantly, noneffects) indicated that intervocalic stop consonants are perceptually more like syllable-initial than syllable-final ones. From this it might be concluded that the perceptual system breaks down a vowel-consonant-vowel (VCV) utterance into a V-CV sequence. However, the similarity of an intervocalic stop to a syllable-initial one is quite limited; the consonant in a VCV is apparently treated as essentially different from consonants in either VC or CV utterances. These results clarify, and perhaps complicate, the role of the syllable in models of the speech perception process.


Subject(s)
Adaptation, Physiological, Phonetics, Speech Perception, Humans, Speech Acoustics, Speech Perception/physiology
11.
Brain Lang ; 23(1): 74-85, 1984 Sep.
Article in English | MEDLINE | ID: mdl-6478194

ABSTRACT

Two dichotic listening experiments assess the lateralization of speaker identification in right-handed native English speakers. Stimuli were tokens of /ba/, /da/, /pa/, and /ta/ pronounced by two male and two female speakers. In Experiment 1, subjects identified either the two consonants in dichotic stimuli spoken by the same person, or identified two speakers in dichotic tokens of the same syllable. In Experiment 2 new subjects identified the two consonants or the two speakers in pairs in which both consonant and speaker distinguished the pair members. Both experiments yielded significant right-ear advantages for consonant identification and nonsignificant ear differences for speaker identification. Fewer errors were made for speaker judgments than for consonant judgments, and for speaker judgments for pairs in which the speakers were of the same sex than for pairs in which speaker sex differed. It is concluded that, as in vowel identification, neither hemisphere clearly dominates in dichotic speaker identification, perhaps because of minor information loss in the ipsilateral pathways.


Subject(s)
Dominance, Cerebral, Phonetics, Speech Perception, Attention, Humans, Semantics, Speech Intelligibility
12.
Brain Lang ; 22(1): 128-49, 1984 May.
Article in English | MEDLINE | ID: mdl-6202359

ABSTRACT

Two experiments assessed the abilities of aphasic patients and nonaphasic controls to perceive place of articulation in stop consonants. Experiment I explored labeling and discrimination of [ba, da, ga] continua varying in formant transitions with or without an appropriate burst onset appended to the transitions. Results showed general difficulty in perceiving place of articulation for the aphasic patients. Regardless of diagnostic category or auditory language comprehension score, discrimination ability was independent of labeling ability, and discrimination functions were similar to normals even in the context of failure to reliably label the stimuli. Further, there was less variability in performance for stimuli with bursts than without bursts. Experiment II measured the effects of lengthening the formant transitions on perception of place of articulation in stop consonants and on the perception of auditory analogs to the speech stimuli. Lengthening the transitions failed to improve performance for either the speech or nonspeech stimuli and, in some cases, reduced performance level. No correlation was observed between the patients' abilities to perceive the speech and nonspeech stimuli.


Subject(s)
Aphasia/psychology, Phonation, Phonetics, Speech Perception, Voice, Cues, Discrimination Learning, Humans
13.
J Acoust Soc Am ; 74(3): 715-25, 1983 Sep.
Article in English | MEDLINE | ID: mdl-6630727

ABSTRACT

Acoustic analyses of vowel-consonant-vowel (VCV) utterances indicate that they generally include formant transitions from the first vowel into a period of closure (VC transitions), and transitions out of the closure into the second vowel (CV transitions). Three experiments investigated the perceptual importance of the VC transitions, the CV transitions, and the closure period in identification of medial stop consonants varying in place of articulation. Experiment 1 compared identification of members of synthetic VC and CV continua with those from VCV series made by concatenating corresponding VC and CV stimuli using various closure durations. Experiment 2 examined identification of VCV stimuli constructed with only VC, only CV, or both VC and CV transitions; again closure duration was systematically varied. Experiment 3 correlated CV and VC identification with identification of VCV stimuli. Neither closure duration nor formant transition structure (i.e., only VC, only CV, or both) had an independent effect on identification. Instead, the formant structure and closure duration together strongly affected stop identification. When both VC and CV transitions were present, the CV transitions contributed somewhat more to identification of medial stops with short closures, than the VC transitions did. With longer closure durations, neither set of transitions appeared to determine perceived place of articulation in any simple way. Overall, the data indicate that the perception of a medial consonant is more than simply a (weighted) sum of its parts.


Subject(s)
Speech Acoustics, Speech Perception, Speech, Humans, Male
15.
Nature ; 289(5799): 676-8, 1981 Feb 19.
Article in English | MEDLINE | ID: mdl-7464933

ABSTRACT

Many deaf people in the USA communicate in American sign language (ASL), which has an expressive capacity equivalent to that of spoken language, although structurally independent of spoken languages. It comprises hand and arm movements often combined with particular facial gestures; together these are sufficiently precise to transmit all the complexities and innuendoes of language. Here we demonstrate that fluent ASL users can communicate easily when all they see of each other is an array of 27 light spots strategically placed on the hands and face. The results indicate the salient locations in normal sign perception, and suggest that it is feasible to transmit sign using the bandwidth of one telephone line rather than a much more expensive TV line.


Subject(s)
Form Perception, Manual Communication, Pattern Recognition, Visual, Sign Language, Hand, Humans, Telephone/instrumentation