ABSTRACT
Auditory sensitivity and processing ability were evaluated in a patient who suffered from hyperacusis, difficulty understanding speech, withdrawn depression, lethargy, and hypersensitivity to touch, pressure, and light. Treatment with fluvoxamine and fluoxetine (selective serotonin reuptake inhibitors) reversibly alleviated the complaints. Testing while medicated and unmedicated (after voluntary withdrawal from medication for several weeks) revealed no difference in pure-tone thresholds, speech thresholds, word recognition scores, tympanograms, or acoustic reflex thresholds. Medicated SCAN-A (a screening test for central auditory processing disorders) results were normal, whereas unmedicated results were abnormal. Unmedicated transient otoacoustic emissions and auditory brainstem response waves I, III, and V were significantly larger bilaterally. Uncomfortable loudness levels indicated greater tolerance during the medicated condition. Central processing and vigilance were evaluated with analog-synthesized three-formant consonant-vowel syllables. While medicated, responses to stimuli at each ear revealed well-defined labeling crossovers at about 90 msec. Vowel identification matched that of normal subjects; labeling of the /gE/jE/ and /bE/wE/ continua was well defined, but all crossover points differed from those of normals (p < .0001). During unmedicated testing, responses to /gE/jE/ began at medicated levels but approached chance levels over the entire continuum within 10 min; labeling of /bE/wE/ was consistent with medicated responses throughout, with earlier-than-normal crossover points.
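A labeling crossover point, as reported above, is the continuum value at which identification is split 50/50 between the two response categories. As a rough illustration of how such a point is located (a minimal sketch; the continuum steps and labeling proportions below are hypothetical, not the patient's data), the 50% point can be estimated by linear interpolation between the two steps that straddle it:

```python
def crossover(stimulus_values, p_label):
    """Estimate the 50% labeling crossover by linear interpolation.

    stimulus_values: continuum steps (e.g., transition durations in msec)
    p_label: proportion of one response label at each step
    """
    for (x0, x1), (p0, p1) in zip(
        zip(stimulus_values, stimulus_values[1:]),
        zip(p_label, p_label[1:]),
    ):
        # Find the adjacent pair of steps whose proportions straddle 0.5.
        if (p0 - 0.5) * (p1 - 0.5) <= 0 and p0 != p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None  # labeling never passes through 50%: no crossover

# Hypothetical labeling data on a transition-duration continuum (msec).
steps = [30, 50, 70, 90, 110, 130, 150]
props = [0.98, 0.95, 0.80, 0.55, 0.20, 0.05, 0.02]
print(crossover(steps, props))  # interpolated between the 90 and 110 msec steps
```

In practice a sigmoid (e.g., logistic or probit) is usually fitted to the full labeling function rather than interpolating two points, but the interpolated estimate conveys the same idea.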
Subject(s)
Auditory Perception/drug effects , Selective Serotonin Reuptake Inhibitors/pharmacology , Adult , Auditory Threshold/drug effects , Depressive Disorder, Major/drug therapy , Evoked Potentials, Auditory, Brain Stem/drug effects , Female , Humans , Reflex, Acoustic/drug effects
ABSTRACT
The imitation and recognition ability of brain-damaged and normal subjects was tested for 30 pairs of semantically matched ASL signs and corresponding Amer-Ind gestures. Subjects (6 nonaphasic, right-hemisphere brain-damaged subjects, 12 aphasic subjects, and 12 non-brain-damaged geriatric subjects) were rated according to severity and site of lesion. Results indicated that the Amer-Ind gestures were significantly easier to imitate and to recognize than the matched ASL signs. The relationships between these gestural abilities and severity of aphasia, site of lesion, Amer-Ind transparency ratings, and subjects' performance on a standardized aphasia test are outlined. Theoretical implications concerning the neural systems that mediate spoken language and limb gestures are discussed.
Subject(s)
Aphasia/psychology , Imitative Behavior , Manual Communication , Sign Language , Aged , Brain Damage, Chronic/psychology , Dominance, Cerebral , Female , Humans , Male , Middle Aged , Neuropsychological Tests , Semantics
ABSTRACT
Mandibular displacement during /s/ production was monitored via a mercury strain gauge taped to the faces of two normally articulating and six /s/-misarticulating children. Simultaneous audio and jaw-displacement Visicorder traces were produced from an FM tape recording of each experimental session and were subsequently analyzed. Results indicated that the various /s/-misarticulating subgroups exhibited different mandibular positions during /s/ production, and that phonetic contextual effects on mandibular position also varied by articulatory type.
Subject(s)
Articulation Disorders/physiopathology , Mandible/physiopathology , Child , Humans , Male , Mouth/physiopathology , Phonetics , Speech Acoustics
ABSTRACT
In this study, 22 children, ages 6:0 to 6:11, who misarticulated word-initial [r] as [w] were compared to 13 age-matched normally articulating children for their ability to identify and discriminate seven synthetic stimuli representing an acoustic continuum between [we] and [re]. Discrimination was tested among 3-step continuum stimulus pairs using the 4IAX paradigm. All of the control children demonstrated a single, sharp phonemic boundary during identification and higher between-phoneme than within-phoneme discrimination ability. Most of the misarticulating children demonstrated abnormal identification functions, with many showing only chance-level responses. Discrimination ability of the misarticulating children was generally poorer than that of the normally articulating children. Furthermore, discrimination ability of children in both groups was largely predictable from their identification performance, assuming categorical perception of these stimuli. Results indicate that a majority of the 6-year-old [r]-misarticulating children have failed to phonemically distinguish /r/ from /w/. These results call into question the use of the liquid gliding process as a psychological processing description of the misarticulation of these children.
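The claim that discrimination was "largely predictable from identification, assuming categorical perception" reflects the standard logic that a strictly categorical listener can discriminate two stimuli only when they receive different covert phonemic labels, and otherwise guesses at chance. The sketch below illustrates that logic generically (it is not necessarily the exact correction the authors applied to the 4IAX data, and the identification probabilities are hypothetical):

```python
def predicted_discrimination(p_a, p_b):
    """Predicted proportion correct for a stimulus pair under strict
    categorical perception.

    p_a, p_b: probability that each member of the pair is labeled
    as (say) /r/. The pair is discriminable only when the covert
    labels differ; with identical labels the listener guesses (0.5).
    """
    p_diff_labels = p_a * (1 - p_b) + p_b * (1 - p_a)
    return p_diff_labels + (1 - p_diff_labels) * 0.5

# Hypothetical identification probabilities along a [we]-[re] continuum.
ident = [0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.98]

# Predicted discrimination for 3-step pairs, as in the study's design:
# pairs straddling the phoneme boundary score highest.
for i in range(len(ident) - 3):
    j = i + 3
    print(i + 1, j + 1, round(predicted_discrimination(ident[i], ident[j]), 3))
```

On this account, a child with a flat, chance-level identification function is predicted to discriminate all pairs near 50% correct, which is the pattern the abstract reports for many of the misarticulating children.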
Subject(s)
Articulation Disorders/psychology , Phonetics , Speech Perception , Child , Female , Humans , Male , Speech Acoustics
ABSTRACT
Six language-impaired, misarticulating and six normal kindergarten children produced and perceived differences in word-initial stop consonant voicing. Individual children's productive and perceptual phonemic boundaries were similar. No statistically reliable differences were noted between the groups' mean productive or perceptual boundaries. Individual exceptions suggest that some misarticulating, language-impaired children may be inordinately challenged by synthetic speech stimuli or may pass through a developmental stage in which perceptual ability outstrips productive ability.
Subject(s)
Articulation Disorders/psychology , Speech Acoustics , Speech Perception , Speech , Child, Preschool , Cues , Humans , Phonetics
ABSTRACT
Twelve children who consistently misarticulated consonant [r] and five children who correctly articulated [r] were recorded while repeating sentences which differed only in a single (r)-(w) contrast. All (r) and (w) productions were spectrographically analyzed. Error productions were judged for their similarity to [w]. Each child identified all of the recorded sentences via a picture-pointing task. Misarticulated [r] was identified as (w) at above chance levels only by the children who did not misarticulate [r]. The subject groups did not differ in their perception of correctly articulated (r) and (w) phones. Children whose misarticulated [r] phones were judged to be (w)-like were most likely to misperceive their own productions of (r). Children whose misarticulated [r] productions were characterized by higher second formant frequencies were better able to identify their productions of (r). Results suggest that a subpopulation of children who misarticulate [r] may mark it acoustically in a nonstandard manner.
Subject(s)
Articulation Disorders/psychology , Speech Perception , Child , Humans , Phonetics , Sound Spectrography , Speech Acoustics , Speech Discrimination Tests
ABSTRACT
Simultaneous measurements were made of voice, oral air flow, and nasal air flow for two speakers producing seven repetitions of 12 differing contexts containing Vowel + Nasal + Oral Consonant sequences in a search for the temporal pattern of nasal coarticulation. Analysis indicated a rather stereotyped degree of overlap of nasal air flow during the oral consonant, about 36% of the duration of the oral consonant. Carryover of nasal air flow into the oral consonant appears to reflect mechano-inertial limitations of a sluggish velum.
Subject(s)
Phonetics , Adult , Female , Humans , Pulmonary Ventilation , Speech Articulation Tests
ABSTRACT
The speech of a five-year-old boy who suffered a profound hearing loss following meningitis was sampled at two-week intervals for nine months. Speech samples were subjected to phonetic transcription, spectrographic analysis, and intelligibility testing. Immediately post-trauma, the child displayed slightly slower, F0-elevated, acoustically intense speech in which phonemic distortion and syllabification of consonants occurred occasionally; single-word intelligibility was depressed 20-30% below normal. By the 18th week, a sudden decline in intelligibility, increasing monotony of pitch, and a pattern of strongly emphatic, prolonged, aspirated, syllabified, and increasingly distorted consonants were manifest. At year's end, the child's speech bore some resemblance to the speech of the deaf in terms of suprasegmentals, intonation, and intelligibility, but differed in that the child rarely, if ever, deleted speech sounds or strongly diphthongized vowels. It is speculated that phonetic processes such as diphthongization, syllabification, and prolonged duration may be strategies for enhancing feedback during speech.
Subject(s)
Hearing Loss, Sudden/complications , Voice Disorders/etiology , Child, Preschool , Hearing Loss, Sudden/etiology , Humans , Male , Meningitis, Haemophilus/complications , Sound Spectrography , Speech , Speech Intelligibility
ABSTRACT
Right-to-left (RL) and left-to-right (LR) coarticulation of vowels with stop consonants in VCV logatomes was studied before and during oral anesthetization. Correlational analysis revealed that neither LR nor RL coarticulation was markedly reduced in extent, suggesting that orosensory feedback is not crucial to the control of coarticulation. This lends support to the notion of central control of coarticulation.
Subject(s)
Lip/physiology , Phonetics , Speech Articulation Tests , Speech Production Measurement , Tongue/physiology , Adult , Feedback , Humans , Lidocaine/administration & dosage , Male , Speech Intelligibility/drug effects , Tongue/drug effects
Subject(s)
Deafness/physiopathology , Respiration , Speech , Adolescent , Adult , Humans , Male , Phonation , Phonetics , Pulmonary Ventilation
ABSTRACT
The interaction of the head with a sound impinging upon it has a direct effect on the sound as it arrives at the port of the hearing aid microphone. While other investigators have evaluated this effect in terms of changes in the frequency response of the hearing aid, this investigation sought to evaluate its significance in terms of the intelligibility of speech presented in a noisy background. The three typical locations of a hearing aid microphone were simulated with a high-fidelity probe-tube microphone placed around the right ear of KEMAR. The locations were over the ear, behind the ear, and in the ear; an earmold was kept in the ear at all times. Speech and noise were presented to the microphone and recorded on tape for presentation to normally hearing subjects. The results indicated that, so long as the hearing aid microphone is located on the head around the ear, no one location is better than any other for speech intelligibility.
Subject(s)
Hearing Aids , Noise , Speech Perception , Acoustics , Adult , Equipment Design , Female , Humans
ABSTRACT
Eight normally developing preschool children manifesting incomplete mastery of /r/ articulation repeated sentences containing the /r/ allophones [ɝ], [r], and [ɚ] embedded in various consonantal contexts, three times at four-week intervals. Two judges evaluated each /r/ allophone production as correct or incorrect. Children whose /r/ production improved showed greater success with [ɝ] and [r] articulation than with [ɚ]. Results indicate that normally developing children may be distinguished from more slowly developing children on the basis of differential success in producing the various /r/ allophones.
Subject(s)
Child Language , Language Development , Phonetics , Child, Preschool , Humans , Speech/physiology
ABSTRACT
The /s/ productions of six /s/-defective children and two normal controls were subjected to spectrographic analysis. Articulatorily, two of the children were dentalizers, two had lateral emission of friction, and two were of an "other" type. Results show for normals an /s/ spectrum that is compact (5-11 kHz), powerful, and dominated by strong, sharp spectral peaks at 6 and 10 kHz; their spectra were context sensitive. Dental subjects showed a flatter, less peaked, higher-frequency (6-12 kHz), and less intense noise spectrum, which was not as context sensitive. Lateral /s/ subjects showed a broad 4-9 or 4-10 kHz spectrum characterized by somewhat smaller, more numerous peaks and a lower cutoff frequency (about 4 kHz) than normals. The "other" /s/-defective subjects varied so widely that no consistent pattern emerged. The acoustic data are then discussed in terms of the articulation of varieties of friction noise.