Results 1 - 4 of 4
1.
J Speech Lang Hear Res ; 52(5): 1334-52, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19717656

ABSTRACT

PURPOSE: In this study, the authors examined whether rhythm metrics capable of distinguishing languages with high and low temporal stress contrast also can distinguish among control and dysarthric speakers of American English with perceptually distinct rhythm patterns. METHOD: Acoustic measures of vocalic and consonantal segment durations were obtained for speech samples from 55 speakers across 5 groups (hypokinetic, hyperkinetic, flaccid-spastic, ataxic dysarthrias, and controls). Segment durations were used to calculate standard and new rhythm metrics. Discriminant function analyses (DFAs) were used to determine which sets of predictor variables (rhythm metrics) best discriminated between groups (control vs. dysarthrias; and among the 4 dysarthrias). A cross-validation method was used to test the robustness of each original DFA. RESULTS: The majority of classification functions were more than 80% successful in classifying speakers into their appropriate group. New metrics that combined successive vocalic and consonantal segments emerged as important predictor variables. DFAs pitting each dysarthria group against the combined others resulted in unique constellations of predictor variables that yielded high levels of classification accuracy. CONCLUSIONS: This study confirms the ability of rhythm metrics to distinguish control speech from dysarthrias and to discriminate dysarthria subtypes. Rhythm metrics show promise for use as a rational and objective clinical tool.


Subject(s)
Dysarthria/diagnosis, Dysarthria/physiopathology, Speech Articulation Tests, Speech/physiology, Analysis of Variance, Ataxia/diagnosis, Ataxia/physiopathology, Humans, Language, Predictive Value of Tests, Speech Acoustics, Time Factors
2.
J Acoust Soc Am ; 122(6): 3678-87, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18247775

ABSTRACT

It has been posited that the role of prosody in lexical segmentation is elevated when the speech signal is degraded or unreliable. Using predictions from Cutler and Norris' [J. Exp. Psychol. Hum. Percept. Perform. 14, 113-121 (1988)] metrical segmentation strategy hypothesis as a framework, this investigation examined how individual suprasegmental and segmental cues to syllabic stress contribute differentially to the recognition of strong and weak syllables for the purpose of lexical segmentation. Syllabic contrastivity was reduced in resynthesized phrases by systematically (i) flattening the fundamental frequency (F0) contours, (ii) equalizing vowel durations, (iii) weakening strong vowels, (iv) combining the two suprasegmental cues, i.e., F0 and duration, and (v) combining the manipulation of all cues. Results indicated that, despite similar decrements in overall intelligibility, F0 flattening and the weakening of strong vowels had a greater impact on lexical segmentation than did equalizing vowel duration. Both combined-cue conditions resulted in greater decrements in intelligibility, but with no additional negative impact on lexical segmentation. The results support the notion of F0 variation and vowel quality as primary conduits for stress-based segmentation and suggest that the effectiveness of stress-based segmentation with degraded speech must be investigated relative to the suprasegmental and segmental impoverishments occasioned by each particular degradation.


Subject(s)
Cues, Pitch Perception, Semantics, Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustic Stimulation, Adult, Female, Humans, Male, Middle Aged, Sound Spectrography, Time Factors
3.
Cereb Cortex ; 15(10): 1621-31, 2005 Oct.
Article in English | MEDLINE | ID: mdl-15703256

ABSTRACT

The temporal lobe in the left hemisphere has long been implicated in the perception of speech sounds. Little is known, however, regarding the specific function of different temporal regions in the analysis of the speech signal. Here we show that an area extending along the left middle and anterior superior temporal sulcus (STS) is more responsive to familiar consonant-vowel syllables during an auditory discrimination task than to comparably complex auditory patterns that cannot be associated with learned phonemic categories. In contrast, areas in the dorsal superior temporal gyrus bilaterally, closer to primary auditory cortex, are activated to the same extent by the phonemic and nonphonemic sounds. Thus, the left middle/anterior STS appears to play a role in phonemic perception. It may represent an intermediate stage of processing in a functional pathway linking areas in the bilateral dorsal superior temporal gyrus, presumably involved in the analysis of physical features of speech and other complex non-speech sounds, to areas in the left anterior STS and middle temporal gyrus that are engaged in higher-level linguistic processes.


Subject(s)
Speech Perception/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Adult, Brain Mapping, Discrimination, Psychological/physiology, Female, Functional Laterality/physiology, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Middle Aged, Oxygen/blood, Speech
4.
J Acoust Soc Am ; 112(6): 3022-30, 2002 Dec.
Article in English | MEDLINE | ID: mdl-12509024

ABSTRACT

This study is the third in a series that has explored the source of intelligibility decrement in dysarthria by jointly considering signal characteristics and the cognitive-perceptual processes employed by listeners. A paradigm of lexical boundary error analysis was used to examine this interface by manipulating listener constraints with a brief familiarization procedure. If familiarization allows listeners to extract relevant segmental and suprasegmental information from dysarthric speech, they should obtain higher intelligibility scores than nonfamiliarized listeners, and their lexical boundary error patterns should approximate those obtained in misperceptions of normal speech. Listeners transcribed phrases produced by speakers with either hypokinetic or ataxic dysarthria after being familiarized with other phrases produced by these speakers. Data were compared to those of nonfamiliarized listeners [Liss et al., J. Acoust. Soc. Am. 107, 3415-3424 (2000)]. The familiarized groups obtained higher intelligibility scores than nonfamiliarized groups, and the effects were greater when the dysarthria type of the familiarization procedure matched the dysarthria type of the transcription task. Remarkably, no differences in lexical boundary error patterns were discovered between the familiarized and nonfamiliarized groups. Transcribers of the ataxic speech appeared to have difficulty distinguishing strong and weak syllables in spite of the familiarization. Results suggest that intelligibility decrements arise from the perceptual challenges posed by the degraded segmental and suprasegmental aspects of the signal, but that this type of familiarization process may differentially facilitate mapping segmental information onto existing phonological categories.


Subject(s)
Attention, Dysarthria/diagnosis, Habituation, Psychophysiologic, Speech Intelligibility, Speech Perception, Adult, Dysarthria/classification, Dysarthria/psychology, Female, Humans, Male, Phonetics, Speech Acoustics, Speech Intelligibility/classification, Speech Production Measurement