Results 1 - 7 of 7
1.
Am J Speech Lang Pathol; 28(3): 1167-1183, 2019 Aug 9.
Article in English | MEDLINE | ID: mdl-31170355

ABSTRACT

Purpose: The aim of the study was to examine how ultrasound visual feedback (UVF) treatment impacts speech sound learning in children with residual speech errors affecting /ɹ/. Method: Twelve children, ages 9-14 years, received treatment for vocalic /ɹ/ errors in a multiple-baseline across-subjects design comparing 8 sessions of UVF treatment and 8 sessions of traditional (no-biofeedback) treatment. All participants were exposed to both treatment conditions, with order counterbalanced across participants. To monitor progress, naïve listeners rated the accuracy of vocalic /ɹ/ in untreated words. Results: After the first 8 sessions, children who received UVF were judged to produce more accurate vocalic /ɹ/ than those who received traditional treatment. After the second 8 sessions, within-participant comparisons revealed individual variation in treatment response. However, group-level comparisons revealed greater accuracy in children whose treatment order was UVF followed by traditional treatment than in children who received the reverse order. Conclusion: On average, 8 sessions of UVF were more effective than 8 sessions of traditional treatment for remediating vocalic /ɹ/ errors. Better outcomes were also observed when UVF was provided in the early rather than later stages of learning. However, significant individual variation remains in response to both UVF and traditional treatment, and larger group-level studies are needed. Supplemental Material: https://doi.org/10.23641/asha.8206640.
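As a rough illustration of how generalization probes of this kind might be scored, the sketch below tallies the proportion of untreated-word /ɹ/ tokens that naïve listeners judge accurate per participant and treatment phase. The data layout, participant and listener labels, and values are hypothetical, not taken from the study.

```python
# Hypothetical scoring sketch: percent of untreated-word /r/ tokens rated
# accurate by naive listeners, summarized per participant and phase.
from collections import defaultdict

# Each rating: (participant, phase, word, listener, is_accurate)
ratings = [
    ("P01", "UVF", "farm", "L1", True),
    ("P01", "UVF", "farm", "L2", False),
    ("P01", "traditional", "bird", "L1", True),
    # ... one row per listener judgment
]

def percent_accurate(rows):
    """Summarize binary listener judgments as percent accurate per (participant, phase)."""
    counts = defaultdict(lambda: [0, 0])  # (participant, phase) -> [accurate, total]
    for participant, phase, _word, _listener, is_accurate in rows:
        key = (participant, phase)
        counts[key][0] += int(is_accurate)
        counts[key][1] += 1
    return {key: 100.0 * acc / total for key, (acc, total) in counts.items()}

print(percent_accurate(ratings))
# e.g. {('P01', 'UVF'): 50.0, ('P01', 'traditional'): 100.0}
```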


Subject(s)
Biofeedback, Psychology/methods; Feedback, Sensory; Speech Sound Disorder/therapy; Speech Therapy/methods; Speech/physiology; Adolescent; Child; Female; Humans; Male; Phonetics; Treatment Outcome
2.
Clin Linguist Phon; 33(4): 334-348, 2019.
Article in English | MEDLINE | ID: mdl-30199271

ABSTRACT

Speakers of North American English use variable tongue shapes for rhotic sounds. However, quantifying tongue shapes for rhotics can be challenging, and little is known about how tongue shape complexity corresponds to perceptual ratings of rhotic accuracy in children with residual speech sound errors (RSE). In this study, 16 children aged 9-16 years with RSE and 14 children with typically developing speech (TS) made multiple productions of 'Let Robby cross Church Street'. Midsagittal ultrasound images were collected once for children with TS and twice for children in the RSE group (once after 7 h of speech therapy, then again after another 7 h of therapy). Tongue contours for the rhotics in the four words were traced and quantified using a new metric of tongue shape complexity: the number of inflections. Rhotics were also scored for accuracy by four listeners. At the first assessment, children with RSE had fewer tongue inflections than children with TS. Following 7 h of therapy, the number of inflections increased for the RSE group, with the cluster items cross and Street reaching the tongue shape complexity levels of the TS group. Ratings of rhotic accuracy were correlated with the number of inflections. Therefore, the number of inflections in the tongue contour, an index of tongue shape complexity, was associated with the perceived accuracy of rhotic productions.
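One plausible way to operationalize the inflection count, sketched below, is to count sign changes in the discrete curvature of a traced contour. The smoothing step and the synthetic example contours are assumptions; the paper's exact computation may differ.

```python
# A minimal sketch of counting "inflections" in a traced tongue contour:
# sign changes in the discrete second derivative of contour height.
import numpy as np

def count_inflections(y, smooth_window=5):
    """Count curvature sign changes along a midsagittal tongue contour.

    y : contour heights sampled at roughly evenly spaced points from root to tip.
    smooth_window : moving-average width used to suppress tracing noise.
    """
    y = np.convolve(y, np.ones(smooth_window) / smooth_window, mode="valid")
    d2 = np.diff(y, n=2)              # discrete second derivative (uniform spacing assumed)
    signs = np.sign(d2)
    signs = signs[signs != 0]         # drop flat points
    return int(np.sum(signs[1:] != signs[:-1]))

# Synthetic contours: a single dome (no inflections) vs. a doubly-bunched shape
x = np.linspace(0, 1, 100)
simple = np.sin(np.pi * x)
bunched = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)
print(count_inflections(simple), count_inflections(bunched))  # e.g. 0 and 2
```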


Subject(s)
Biofeedback, Psychology; Speech Sound Disorder; Tongue/diagnostic imaging; Adolescent; Child; Female; Humans; Male; Ultrasonography
3.
J Speech Lang Hear Res; 61(8): 1875-1892, 2018 Aug 8.
Article in English | MEDLINE | ID: mdl-30073249

ABSTRACT

Purpose: The aim of this study was to explore how the frequency with which ultrasound visual feedback (UVF) is provided during speech therapy affects speech sound learning. Method: Twelve children with residual speech errors affecting /ɹ/ participated in a multiple-baseline across-subjects design with 2 treatment conditions. One condition featured 8 hr of high-frequency UVF (HF; feedback on 89% of trials), whereas the other included 8 hr of lower-frequency UVF (LF; 44% of trials). The order of treatment conditions was counterbalanced across participants. All participants were treated on vocalic /ɹ/. Progress was tracked by measuring generalization to /ɹ/ in untreated words. Results: After the first treatment phase, participants who received the HF condition outperformed those who received LF. At the end of the 2-phase treatment, within-participant comparisons showed variability across individual outcomes in both HF and LF conditions. However, a group-level analysis of this small sample suggested that participants whose treatment order was HF-LF made larger gains than those whose treatment order was LF-HF. Conclusions: The HF-LF sequence may represent a preferred order for UVF in speech therapy. This is consistent with empirical work and theoretical arguments suggesting that visual feedback may be particularly beneficial in the early stages of acquiring new speech targets.
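A rough sketch of how feedback could be allotted to a fixed proportion of practice trials in each condition (89% vs. 44%) is given below; the trial counts and the scheduling function are hypothetical and not taken from the study protocol.

```python
# A hedged sketch of scheduling biofeedback on a fixed proportion of trials,
# e.g. 89% in a high-frequency (HF) condition vs. 44% in a lower-frequency (LF) one.
import random

def feedback_schedule(n_trials, proportion, seed=0):
    """Return a list of booleans: True where ultrasound feedback is shown."""
    n_feedback = round(n_trials * proportion)
    flags = [True] * n_feedback + [False] * (n_trials - n_feedback)
    random.Random(seed).shuffle(flags)
    return flags

hf = feedback_schedule(100, 0.89)
lf = feedback_schedule(100, 0.44)
print(sum(hf), sum(lf))  # 89 and 44 feedback trials per 100
```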


Subject(s)
Biofeedback, Psychology/methods; Feedback, Sensory/physiology; Speech Sound Disorder/therapy; Speech Therapy/methods; Ultrasonography/methods; Adolescent; Child; Female; Humans; Learning; Male; Phonetics; Research Design; Speech/physiology; Speech Production Measurement; Speech Sound Disorder/physiopathology
4.
J Vis Exp; (119), 2017 Jan 3.
Article in English | MEDLINE | ID: mdl-28117824

ABSTRACT

Diagnostic ultrasound imaging has been a common tool in medical practice for several decades. It provides a safe and effective method for imaging structures internal to the body. There has been a recent increase in the use of ultrasound technology to visualize the shape and movements of the tongue during speech, both in typical speakers and in clinical populations. Ultrasound imaging of speech has greatly expanded our understanding of how sounds articulated with the tongue (lingual sounds) are produced. Such information can be particularly valuable for speech-language pathologists. Among other advantages, ultrasound images can be used during speech therapy to provide (1) illustrative models of typical (i.e., "correct") tongue configurations for speech sounds, and (2) a source of insight into the articulatory nature of deviant productions. The images can also be used as an additional source of feedback for clinical populations learning to distinguish their better productions from their incorrect productions, en route to establishing more effective articulatory habits. Ultrasound feedback is increasingly used by scientists and clinicians as the expertise of users increases and the expense of the equipment declines. In this tutorial, procedures are presented for collecting ultrasound images of the tongue in a clinical context. We illustrate these procedures in an extended example featuring one common error sound, American English /r/. Images of correct and distorted /r/ are used to demonstrate (1) how to interpret ultrasound images, (2) how to assess tongue shape during production of speech sounds, (3) how to categorize tongue shape errors, and (4) how to provide visual feedback to elicit a more appropriate and functional tongue shape. We present a sample protocol for using real-time ultrasound images of the tongue for visual feedback to remediate speech sound errors. Additionally, example data are shown to illustrate outcomes with the procedure.
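As a minimal sketch of the real-time feedback setup, assuming the scanner's video output reaches the computer as an ordinary capture device (e.g., through a USB frame grabber), one could display the live image with a prompt overlay as below. The device index and overlay text are illustrative, not part of the published protocol.

```python
# Display a live ultrasound feed for visual feedback, assuming the scanner's
# video output appears as a standard capture device on this machine.
import cv2

cap = cv2.VideoCapture(0)  # index of the frame-grabber device (assumption)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Simple client-facing prompt; a real protocol might instead overlay a
        # traced target tongue shape taken from a correct production.
        cv2.putText(frame, "Raise the tongue tip / bunch the blade", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
        cv2.imshow("Ultrasound feedback", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```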


Subject(s)
Speech/physiology; Tongue/diagnostic imaging; Adolescent; Child; Humans; Image Processing, Computer-Assisted; Movement/physiology; Sound; Speech Production Measurement; Tongue/physiology; Ultrasonography; Young Adult
5.
J Acoust Soc Am; 134(3): 2235-2246, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23967953

ABSTRACT

While efforts to document endangered languages have steadily increased, the phonetic analysis of endangered language data remains a challenge. The transcription of large documentation corpora is, by itself, a tremendous feat. Yet, the process of segmentation remains a bottleneck for research with data of this kind. This paper examines whether a speech processing tool, forced alignment, can facilitate the segmentation task for small data sets, even when the target language differs from the training language. The authors also examined whether a phone set with contextualization outperforms a more general one. The accuracy of two forced aligners trained on English (hmalign and p2fa) was assessed using corpus data from Yoloxóchitl Mixtec. Overall, agreement performance was relatively good, with accuracy at 70.9% within 30 ms for hmalign and 65.7% within 30 ms for p2fa. Segmental and tonal categories influenced accuracy as well. For instance, additional stop allophones in hmalign's phone set aided alignment accuracy. Agreement differences between aligners also corresponded closely with the types of data on which the aligners were trained. Overall, using existing alignment systems was found to have potential for making phonetic analysis of small corpora more efficient, with more allophonic phone sets providing better agreement than general ones.
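The agreement figures above amount to the share of automatically placed segment boundaries falling within a fixed tolerance of hand-placed boundaries; a minimal sketch of such a measure, with illustrative boundary times, follows.

```python
# A hedged sketch of a boundary-agreement measure: the percentage of
# automatically placed boundaries within a tolerance (e.g., 30 ms) of
# manually placed boundaries for the same segment sequence.
def boundary_agreement(auto_times, manual_times, tolerance=0.030):
    """Percent of corresponding boundaries within `tolerance` seconds."""
    pairs = list(zip(auto_times, manual_times))
    within = sum(abs(a - m) <= tolerance for a, m in pairs)
    return 100.0 * within / len(pairs)

auto = [0.112, 0.347, 0.561, 0.794]      # forced-aligner boundaries (s), illustrative
manual = [0.100, 0.352, 0.610, 0.801]    # hand-corrected boundaries (s), illustrative
print(boundary_agreement(auto, manual))  # 75.0: three of four within 30 ms
```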


Subject(s)
Acoustics; Pattern Recognition, Automated; Phonetics; Signal Processing, Computer-Assisted; Speech Acoustics; Speech Production Measurement; Voice Quality; Feasibility Studies; Humans; Reproducibility of Results; Software Design; Sound Spectrography
6.
Psychophysiology; 44(5): 671-679, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17608799

ABSTRACT

Most natural sounds are composed of a mixture of frequencies, which activate separate neurons in the tonotopic auditory cortex. Nevertheless, we perceive this mixture as an integrated sound with unique acoustic properties. We used the Mismatch Negativity (MMN), a marker of auditory change detection, to determine whether individual harmonics are represented in sensory memory. The MMNs elicited by duration and pitch deviations were compared for harmonic and pure tones. With acoustic differences between standards and deviants, and their relative probabilities, controlled for, the MMN was larger for harmonic than for pure tones for duration deviance but not for pitch deviance. Because the magnitude of the MMN reflects the number of concurrent changes in the acoustic input relative to a preexistent acoustic representation, these results suggest that duration is represented and compared separately for individual frequencies, whereas pitch comparison occurs after integration.
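For readers unfamiliar with the measure, the sketch below illustrates the usual derivation of an MMN as a deviant-minus-standard difference wave quantified over a latency window; the sampling rate, window, and synthetic waveforms are assumptions, not values from the study.

```python
# Minimal sketch: MMN amplitude as the mean of the deviant-minus-standard
# difference wave in a latency window.
import numpy as np

fs = 500                              # sampling rate (Hz), assumption
t = np.arange(-0.1, 0.4, 1 / fs)      # epoch: -100 to 400 ms

def mmn_amplitude(standard_erp, deviant_erp, window=(0.10, 0.25)):
    """Mean amplitude (V) of the deviant-minus-standard difference wave in `window` (s)."""
    diff = deviant_erp - standard_erp
    mask = (t >= window[0]) & (t <= window[1])
    return diff[mask].mean()

# Synthetic ERPs: the deviant carries an extra negative deflection near 175 ms
standard = 0.5e-6 * np.sin(2 * np.pi * 4 * t)
deviant = standard - 2e-6 * np.exp(-((t - 0.175) ** 2) / (2 * 0.03 ** 2))
print(f"MMN amplitude: {mmn_amplitude(standard, deviant) * 1e6:.2f} uV")
```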


Subject(s)
Auditory Perception/physiology; Memory/physiology; Acoustic Stimulation; Adult; Data Interpretation, Statistical; Electroencephalography; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Models, Neurological; Pitch Perception/physiology
7.
Pediatrics; 110(6): 1153-1162, 2002 Dec.
Article in English | MEDLINE | ID: mdl-12456913

ABSTRACT

OBJECTIVE: Abnormalities in brain structure, cognition, and behavior have been described in children born prematurely. However, no direct in vivo evidence has yet demonstrated abnormal neural processing in these children. Our aim was to compare brain activity associated with phonologic and semantic processing of language between term and preterm children using functional magnetic resonance imaging (fMRI). METHODS: fMRI scans were acquired during a passive language comprehension task in 26 preterm children at 8 years of age and in 13 term community control children who were comparable in age, sex, maternal education, and minority status. IQ was assessed using a standard measure of intelligence. RESULTS: The pattern of brain activity identified in a semantic processing task in the preterm children closely resembled the pattern of brain activity identified in a phonologic processing task in term controls. The greater this resemblance in the preterm children, the lower their verbal comprehension IQ scores and the poorer their language comprehension during the scanning task. CONCLUSIONS: Preterm children with the poorest language comprehension seemed not to fully engage normal semantic processing pathways in a language comprehension task. These children instead engaged pathways that normal term children used to process meaningless phonologic sounds. Aberrant processing of semantic content in these preterm children may account in part for their lower verbal IQ scores.
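Purely as a schematic of the resemblance-versus-outcome logic reported above (not the study's actual fMRI analysis pipeline), the sketch below correlates a simple map-similarity score with verbal comprehension scores using synthetic data; all names and values are illustrative.

```python
# Schematic only: quantify how similar each preterm child's semantic-task activation
# map is to a reference phonologic-task map (voxelwise correlation), then relate
# that similarity to verbal comprehension scores. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_children = 5000, 26

term_phonologic_map = rng.normal(size=n_voxels)            # reference map (synthetic)
child_semantic_maps = rng.normal(size=(n_children, n_voxels))
verbal_scores = rng.normal(100, 15, size=n_children)

# Similarity of each child's map to the reference map
similarity = np.array([
    np.corrcoef(child_map, term_phonologic_map)[0, 1]
    for child_map in child_semantic_maps
])

# Association between map resemblance and verbal comprehension
r = np.corrcoef(similarity, verbal_scores)[0, 1]
print(f"resemblance-score correlation: r = {r:.2f}")
```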


Subject(s)
Brain/physiology; Cognition/physiology; Infant, Premature/physiology; Language Development; Magnetic Resonance Imaging; Benzophenones; Brain/anatomy & histology; Child; Female; Follow-Up Studies; Humans; Infant, Newborn; Intelligence Tests; Least-Squares Analysis; Male; Phonetics; Reference Values; Semantics