1.
Cleft Palate Craniofac J: 10556656241258687, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38860332

ABSTRACT

OBJECTIVE: A Growth and Feeding Clinic (GFC) focused on early intervention around feeding routines in patients with cleft lip and/or palate (CL/P) was implemented.
DESIGN: This study assessed the effect of preoperative feeding interventions provided by the GFC.
SETTING: Tertiary academic center.
METHODS: This study evaluated patients with CL/P who were cared for by the GFC and a control group of patients with CL/P. A weight-for-age (WFA) Z-score of less than -2.00 was used as the cutoff to classify patients as underweight during the preoperative period.
MAIN OUTCOME MEASURE: The primary outcome measure was the number of underweight patients who reached normal weight by the time of their cleft lip repair.
RESULTS: In both the GFC and control groups, 25% of patients with CL/P were underweight by WFA Z-score. Underweight GFC patients received more clinic visits (P < .001) and more GFC interventions (P < .001) than normal-weight GFC patients. At the time of cleft lip surgery, 64.1% of underweight GFC patients had reached normal weight, compared with 31.8% of underweight control patients (P = .0187).
CONCLUSION: Multidisciplinary care provided by the GFC targeted preoperative nutritional interventions to the highest-risk patients, doubling the percentage of patients who were of normal weight at the time of cleft lip repair. These results provide objective evidence that multidisciplinary team care of the infant with a cleft leads to measurable improvement in outcomes.
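The underweight classification above rests on a single computable rule: a weight-for-age (WFA) Z-score below -2.00. The sketch below illustrates that step using the LMS formulation common to pediatric growth references; the LMS reference values and patient weights are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the WFA Z-score cutoff described above.
# The LMS reference values and patient weights are hypothetical placeholders.
import math

UNDERWEIGHT_CUTOFF = -2.00  # Z-score threshold used in the study

def wfa_z_score(weight_kg: float, L: float, M: float, S: float) -> float:
    """LMS method: z = ((weight/M)**L - 1) / (L * S); log form when L == 0."""
    if L == 0:
        return math.log(weight_kg / M) / S
    return ((weight_kg / M) ** L - 1) / (L * S)

# Hypothetical LMS parameters for one age/sex stratum (placeholder values).
L_REF, M_REF, S_REF = -0.35, 5.8, 0.12

for patient_id, weight_kg in [("A", 4.3), ("B", 6.1)]:
    z = wfa_z_score(weight_kg, L_REF, M_REF, S_REF)
    status = "underweight" if z < UNDERWEIGHT_CUTOFF else "normal weight"
    print(f"patient {patient_id}: WFA Z = {z:+.2f} -> {status}")
```

In the study, the same cutoff defines both the preoperative underweight group and the "normal weight at cleft lip repair" outcome.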

2.
J Craniomaxillofac Surg ; 46(12): 2022-2026, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30420149

ABSTRACT

A globally available automated cleft speech evaluator has the potential to dramatically improve quality of life for children born with a cleft palate, as well as to eliminate bias in outcome collaborations between cleft centers in the developed world. Our automated cleft speech evaluator interprets resonance and articulatory cleft speech errors to distinguish between normal speech, velopharyngeal dysfunction, and articulatory speech errors. This article describes a significant update to the efficiency of our evaluator. Speech samples from our Craniofacial Team clinic were recorded and rated independently by two experienced speech pathologists: 60 patients were used to train the evaluator, which was then tested on the next 13 patients. All sounds from 6 of the CAPS-A-AM sentences were used to train the system. The agreement rate between the two speech pathologists was 79%. Our cleft speech evaluator achieved 85% agreement with the combined speech pathologist rating, compared with 65% agreement using the previous training model. The evaluator demonstrates good accuracy despite the small number of training samples. We anticipate that as the number of training samples increases, its accuracy will match that of human listeners.


Subject(s)
Cleft Palate/physiopathology, Speech Intelligibility, Velopharyngeal Insufficiency/physiopathology, Adolescent, Child, Child, Preschool, Female, Humans, Male, Markov Chains
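The 79% inter-rater and 85% evaluator agreement rates reported above are simple percent-agreement figures over the 13-patient test set. The sketch below shows how such figures might be computed; the three-way label coding, the rating sequences, and the consensus rule are illustrative assumptions, not the study's data or method.

```python
# Sketch of percent-agreement figures like those reported above (agreement
# between raters, and evaluator vs. combined rating). All ratings below are
# illustrative placeholders; the consensus rule is an assumption.

# Three-way scheme from the abstract:
# N = normal speech, V = velopharyngeal dysfunction, A = articulatory error.
slp_1     = list("NNVAVNNAVNNAV")   # speech pathologist 1, 13 hypothetical test patients
slp_2     = list("NNVAVNAAVNNVV")   # speech pathologist 2
evaluator = list("NNVAVNNAVNNVV")   # automated evaluator output

def percent_agreement(a, b):
    """Share of cases on which two label sequences agree, as a percentage."""
    assert len(a) == len(b)
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

# Agreement between the two speech pathologists.
print(f"SLP 1 vs SLP 2: {percent_agreement(slp_1, slp_2):.0f}%")

# One possible 'combined' rating: the cases where both SLPs agree.
consensus  = [x for x, y in zip(slp_1, slp_2) if x == y]
eval_match = [e for e, x, y in zip(evaluator, slp_1, slp_2) if x == y]
print(f"Evaluator vs combined rating: {percent_agreement(eval_match, consensus):.0f}%")
```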
3.
J Craniomaxillofac Surg ; 45(8): 1268-1271, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28602633

ABSTRACT

Perceptual evaluation remains the gold standard for assessing cleft speech, but, as with any human interpretation, it is subject to bias. Eliminating that bias to allow comparison of speech data between units is labor- and time-intensive, and globally there is a shortage of listeners. We have developed a computer learning system to evaluate cleft speech. Our automated cleft speech evaluator interprets resonance and articulatory cleft speech errors. Speech recognition engines typically ignore the voice characteristics and speech errors of the speaker, but in cleft speech evaluation these features are paramount. Our evaluator targets them to distinguish between normal speech, velopharyngeal dysfunction, and articulatory speech errors. Speech samples from our Craniofacial Team clinic were recorded and rated independently by two experienced speech pathologists: 60 patients were used to train the evaluator, which was then tested on the next 13 patients. The agreement rate between the two speech pathologists was 79%. Our cleft speech evaluator achieved 77% agreement on its best sentence and a median of 65% across all sentences. This automated cleft speech evaluator has applications for global cleft speech evaluation where no speech pathologist is available and for unbiased evaluation, facilitating collaboration between teams. We anticipate that as the number of training samples increases, its accuracy will match that of human listeners.


Subject(s)
Cleft Lip/physiopathology, Cleft Palate/physiopathology, Speech Recognition Software, Speech, Adolescent, Child, Child, Preschool, Female, Humans, Male, Speech Intelligibility
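The per-sentence figures above (77% agreement on the best sentence, 65% median across sentences) are summaries of sentence-level agreement rates. The short sketch below shows that summary step, assuming six test sentences for illustration; the per-sentence values are hypothetical placeholders, not the study's results.

```python
# Sketch of the per-sentence summary reported above (best sentence and median
# across sentences). The agreement values are hypothetical placeholders.
from statistics import median

# Hypothetical percent agreement between evaluator and raters per test sentence.
sentence_agreement = {
    "sentence_01": 77.0,
    "sentence_02": 68.0,
    "sentence_03": 65.0,
    "sentence_04": 65.0,
    "sentence_05": 60.0,
    "sentence_06": 55.0,
}

best_sentence, best_value = max(sentence_agreement.items(), key=lambda kv: kv[1])
print(f"best sentence: {best_sentence} at {best_value:.0f}% agreement")
print(f"median across sentences: {median(sentence_agreement.values()):.0f}%")
```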