1.
Front Neurosci ; 17: 1302132, 2023.
Article in English | MEDLINE | ID: mdl-38130696

ABSTRACT

Introduction: Post-stroke dysphagia is common and associated with significant morbidity and mortality, making bedside screening clinically important. Using voice as a biomarker coupled with deep learning has the potential to improve patient access to screening and mitigate the subjectivity of detecting voice change, a component of several validated screening protocols.

Methods: In this single-center study, we developed a proof-of-concept model for automated dysphagia screening and evaluated its performance on training and testing cohorts. Patients admitted to a comprehensive stroke center were recruited on a rolling basis; eligible participants were primary English speakers who could follow commands and did not have significant aphasia. The primary outcome was classification as a pass or fail equivalent, using a dysphagia screening test result as the label. Voice data were recorded from patients who spoke a standardized set of vowels, words, and sentences from the National Institutes of Health Stroke Scale. Seventy patients were recruited and 68 were included in the analysis, with 40 and 28 in the training and testing cohorts, respectively. Patient speech was segmented into 1,579 audio clips, from which 6,655 Mel-spectrogram images were computed and used as inputs for deep-learning models (DenseNet and ConvNeXt, separately and together). Clip-level and participant-level swallowing status predictions were obtained through a voting method.

Results: The models demonstrated clip-level dysphagia screening sensitivity of 71% and specificity of 77% (F1 = 0.73, AUC = 0.80 [95% CI: 0.78-0.82]). At the participant level, sensitivity and specificity were 89% and 79%, respectively (F1 = 0.81, AUC = 0.91 [95% CI: 0.77-1.05]).

Discussion: This study is the first to demonstrate the feasibility of applying deep learning to classify vocalizations to detect post-stroke dysphagia. Our findings suggest potential for enhancing dysphagia screening in clinical settings.
https://github.com/UofTNeurology/masa-open-source.
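The abstract describes aggregating per-clip model predictions into one participant-level label "through a voting method". The paper does not specify the scheme, so the sketch below assumes a simple majority vote over clips; the function name and data layout are hypothetical, not from the linked repository.

```python
from collections import defaultdict

def participant_vote(clip_preds, threshold=0.5):
    """Aggregate per-clip dysphagia predictions (0 = pass, 1 = fail)
    into one participant-level label by majority vote.

    clip_preds: iterable of (participant_id, clip_prediction) pairs.
    A participant is labeled 1 (fail) when at least `threshold` of
    their clips are flagged.
    """
    by_participant = defaultdict(list)
    for pid, pred in clip_preds:
        by_participant[pid].append(pred)
    return {pid: int(sum(preds) / len(preds) >= threshold)
            for pid, preds in by_participant.items()}

# Example: participant "A" fails (2 of 3 clips flagged), "B" passes.
preds = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0)]
print(participant_vote(preds))  # {'A': 1, 'B': 0}
```

One design note: voting over many short clips is what lets participant-level sensitivity (89%) exceed the clip-level figure (71%), since occasional misclassified clips are outvoted.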

2.
Med Image Anal ; 88: 102865, 2023 08.
Article in English | MEDLINE | ID: mdl-37331241

ABSTRACT

Cranial implants are commonly used for surgical repair of craniectomy-induced skull defects. These implants are usually generated offline and may require days to weeks to become available. An automated implant design process combined with onsite manufacturing facilities can guarantee immediate implant availability and avoid secondary intervention. To address this need, the AutoImplant II challenge was organized in conjunction with MICCAI 2021, targeting the unmet clinical and computational requirements of automatic cranial implant design. The first edition (AutoImplant I, 2020) demonstrated the general capabilities and effectiveness of data-driven approaches, including deep learning, for skull shape completion on synthetic defects. The second edition (AutoImplant II, 2021) built upon the first by adding real clinical craniectomy cases as well as additional synthetic imaging data. The challenge consisted of three tracks. Tracks 1 and 3 used skull images with synthetic defects to evaluate the ability of submitted approaches to generate implants that recreate the original skull shape. Track 3 reused the data from the first challenge (100 cases for training and 110 for evaluation), while Track 1 provided 570 training and 100 validation cases aimed at evaluating skull shape completion algorithms on diverse defect patterns. Track 2 went beyond the first challenge by providing 11 clinically defective skulls and evaluating the submitted implant designs on these clinical cases. The submitted designs were evaluated quantitatively against post-craniectomy imaging data as well as by an experienced neurosurgeon. Submissions to these challenge tasks made substantial progress in addressing issues such as generalizability, computational efficiency, data augmentation, and implant refinement.
This paper serves as a comprehensive summary and comparison of the submissions to the AutoImplant II challenge. Code and models are available at https://github.com/Jianningli/Autoimplant_II.


Subject(s)
Prostheses and Implants , Skull , Humans , Skull/diagnostic imaging , Skull/surgery , Craniotomy/methods , Head
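The skull-shape-completion tracks implicitly define the implant as the material present in the completed skull but absent from the defective input. A minimal voxel-level sketch of that idea, with a Dice coefficient as one common way such predictions are scored (function names are illustrative, not from the challenge codebase):

```python
import numpy as np

def implant_from_completion(defective, completed):
    """Derive a cranial implant as the voxel-wise difference between a
    completed skull and the defective input (both binary volumes)."""
    implant = np.logical_and(completed.astype(bool), ~defective.astype(bool))
    return implant.astype(np.uint8)

def dice(a, b):
    """Dice similarity coefficient between two binary volumes, a standard
    overlap metric for comparing a predicted implant to ground truth."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 1D "skulls": the defect is the gap in the middle of the array.
defective = np.array([1, 1, 0, 0, 1, 1])
completed = np.array([1, 1, 1, 1, 1, 1])
implant = implant_from_completion(defective, completed)
print(implant)                                   # [0 0 1 1 0 0]
print(dice(implant, np.array([0, 0, 1, 1, 0, 0])))  # 1.0
```

In practice the arrays are 3D CT-derived volumes, and the subtraction step is usually followed by morphological cleanup before manufacturing.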
3.
Bone ; 167: 116616, 2023 02.
Article in English | MEDLINE | ID: mdl-36402366

ABSTRACT

µCT images are commonly analysed to assess changes in bone density and microstructure in preclinical murine models. Several platforms provide automated analysis of bone microstructural parameters from volumetric regions of interest (ROIs). However, segmentation of the subchondral bone regions to create the volumetric ROIs remains a manual and time-consuming task. This study aimed to develop an automated end-to-end pipeline, combining segmentation and microstructural analysis, to evaluate subchondral bone in the mouse proximal knee. METHODS: A segmented dataset of µCT scans from 62 knees (healthy and arthritic) from 10-week-old male C57BL/6 mice was used to train a U-Net-type architecture to automate segmentation of the subchondral trabecular bone. These segmentations, together with the original scans and thresholded trabecular bone, were used as input for microstructural analysis. Manually and U-Net-segmented ROIs were fed into two available pipelines for microstructural analysis: the ITKBoneMorphometry library and CTAn (SkyScan). Outcome parameters were compared between pipelines, including bone volume (BV), total volume (TV), BV/TV, trabecular number (TbN), trabecular thickness (TbTh), trabecular separation (TbSp), and bone surface density (BS/BV). RESULTS: There was good agreement for all bone measures between the manual and U-Net pipelines using ITK (R = 0.88-0.98) and CTAn (R = 0.91-0.98). ITK and CTAn showed good agreement for BV, TV, BV/TV, TbTh, and BS/BV (R = 0.90-0.98). However, limited agreement was seen for TbN (R = 0.73) and TbSp (R = 0.59) due to methodological differences in how spacing is evaluated. Microstructural parameters generated from manual and automatic segmentations showed high correlation across all measures. The CTAn pipeline yielded strong R² values (0.83-0.96) and very strong agreement based on ICC (0.90-0.98).
The ITK pipeline yielded similarly high R² values (0.91-0.96, except for TbN at 0.77) and ICC values (0.88-0.98). The automated segmentations yielded lower average values for BV, TV, and BV/TV (differences ranging from 6.3 % to 14 %), but the differences were not found to be influenced by the mean ROI values. CONCLUSIONS: This integrated pipeline seamlessly automates both segmentation and quantification of proximal tibial subchondral bone microstructure. It allows the analysis of large volumes of data, and its open-source nature may enable the standardization of trabecular bone microstructural analysis across research groups.


Subject(s)
Bone Density , Bone and Bones , Male , Mice , Animals , Mice, Inbred C57BL , Bone and Bones/diagnostic imaging , Tibia/diagnostic imaging , Knee Joint/diagnostic imaging , X-Ray Microtomography/methods
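Two of the quantities at the core of this abstract are simple to state: BV/TV is the fraction of bone voxels inside the ROI mask, and the pipeline comparisons report Pearson correlations between parameter sets. A minimal sketch under those definitions (helper names are hypothetical, not from ITKBoneMorphometry or CTAn):

```python
import numpy as np

def bone_volume_fraction(mask):
    """BV/TV: bone voxels divided by total voxels in the binary ROI mask."""
    return float(mask.astype(bool).mean())

def pearson_r(x, y):
    """Pearson correlation coefficient, the R used when comparing
    manual vs. U-Net pipelines parameter by parameter."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Toy ROI: half of a 10x10x10 volume is bone.
roi = np.zeros((10, 10, 10))
roi[:5] = 1
print(bone_volume_fraction(roi))      # 0.5

# Perfectly proportional measurements correlate at R = 1.
print(pearson_r([1, 2, 3], [2, 4, 6]))  # 1.0
```

Metrics such as TbN and TbSp depend on how inter-trabecular spacing is modeled, which is exactly why the abstract reports weaker cross-pipeline agreement for those two parameters than for the volume-based ones.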