Results 1 - 2 of 2
1.
Front Bioeng Biotechnol ; 11: 1326706, 2023.
Article in English | MEDLINE | ID: mdl-38292305

ABSTRACT

Purpose: To construct a deep learning knowledge distillation framework exploring the use of MRI alone or combined with distilled arthroscopy information for meniscus tear detection.

Methods: A database of 199 paired knee arthroscopy-MRI exams was used to develop a multimodal teacher network and an MRI-based student network, both built on residual neural network architectures. A knowledge distillation framework comprising the multimodal teacher network T and the monomodal student network S was proposed. We optimized mean squared error (MSE) and cross-entropy (CE) loss functions to enable the student network S to learn arthroscopic information from the teacher network T through the distillation framework, ultimately yielding a distilled student network ST. A coronal proton density (PD)-weighted fat-suppressed MRI sequence was used in this study. Fivefold cross-validation was employed, and accuracy, sensitivity, specificity, F1-score, receiver operating characteristic (ROC) curves, and area under the ROC curve (AUC) were used to evaluate the medial and lateral meniscal tear detection performance of three models: the undistilled student model S, the distilled student model ST, and the teacher model T.

Results: The AUCs of the undistilled student model S, the distilled student model ST, and the teacher model T for medial meniscus (MM) / lateral meniscus (LM) tear detection were 0.773/0.672, 0.792/0.751, and 0.834/0.746, respectively. The distilled student model ST had higher AUCs than the undistilled model S. After knowledge distillation, the distilled student model achieved better accuracy (0.764/0.734), sensitivity (0.838/0.661), and F1-score (0.680/0.754) for medial/lateral tear detection than the undistilled model (accuracy 0.734/0.648, sensitivity 0.733/0.607, F1-score 0.620/0.673).
Conclusion: Through the knowledge distillation framework, the student model S based on MRI benefited from the multimodal teacher model T and achieved an improved meniscus tear detection performance.
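The training objective described in this abstract, a student that both mimics the teacher's arthroscopy-informed outputs (MSE term) and fits the ground-truth tear labels (CE term), can be sketched in plain Python. The function names, the logit-level matching, and the weighting `alpha` are illustrative assumptions, not the paper's implementation:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, alpha=0.5):
    # MSE term: student mimics the teacher's (arthroscopy-informed) outputs
    mse = sum((s - t) ** 2 for s, t in zip(student_logits, teacher_logits)) / len(student_logits)
    # CE term: student fits the ground-truth tear label
    probs = softmax(student_logits)
    ce = -math.log(probs[label])
    # Weighted combination; alpha balances distillation vs. supervision
    return alpha * mse + (1 - alpha) * ce
```

When the student already matches the teacher exactly, the MSE term vanishes and only the supervised CE term remains, so distillation never overrides the ground-truth labels.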

2.
Front Bioeng Biotechnol ; 10: 1024527, 2022.
Article in English | MEDLINE | ID: mdl-36246358

ABSTRACT

Purpose: To develop and evaluate a deep learning-based method to localize and classify anterior cruciate ligament (ACL) ruptures on knee MR images, using arthroscopy as the reference standard.

Methods: We proposed a fully automated system to localize and classify ACL ruptures. Classification was based on the projection of the ACL rupture point onto the line connecting the center coordinates of the femoral and tibial footprints. The line was divided into three equal parts, and the position of the projection determined the rupture class (femoral side, middle, or tibial side). In total, 85 patients (mean age: 27; male: 56) who underwent arthroscopic ACL reconstruction surgery were included. Three clinical readers evaluated the datasets separately, and their diagnostic performances were compared with those of the model. Performance metrics included accuracy, error rate, sensitivity, specificity, precision, and F1-score. One-way ANOVA was used to compare the performance of the convolutional neural networks (CNNs) and the clinical readers, and intraclass correlation coefficients (ICC) were used to assess interobserver agreement between the clinical readers.

Results: The accuracy of ACL localization was 3.77 ± 2.74 mm and 4.68 ± 3.92 mm for the three-dimensional (3D) and two-dimensional (2D) CNNs, respectively. There was no significant difference in ACL rupture localization performance between the 3D and 2D CNNs or among the clinical readers (accuracy, p < 0.01). The 3D CNN performed best among the five evaluators in classifying femoral-side (sensitivity 0.86, specificity 0.79), middle (sensitivity 0.71, specificity 0.84), and tibial-side ACL ruptures (sensitivity 0.71, specificity 0.99), and its overall accuracy for ACL rupture side classification was 0.79.
Conclusion: The proposed deep learning-based model achieved high diagnostic performance in localizing and classifying ACL ruptures on knee MR images.
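The classification rule described in this abstract, projecting the rupture point onto the femoral-tibial footprint line and splitting that line into thirds, can be sketched in plain Python. The function name and the use of raw 3D coordinates are illustrative assumptions; the paper works with projection coordinates derived from its localization network:

```python
def classify_rupture(femoral, tibial, rupture):
    """Classify an ACL rupture by its projection onto the footprint line.

    femoral, tibial: (x, y, z) center coordinates of the footprints.
    rupture: (x, y, z) coordinates of the rupture point.
    """
    fx, fy, fz = femoral
    tx, ty, tz = tibial
    rx, ry, rz = rupture
    # Direction vector of the footprint line and offset of the rupture point
    vx, vy, vz = tx - fx, ty - fy, tz - fz
    wx, wy, wz = rx - fx, ry - fy, rz - fz
    # Normalized projection parameter: 0 at the femoral footprint, 1 at the tibial
    t = (wx * vx + wy * vy + wz * vz) / (vx * vx + vy * vy + vz * vz)
    # Divide the line into three equal parts
    if t < 1 / 3:
        return "femoral side"
    elif t <= 2 / 3:
        return "middle"
    return "tibial side"
```

Because only the normalized projection parameter matters, the rule is invariant to the rupture point's distance from the footprint line itself.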
