1.
Acad Radiol; 29(9): 1350-1358, 2022 Sep.
Article in English | MEDLINE | ID: mdl-34649780

ABSTRACT

RATIONALE AND OBJECTIVES: To compare the performance of pneumothorax deep learning detection models trained with radiologist versus natural language processing (NLP) labels on the NIH ChestX-ray14 dataset.

MATERIALS AND METHODS: The ChestX-ray14 dataset consisted of 112,120 frontal chest radiographs with 5,302 positive and 106,818 negative labels for pneumothorax derived by NLP (dataset A). All 112,120 radiographs were also inspected by 4 radiologists, leaving a visually confirmed set of 5,138 positive and 104,751 negative for pneumothorax (dataset B). Datasets A and B were used independently to train 3 convolutional neural network (CNN) architectures (ResNet-50, DenseNet-121, and EfficientNetB3). Each model's area under the receiver operating characteristic curve (AUC) was evaluated with the official NIH test set and an external test set of 525 chest radiographs from our emergency department.

RESULTS: AUCs on the NIH internal test set were significantly higher for CNN models trained with radiologist versus NLP labels across all architectures. AUCs for the NLP/radiologist-label models were 0.838 (95% CI: 0.830, 0.846)/0.881 (95% CI: 0.873, 0.887) for ResNet-50 (p = 0.034), 0.839 (95% CI: 0.831, 0.847)/0.880 (95% CI: 0.873, 0.887) for DenseNet-121, and 0.869 (95% CI: 0.863, 0.876)/0.943 (95% CI: 0.939, 0.946) for EfficientNetB3 (p ≤ 0.001). Evaluation with the external test set also showed higher AUCs (p < 0.001) for the CNN models trained with radiologist versus NLP labels across all architectures. The AUCs for the NLP/radiologist-label models were 0.686 (95% CI: 0.632, 0.740)/0.806 (95% CI: 0.758, 0.854) for ResNet-50, 0.736 (95% CI: 0.686, 0.787)/0.871 (95% CI: 0.830, 0.912) for DenseNet-121, and 0.822 (95% CI: 0.775, 0.868)/0.915 (95% CI: 0.882, 0.948) for EfficientNetB3.

CONCLUSION: We demonstrated improved performance and generalizability of pneumothorax detection deep learning models trained with radiologist labels compared with models trained with NLP labels.
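The abstract does not include the authors' code. As a rough illustration of the general approach it describes, the following is a minimal, hypothetical Python/PyTorch sketch of fine-tuning an ImageNet-pretrained DenseNet-121 as a single-logit pneumothorax classifier and scoring it with AUC. The CSV files, column names, image size, and hyperparameters are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: fine-tune DenseNet-121 for binary pneumothorax detection
# and report AUC. File paths, CSV columns, and hyperparameters are assumptions
# for illustration, not the paper's actual configuration.
import pandas as pd
import torch
import torch.nn as nn
from PIL import Image
from sklearn.metrics import roc_auc_score
from torch.utils.data import DataLoader, Dataset
from torchvision import models, transforms

class CxrDataset(Dataset):
    """Frontal chest radiographs with binary pneumothorax labels."""
    def __init__(self, csv_path, transform):
        self.df = pd.read_csv(csv_path)          # assumed columns: image_path, label
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        img = Image.open(row["image_path"]).convert("RGB")
        return self.transform(img), torch.tensor([row["label"]], dtype=torch.float32)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.densenet121(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
model.classifier = nn.Linear(model.classifier.in_features, 1)  # single-logit output
model = model.to(device)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
train_loader = DataLoader(CxrDataset("train_labels.csv", tfm), batch_size=32, shuffle=True)
test_loader = DataLoader(CxrDataset("test_labels.csv", tfm), batch_size=32)

model.train()
for epoch in range(3):                                # short run, for illustration only
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()

# AUC on the held-out test set
model.eval()
scores, targets = [], []
with torch.no_grad():
    for images, labels in test_loader:
        scores.extend(torch.sigmoid(model(images.to(device))).cpu().flatten().tolist())
        targets.extend(labels.flatten().tolist())
print("AUC:", roc_auc_score(targets, scores))

The same training loop would be run twice, once with NLP-derived labels (dataset A) and once with radiologist-confirmed labels (dataset B), to reproduce the kind of comparison the abstract reports.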


Subject(s)
Deep Learning; Pneumothorax; Humans; Natural Language Processing; Pneumothorax/diagnostic imaging; Radiography, Thoracic; Radiologists; Retrospective Studies
2.
Radiol Artif Intell; 3(4): e200190, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34350409

ABSTRACT

PURPOSE: To assess the generalizability of a deep learning pneumothorax detection model on datasets from multiple external institutions and examine patient and acquisition factors that might influence performance.

MATERIALS AND METHODS: In this retrospective study, a deep learning model was trained for pneumothorax detection by merging two large open-source chest radiograph datasets: ChestX-ray14 and CheXpert. It was then tested on six external datasets from multiple independent institutions (labeled A-F) in a retrospective case-control design (data acquired between 2016 and 2019 from institutions A-E; institution F consisted of data from the MIMIC-CXR dataset). Performance on each dataset was evaluated by using area under the receiver operating characteristic curve (AUC) analysis, sensitivity, specificity, and positive and negative predictive values, with two radiologists in consensus serving as the reference standard. Patient and acquisition factors that influenced performance were analyzed.

RESULTS: The AUCs for pneumothorax detection for external institutions A-F were 0.91 (95% CI: 0.88, 0.94), 0.97 (95% CI: 0.94, 0.99), 0.91 (95% CI: 0.85, 0.97), 0.98 (95% CI: 0.96, 1.0), 0.97 (95% CI: 0.95, 0.99), and 0.92 (95% CI: 0.90, 0.95), respectively, compared with the internal test AUC of 0.93 (95% CI: 0.92, 0.93). The model had lower performance for small compared with large pneumothoraces (AUC, 0.88 [95% CI: 0.85, 0.91] vs AUC, 0.96 [95% CI: 0.95, 0.97]; P = .005). Model performance was not different when a chest tube was present or absent on the radiographs (AUC, 0.95 [95% CI: 0.92, 0.97] vs AUC, 0.94 [95% CI: 0.92, 0.95]; P > .99).

CONCLUSION: A deep learning model trained with a large volume of data on the task of pneumothorax detection was able to generalize well to multiple external datasets, with patient demographics and technical parameters independent of the training data.

Keywords: Thorax, Computer Applications-Detection/Diagnosis

See also commentary by Jacobson and Krupinski in this issue. Supplemental material is available for this article. ©RSNA, 2021.
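The abstract reports AUCs with 95% confidence intervals overall and by subgroup (pneumothorax size, chest tube presence) but does not spell out the statistical procedure here. Below is a hedged sketch of one common way to obtain such estimates, a percentile bootstrap over the external test set; the arrays y_true, y_score, and the "size" subgroup are placeholders, and the paper's own method may differ.

# Hypothetical sketch: nonparametric bootstrap 95% CI for AUC on an external
# test set, plus a per-subgroup estimate (e.g., small vs large pneumothorax).
# Data below are synthetic placeholders, not values from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the AUC."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:       # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), lo, hi

# Toy external test set (placeholder labels, scores, and subgroup assignment)
y_true = rng.integers(0, 2, 500)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 500), 0, 1)
subgroup = rng.choice(["small", "large"], 500)    # e.g., pneumothorax size

auc, lo, hi = bootstrap_auc_ci(y_true, y_score)
print(f"Overall AUC {auc:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
for g in ["small", "large"]:
    mask = subgroup == g
    auc, lo, hi = bootstrap_auc_ci(y_true[mask], y_score[mask])
    print(f"{g}: AUC {auc:.2f} (95% CI: {lo:.2f}, {hi:.2f})")

Comparing two subgroup AUCs for statistical significance (the P values in the abstract) would additionally require a paired test such as DeLong's method or a permutation test, which this sketch does not implement.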

4.
J Surg Res; 221: 232-245, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29229134

ABSTRACT

BACKGROUND: The use of live and cadaveric animal models in surgical training is well established as a means of teaching and improving surgical skill in a controlled setting. We aim to review, evaluate, and summarize the models published in the literature that are applicable to plastic surgery training.

MATERIALS AND METHODS: A PubMed search for keywords relating to animal models in plastic surgery and the associated procedures was conducted. Animal models shared with other specialties, such as microsurgery with neurosurgery and pinnaplasty with ear, nose, and throat surgery, were included because they were deemed relevant to our training curriculum. A level of evidence and recommendation assessment was then assigned to each surgical model.

RESULTS: Our review found animal models applicable to plastic surgery training in four major categories: microsurgery training, flap raising, facial surgery, and hand surgery. Twenty-four articles described methods of practicing microsurgical techniques on different types of animals. Fourteen articles described methods of conducting flap-based procedures, consisting of either local or perforator flap dissection. Eight articles described models for practicing hand surgery techniques. Finally, eight articles described animal models used for head and neck procedures.

CONCLUSIONS: A comprehensive summary of animal models related to plastic surgery training has been compiled. Cadaveric animal models provide a readily available introduction to many procedures and ought to be used instead of live models when feasible.


Subject(s)
Models, Animal; Plastic Surgery Procedures; Animals; Surgical Flaps