Results 1 - 4 of 4
1.
Acad Radiol ; 29(9): 1350-1358, 2022 09.
Article in English | MEDLINE | ID: mdl-34649780

ABSTRACT

RATIONALE AND OBJECTIVES: To compare the performance of pneumothorax deep learning detection models trained with radiologist versus natural language processing (NLP) labels on the NIH ChestX-ray14 dataset.

MATERIALS AND METHODS: The ChestX-ray14 dataset consisted of 112,120 frontal chest radiographs with 5,302 positive and 106,818 negative labels for pneumothorax derived by NLP (dataset A). All 112,120 radiographs were also inspected by 4 radiologists, leaving a visually confirmed set of 5,138 positive and 104,751 negative radiographs for pneumothorax (dataset B). Datasets A and B were used independently to train 3 convolutional neural network (CNN) architectures (ResNet-50, DenseNet-121, and EfficientNetB3). Each model's area under the receiver operating characteristic curve (AUC) was evaluated with the official NIH test set and an external test set of 525 chest radiographs from our emergency department.

RESULTS: AUCs on the NIH internal test set were significantly higher for CNN models trained with radiologist vs NLP labels across all architectures. AUCs for the NLP/radiologist-label models were 0.838 (95% CI: 0.830, 0.846)/0.881 (95% CI: 0.873, 0.887) for ResNet-50 (p = 0.034), 0.839 (95% CI: 0.831, 0.847)/0.880 (95% CI: 0.873, 0.887) for DenseNet-121, and 0.869 (95% CI: 0.863, 0.876)/0.943 (95% CI: 0.939, 0.946) for EfficientNetB3 (p ≤ 0.001). Evaluation with the external test set also showed higher AUCs (p < 0.001) for the CNN models trained with radiologist versus NLP labels across all architectures. The AUCs for the NLP/radiologist-label models were 0.686 (95% CI: 0.632, 0.740)/0.806 (95% CI: 0.758, 0.854) for ResNet-50, 0.736 (95% CI: 0.686, 0.787)/0.871 (95% CI: 0.830, 0.912) for DenseNet-121, and 0.822 (95% CI: 0.775, 0.868)/0.915 (95% CI: 0.882, 0.948) for EfficientNetB3.

CONCLUSION: We demonstrated improved performance and generalizability of pneumothorax detection deep learning models trained with radiologist labels compared to models trained with NLP labels.
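The head-to-head comparison above rests on the AUC, which can be read as the probability that a model scores a randomly chosen pneumothorax-positive radiograph above a randomly chosen negative one. A minimal self-contained sketch of that computation (toy scores and labels, not the study's data or code):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive is scored higher
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a model whose scores mostly separate the two classes.
y_true = [1, 1, 1, 0, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(auc(y_true, y_score))  # → 0.9166666666666666
```

In practice the two models under comparison would each produce their own score vector over the same test set, and the resulting AUCs would be compared with a paired statistical test.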


Subjects
Deep Learning; Pneumothorax; Humans; Natural Language Processing; Pneumothorax/diagnostic imaging; Radiography, Thoracic; Radiologists; Retrospective Studies
2.
Radiol Artif Intell ; 3(4): e200190, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34350409

ABSTRACT

PURPOSE: To assess the generalizability of a deep learning pneumothorax detection model on datasets from multiple external institutions and examine patient and acquisition factors that might influence performance.

MATERIALS AND METHODS: In this retrospective study, a deep learning model was trained for pneumothorax detection by merging two large open-source chest radiograph datasets: ChestX-ray14 and CheXpert. It was then tested on six external datasets from multiple independent institutions (labeled A-F) in a retrospective case-control design (data acquired between 2016 and 2019 from institutions A-E; institution F consisted of data from the MIMIC-CXR dataset). Performance on each dataset was evaluated by using area under the receiver operating characteristic curve (AUC) analysis, sensitivity, specificity, and positive and negative predictive values, with two radiologists in consensus used as the reference standard. Patient and acquisition factors that influenced performance were analyzed.

RESULTS: The AUCs for pneumothorax detection for external institutions A-F were 0.91 (95% CI: 0.88, 0.94), 0.97 (95% CI: 0.94, 0.99), 0.91 (95% CI: 0.85, 0.97), 0.98 (95% CI: 0.96, 1.0), 0.97 (95% CI: 0.95, 0.99), and 0.92 (95% CI: 0.90, 0.95), respectively, compared with the internal test AUC of 0.93 (95% CI: 0.92, 0.93). The model had lower performance for small compared with large pneumothoraces (AUC, 0.88 [95% CI: 0.85, 0.91] vs AUC, 0.96 [95% CI: 0.95, 0.97]; P = .005). Model performance was not different when a chest tube was present or absent on the radiographs (AUC, 0.95 [95% CI: 0.92, 0.97] vs AUC, 0.94 [95% CI: 0.92, 0.05]; P > .99).

CONCLUSION: A deep learning model trained with a large volume of data on the task of pneumothorax detection was able to generalize well to multiple external datasets with patient demographics and technical parameters independent of the training data. Keywords: Thorax, Computer Applications-Detection/Diagnosis. See also commentary by Jacobson and Krupinski in this issue. Supplemental material is available for this article. ©RSNA, 2021.
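The 95% confidence intervals attached to each AUC above are commonly obtained by bootstrap resampling of the test set (the abstract does not state which method was used, so treat this as an illustrative assumption). A plain-Python sketch with made-up scores:

```python
import random

def auc(labels, scores):
    """AUC as the probability a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC: resample cases with
    replacement, recompute the AUC each time, take the tail quantiles."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:  # the resample must contain both classes
            stats.append(auc(ys, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy usage: labels/scores replicated to mimic a small test set.
y = [1, 1, 1, 1, 0, 0, 0, 0] * 5
s = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1] * 5
lo, hi = bootstrap_ci(y, s)
print(f"AUC {auc(y, s):.3f} (95% CI: {lo:.3f}, {hi:.3f})")
```

With only a few hundred positive cases, as in several of the external test sets here, these intervals are wide, which is why the per-institution CIs above span several points of AUC.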

3.
Quant Imaging Med Surg ; 11(6): 2775-2779, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34079741

ABSTRACT

Advances in information technology have improved radiologists' ability to perform an increasing variety of targeted diagnostic exams. However, due to growing demand for imaging from an aging population, the number of exams could soon exceed the number of radiologists available to read them. Meanwhile, artificial intelligence (AI) has recently had resounding success in several case studies involving the interpretation of radiologic exams. As such, the integration of AI with standard diagnostic imaging practices to revolutionize medical care has been proposed, with the ultimate goal being the replacement of human radiologists with AI 'radiologists'. However, the complexity of medical tasks is often underestimated, and many proponents are oblivious to the limitations of AI algorithms. In this paper, we review the hype surrounding AI in medical imaging and how opinions have changed over the years, ultimately describing AI's shortcomings. Nonetheless, we believe that AI has the potential to assist radiologists. Therefore, we discuss ways AI can increase a radiologist's efficiency when integrated into the standard workflow.

4.
Quant Imaging Med Surg ; 11(2): 852-857, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33532283

ABSTRACT

Despite the overall success of using artificial intelligence (AI) to assist radiologists in computer-aided patient diagnosis, it remains challenging to build good models from the small datasets available at individual sites. Because many medical images do not come with labels suitable for training, radiologists must perform strenuous labelling work to prepare datasets for training. Placing such demands on radiologists is unsustainable, given the ever-increasing number of medical images acquired each year. We propose an alternative solution using a relatively new learning framework. This framework, called federated learning, allows individual sites to train a global model in a collaborative effort. Federated learning aggregates training results from multiple sites into a global model without directly sharing datasets, ensuring that patient privacy is maintained across sites. Furthermore, the added supervision obtained from the results of partnering sites improves the global model's overall detection ability, alleviating the issue of insufficient supervision when training AI models on small datasets. Lastly, we also address the major challenges of adopting federated learning.
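The aggregation step described above can be illustrated with federated averaging (FedAvg), the canonical scheme for this setting; the abstract does not name the specific aggregation rule, so this is an assumption for illustration. Each round, every site trains locally and sends only its parameter vector; the server combines them weighted by local dataset size, so no patient data leaves any site:

```python
def fed_avg(site_weights, site_sizes):
    """One round of federated averaging: combine each site's locally
    trained parameter vector, weighted by that site's dataset size.
    Only parameters (never images or labels) are shared."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[j] * n for w, n in zip(site_weights, site_sizes)) / total
        for j in range(dim)
    ]

# Toy example: three hospitals with 2-parameter "models" and
# different dataset sizes; the largest site dominates the average.
sites = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 300, 600]
print(fed_avg(sites, sizes))  # → [4.0, 5.0]
```

A real deployment would repeat this round many times, redistributing the averaged parameters to the sites for further local training between rounds.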
