1.
Radiol Artif Intell ; 5(3): e220159, 2023 May.
Article in English | MEDLINE | ID: mdl-37293346

ABSTRACT

Purpose: To develop an efficient deep neural network model that incorporates context from neighboring image sections to detect breast cancer on digital breast tomosynthesis (DBT) images.

Materials and Methods: The authors adopted a transformer architecture that analyzes neighboring sections of the DBT stack. The proposed method was compared with two baselines: an architecture based on three-dimensional (3D) convolutions and a two-dimensional model that analyzes each section individually. The models were trained with 5174 four-view DBT studies, validated with 1000 four-view DBT studies, and tested on 655 four-view DBT studies, which were retrospectively collected from nine institutions in the United States through an external entity. Methods were compared using area under the receiver operating characteristic curve (AUC), sensitivity at a fixed specificity, and specificity at a fixed sensitivity.

Results: On the test set of 655 DBT studies, both 3D models showed higher classification performance than did the per-section baseline model. The proposed transformer-based model showed a significant increase in AUC (0.88 vs 0.91, P = .002), sensitivity (81.0% vs 87.7%, P = .006), and specificity (80.5% vs 86.4%, P < .001) at clinically relevant operating points when compared with the single-DBT-section baseline. The transformer-based model required only 25% of the floating-point operations used by the 3D convolution model while demonstrating similar classification performance.

Conclusion: A transformer-based deep neural network using data from neighboring sections improved breast cancer classification performance compared with a per-section baseline model and was more efficient than a model using 3D convolutions.

Keywords: Breast, Tomosynthesis, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN), Digital Breast Tomosynthesis, Breast Cancer, Deep Neural Networks, Transformers. Supplemental material is available for this article. © RSNA, 2023.
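
The architectural idea described above (a shared per-section feature extractor with a transformer attending across neighboring DBT sections to score the central one) can be illustrated with a short sketch. The following Python/PyTorch code is a minimal, hypothetical illustration of that pattern, not the authors' implementation: the module names, feature dimension, number of sections, and the stand-in linear "backbone" are all assumptions.

import torch
import torch.nn as nn

class SectionContextClassifier(nn.Module):
    def __init__(self, feat_dim=256, n_sections=5, n_heads=4, n_layers=2):
        super().__init__()
        # Stand-in per-section feature extractor; a real model would use a
        # 2D CNN backbone shared across sections.
        self.backbone = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        # Transformer layers let each section attend to its neighbors.
        self.transformer = nn.TransformerEncoder(encoder_layer, n_layers)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_sections, feat_dim))
        self.head = nn.Linear(feat_dim, 1)  # malignancy logit

    def forward(self, sections):
        # sections: (batch, n_sections, H*W) flattened neighboring DBT slices
        feats = self.backbone(sections) + self.pos_embed
        feats = self.transformer(feats)          # fuse context across sections
        center = feats[:, feats.shape[1] // 2]   # classify the middle section
        return self.head(center)

# Toy usage: two examples, five neighboring 64x64 sections each.
model = SectionContextClassifier()
logits = model(torch.randn(2, 5, 64 * 64))
print(logits.shape)  # -> torch.Size([2, 1])

Because the transformer only mixes a small number of per-section feature vectors, the extra cost over a per-section 2D model is modest, which is consistent with the efficiency argument made in the abstract.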

2.
Lancet Digit Health ; 2(3): e138-e148, 2020 Mar.
Article in English | MEDLINE | ID: mdl-33334578

ABSTRACT

BACKGROUND: Mammography is the current standard for breast cancer screening. This study aimed to develop an artificial intelligence (AI) algorithm for the diagnosis of breast cancer on mammography and to explore whether it could benefit radiologists by improving diagnostic accuracy.

METHODS: In this retrospective study, an AI algorithm was developed and validated with 170 230 mammography examinations collected from five institutions in South Korea, the USA, and the UK, including 36 468 cancer-positive examinations confirmed by biopsy, 59 544 benign examinations confirmed by biopsy (8827 mammograms) or follow-up imaging (50 717 mammograms), and 74 218 normal examinations. For the multicentre, observer-blinded reader study, 320 mammograms (160 cancer positive, 64 benign, 96 normal) were independently obtained from two institutions. 14 radiologists participated as readers and assessed each mammogram in terms of likelihood of malignancy (LOM), location of malignancy, and necessity to recall the patient, first without and then with the assistance of the AI algorithm. The performance of the AI and the radiologists was evaluated in terms of LOM-based area under the receiver operating characteristic curve (AUROC) and recall-based sensitivity and specificity.

FINDINGS: The standalone performance of the AI was an AUROC of 0·959 (95% CI 0·952-0·966) overall: 0·970 (0·963-0·978) in the South Korea dataset, 0·953 (0·938-0·968) in the USA dataset, and 0·938 (0·918-0·958) in the UK dataset. In the reader study, the performance of the AI was 0·940 (0·915-0·965), significantly higher than that of the radiologists without AI assistance (0·810, 95% CI 0·770-0·850; p<0·0001). With the assistance of the AI, the radiologists' performance improved to 0·881 (0·850-0·911; p<0·0001). The AI was more sensitive than the radiologists in detecting cancers presenting as a mass (53 [90%] vs 46 [78%] of 59 cancers detected; p=0·044) or as distortion or asymmetry (18 [90%] vs ten [50%] of 20 cancers detected; p=0·023). The AI was also better than the radiologists at detecting T1 cancers (73 [91%] vs 59 [74%] of 80; p=0·0039) and node-negative cancers (104 [87%] vs 88 [74%] of 119; p=0·0025).

INTERPRETATION: The AI algorithm developed with large-scale mammography data showed better diagnostic performance in breast cancer detection than the radiologists. The significant improvement in radiologists' performance when aided by the AI supports application of AI to mammograms as a diagnostic support tool.

FUNDING: Lunit.
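
Both abstracts report discrimination with AUROC plus operating-point metrics (sensitivity at a fixed specificity, or recall-based sensitivity and specificity). The short Python sketch below shows how such metrics are typically computed from per-examination malignancy scores; the variable names, the synthetic scores, and the 90% specificity operating point are illustrative assumptions, not values taken from either study.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def sensitivity_at_specificity(y_true, y_score, target_specificity=0.90):
    # Highest sensitivity among thresholds whose specificity meets the target.
    fpr, tpr, _ = roc_curve(y_true, y_score)
    ok = (1.0 - fpr) >= target_specificity
    return tpr[ok].max() if ok.any() else 0.0

# Synthetic labels and scores purely for illustration
# (0 = benign/normal, 1 = cancer).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(0.4 * y_true + rng.normal(0.4, 0.2, size=500), 0.0, 1.0)

print("AUROC:", round(roc_auc_score(y_true, y_score), 3))
print("Sensitivity at 90% specificity:",
      round(sensitivity_at_specificity(y_true, y_score), 3))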


Subject(s)
Artificial Intelligence, Breast Neoplasms/diagnostic imaging, Early Detection of Cancer, Mammography/methods, Adult, False Positive Reactions, Female, Humans, Middle Aged, Observer Variation, Radiology, Retrospective Studies