Results 1 - 2 of 2
1.
Eur J Radiol Open ; 10: 100482, 2023.
Article in English | MEDLINE | ID: mdl-36941993

ABSTRACT

Rationale and objectives: Triage and diagnostic deep learning-based support solutions have started to take hold in everyday emergency radiology practice, with the hope of easing workflows. Although previous work has shown that artificial intelligence (AI) may improve radiologist and/or emergency physician reading performance, it was restricted to specific finding, body part and/or age subgroups, without evaluating a routine emergency workflow composed of chest and musculoskeletal, adult and pediatric cases. We aimed to evaluate a commercial deep learning-based solution for multiple musculoskeletal and chest radiographic findings on an adult and pediatric emergency workflow, focusing on discrepancies between emergency and radiology physicians.

Materials and methods: This retrospective, monocentric, observational study included 1772 patients who underwent an emergency radiograph between July and October 2020, excluding spine, skull and plain abdomen procedures. Emergency and radiology reports, obtained without AI as part of the clinical workflow, were collected, and discordant cases were reviewed to establish the radiological reference standard. Case-level AI outputs and emergency reports were compared to the reference standard. DeLong and Wald tests were used to compare ROC-AUC and sensitivity/specificity, respectively.

Results: The overall AI ROC-AUC was 0.954, with no difference across age or body part subgroups. Real-life emergency physicians' sensitivity was 93.7%, not significantly different from the AI model (P = 0.105); however, 172/1772 (9.7%) cases were misdiagnosed by emergency physicians. In this subset, AI accuracy was 90.1%.

Conclusion: This study highlights that a multiple-findings AI solution for emergency radiographs is efficient and complementary to emergency physicians, and could help reduce misdiagnosis in the absence of immediate radiological expertise.
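The ROC-AUC and sensitivity/specificity metrics reported in this abstract can be illustrated with a minimal, self-contained sketch. This is not the study's analysis code; the toy labels and scores below are hypothetical, and the AUC is computed via its Mann-Whitney rank interpretation rather than a trapezoidal ROC integration.

```python
def roc_auc(labels, scores):
    # AUC equals the probability that a randomly chosen positive case
    # scores higher than a randomly chosen negative one (ties count 0.5).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, preds):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 2 negative and 2 positive cases with model scores.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))   # -> 0.75
print(sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 1]))  # -> (0.5, 0.5)
```

The DeLong test used in the study additionally accounts for the covariance of paired AUC estimates; dedicated statistical packages are needed for that step.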

2.
Breast Cancer ; 29(6): 967-977, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35763243

ABSTRACT

OBJECTIVES: To demonstrate that radiologists, with the help of artificial intelligence (AI), are able to better classify screening mammograms into the correct Breast Imaging Reporting and Data System (BI-RADS) category, and, as a secondary objective, to explore the impact of AI on cancer detection and mammogram interpretation time.

METHODS: A multi-reader, multi-case study with a cross-over design was performed, including 314 mammograms. Twelve radiologists interpreted the examinations in two sessions separated by a 4-week wash-out period, with and without AI support. For each breast of each mammogram, they had to mark the most suspicious lesion (if any) and assign it a forced BI-RADS category and a level of suspicion, or "continuous BI-RADS 100". Cohen's kappa coefficient, evaluating the inter-observer agreement for the BI-RADS category per breast, and the area under the receiver operating characteristic curve (AUC) were used as metrics and analyzed.

RESULTS: On average, the quadratic kappa coefficient increased significantly when using AI for all readers [κ = 0.549, 95% CI (0.528-0.571) without AI and κ = 0.626, 95% CI (0.607-0.6455) with AI]. AUC improved significantly when using AI (0.74 vs 0.77, p = 0.004). Reading time was not significantly affected for all readers (106 s without AI vs 102 s with AI; p = 0.754).

CONCLUSIONS: When using AI, radiologists were able to better assign mammograms the correct BI-RADS category without slowing down interpretation.
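The quadratic-weighted kappa used above measures agreement between two readers on ordinal categories, penalizing disagreements by the squared distance between categories. A minimal sketch follows; the rater lists are hypothetical illustrations, not study data, and categories are assumed to be coded as integers 0..n_cat-1.

```python
def quadratic_weighted_kappa(rater1, rater2, n_cat):
    # kappa = 1 - sum(w * observed) / sum(w * expected), with quadratic
    # disagreement weights w[i][j] = (i - j)^2 / (n_cat - 1)^2.
    n = len(rater1)
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(rater1, rater2):
        obs[a][b] += 1
    # Marginal histograms give the expected matrix under independence.
    h1 = [sum(obs[i][j] for j in range(n_cat)) for i in range(n_cat)]
    h2 = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = (i - j) ** 2 / (n_cat - 1) ** 2
            num += w * obs[i][j]
            den += w * h1[i] * h2[j] / n
    return 1.0 - num / den

# Perfect agreement on three ordinal categories -> kappa = 1.0
print(quadratic_weighted_kappa([0, 1, 2], [0, 1, 2], 3))  # -> 1.0
# Chance-level agreement on two categories -> kappa = 0.0
print(quadratic_weighted_kappa([0, 0, 1, 1], [0, 1, 0, 1], 2))  # -> 0.0
```

With more than two readers, as in the study's twelve-reader design, pairwise kappas are typically averaged or a multi-rater generalization is used.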


Subject(s)
Breast Neoplasms , Female , Humans , Artificial Intelligence , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Early Detection of Cancer , Mammography/methods , Observer Variation , Cross-Over Studies