1.
Lancet Oncol ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38876123

ABSTRACT

BACKGROUND: Artificial intelligence (AI) systems can potentially aid the diagnostic pathway of prostate cancer by alleviating the increasing workload, preventing overdiagnosis, and reducing the dependence on experienced radiologists. We aimed to investigate the performance of AI systems at detecting clinically significant prostate cancer on MRI in comparison with radiologists using the Prostate Imaging-Reporting and Data System version 2.1 (PI-RADS 2.1) and the standard of care in multidisciplinary routine practice at scale. METHODS: In this international, paired, non-inferiority, confirmatory study, we trained and externally validated an AI system (developed within an international consortium) for detecting Gleason grade group 2 or greater cancers using a retrospective cohort of 10 207 MRI examinations from 9129 patients. Of these examinations, 9207 cases from three centres (11 sites) based in the Netherlands were used for training and tuning, and 1000 cases from four centres (12 sites) based in the Netherlands and Norway were used for testing. In parallel, we facilitated a multireader, multicase observer study with 62 radiologists (45 centres in 20 countries; median 7 [IQR 5-10] years of experience in reading prostate MRI) using PI-RADS (2.1) on 400 paired MRI examinations from the testing cohort. Primary endpoints were the sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC) of the AI system in comparison with that of all readers using PI-RADS (2.1) and in comparison with that of the historical radiology readings made during multidisciplinary routine practice (ie, the standard of care with the aid of patient history and peer consultation). Histopathology and at least 3 years (median 5 [IQR 4-6] years) of follow-up were used to establish the reference standard. 
The statistical analysis plan was prespecified with a primary hypothesis of non-inferiority (considering a margin of 0·05) and a secondary hypothesis of superiority towards the AI system, if non-inferiority was confirmed. This study was registered at ClinicalTrials.gov, NCT05489341. FINDINGS: Of the 10 207 examinations included from Jan 1, 2012, through Dec 31, 2021, 2440 cases had histologically confirmed Gleason grade group 2 or greater prostate cancer. In the subset of 400 testing cases in which the AI system was compared with the radiologists participating in the reader study, the AI system showed a statistically superior and non-inferior AUROC of 0·91 (95% CI 0·87-0·94; p<0·0001), in comparison to the pool of 62 radiologists with an AUROC of 0·86 (0·83-0·89), with a lower boundary of the two-sided 95% Wald CI for the difference in AUROC of 0·02. At the mean PI-RADS 3 or greater operating point of all readers, the AI system detected 6·8% more cases with Gleason grade group 2 or greater cancers at the same specificity (57·7%, 95% CI 51·6-63·3), or 50·4% fewer false-positive results and 20·0% fewer cases with Gleason grade group 1 cancers at the same sensitivity (89·4%, 95% CI 85·3-92·9). In all 1000 testing cases in which the AI system was compared with the radiology readings made during multidisciplinary practice, non-inferiority was confirmed, as the AI system showed marginally lower specificity (68·9% [95% CI 65·3-72·4] vs 69·0% [65·5-72·5]) at the same sensitivity (96·1%, 94·0-98·2) as the PI-RADS 3 or greater operating point: the lower boundary of the two-sided 95% Wald CI for the difference in specificity (-0·04) was greater than the non-inferiority margin (-0·05) and the p value was below the significance threshold (p<0·001). INTERPRETATION: An AI system was superior to radiologists using PI-RADS (2.1), on average, at detecting clinically significant prostate cancer and comparable to the standard of care. 
Such a system shows the potential to be a supportive tool within a primary diagnostic setting, with several associated benefits for patients and radiologists. Prospective validation is needed to test clinical applicability of this system. FUNDING: Health~Holland and EU Horizon 2020.
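The prespecified non-inferiority criterion in the study above (lower bound of the two-sided 95% Wald CI for the performance difference compared against a margin of 0·05) can be sketched as follows. This is an illustrative sketch only: the standard error below is an assumed stand-in, not a value reported by the study.

```python
def wald_ci(diff, se, z=1.96):
    """Two-sided 95% Wald confidence interval for a difference estimate."""
    return diff - z * se, diff + z * se

def non_inferior(diff, se, margin=-0.05):
    """Non-inferiority holds when the lower CI bound exceeds the margin."""
    lower, _ = wald_ci(diff, se)
    return lower > margin

# AI AUROC 0.91 vs reader pool 0.86; the SE of 0.015 is assumed for
# illustration and yields a lower CI bound of roughly 0.02.
print(non_inferior(0.91 - 0.86, 0.015))  # prints: True
```

Superiority (the secondary hypothesis) would additionally require the lower bound to exceed zero, which it does in this illustrative example.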

2.
Eur Radiol ; 2024 May 24.
Article in English | MEDLINE | ID: mdl-38787428

ABSTRACT

Multiparametric MRI is the optimal primary investigation when prostate cancer is suspected, and its ability to rule in and rule out clinically significant disease relies on high-quality anatomical and functional images. Avenues for achieving consistent high-quality acquisitions include meticulous patient preparation, scanner setup, optimised pulse sequences, personnel training, and artificial intelligence systems. The impact of these interventions on the final images needs to be quantified. The prostate imaging quality (PI-QUAL) scoring system was the first standardised quantification method that demonstrated the potential for clinical benefit by relating image quality to cancer detection ability by MRI. We present the updated version of PI-QUAL (PI-QUAL v2) which applies to prostate MRI performed with or without intravenous contrast medium using a simplified 3-point scale focused on critical technical and qualitative image parameters. CLINICAL RELEVANCE STATEMENT: High image quality is crucial for prostate MRI, and the updated version of the PI-QUAL score (PI-QUAL v2) aims to address the limitations of version 1. It is now applicable to both multiparametric MRI and MRI without intravenous contrast medium. KEY POINTS: High-quality images are essential for prostate cancer diagnosis and management using MRI. PI-QUAL v2 simplifies image assessment and expands its applicability to prostate MRI without contrast medium. PI-QUAL v2 focuses on critical technical and qualitative image parameters and emphasises T2-WI and DWI.

3.
Radiology ; 311(2): e231879, 2024 May.
Article in English | MEDLINE | ID: mdl-38771185

ABSTRACT

Background Multiparametric MRI (mpMRI) is effective for detecting prostate cancer (PCa); however, there is a high rate of equivocal Prostate Imaging Reporting and Data System (PI-RADS) 3 lesions and false-positive findings. Purpose To investigate whether fluorine 18 (18F) prostate-specific membrane antigen (PSMA) 1007 PET/CT after mpMRI can help detect localized clinically significant PCa (csPCa), particularly for equivocal PI-RADS 3 lesions. Materials and Methods This prospective study included participants with elevated prostate-specific antigen (PSA) levels referred for prostate mpMRI between September 2020 and February 2022. 18F-PSMA-1007 PET/CT was performed within 30 days of mpMRI and before biopsy. PI-RADS category and level of suspicion (LOS) were assessed. PI-RADS 3 or higher lesions at mpMRI and/or LOS 3 or higher lesions at 18F-PSMA-1007 PET/CT underwent targeted biopsies. PI-RADS 2 or lower and LOS 2 or lower lesions were considered nonsuspicious and were monitored during a 1-year follow-up by means of PSA testing. Diagnostic accuracy was assessed, with histologic examination serving as the reference standard. International Society of Urological Pathology (ISUP) grade 2 or higher was considered csPCa. Results Seventy-five participants (median age, 67 years [range, 52-77 years]) were assessed, with PI-RADS 1 or 2, PI-RADS 3, and PI-RADS 4 or 5 groups each including 25 participants. A total of 102 lesions were identified, of which 80 were PI-RADS 3 or higher and/or LOS 3 or higher and therefore underwent targeted biopsy. The per-participant sensitivity for the detection of csPCa was 95% and 91% for mpMRI and 18F-PSMA-1007 PET/CT, respectively, with respective specificities of 45% and 62%. 18F-PSMA-1007 PET/CT was used to correctly differentiate 17 of 26 PI-RADS 3 lesions (65%), with a negative and positive predictive value of 93% and 27%, respectively, for ruling out or detecting csPCa. 
One additional significant and one insignificant PCa lesion (PI-RADS 1 or 2) were found at 18F-PSMA-1007 PET/CT that otherwise would have remained undetected. Two participants had ISUP 2 tumors without PSMA uptake that were missed at PET/CT. Conclusion 18F-PSMA-1007 PET/CT showed good sensitivity and moderate specificity for the detection of csPCa and ruled this out in 93% of participants with PI-RADS 3 lesions. Clinical trial registration no. NCT04487847. © RSNA, 2024. Supplemental material is available for this article. See also the editorial by Turkbey in this issue.
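The per-participant measures quoted in this abstract (sensitivity, specificity, PPV, NPV) all derive from a 2×2 confusion matrix. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy measures from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical example: 19 of 20 csPCa cases detected, 30 of 55
# non-csPCa cases correctly ruled out.
m = diagnostic_metrics(tp=19, fp=25, tn=30, fn=1)
# sensitivity 0.95, specificity about 0.55
```

As in the abstract, a high NPV with a modest PPV is what makes such a test useful for ruling out rather than ruling in disease.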


Subject(s)
Fluorine Radioisotopes , Multiparametric Magnetic Resonance Imaging , Positron Emission Tomography Computed Tomography , Prostatic Neoplasms , Humans , Male , Positron Emission Tomography Computed Tomography/methods , Prostatic Neoplasms/diagnostic imaging , Multiparametric Magnetic Resonance Imaging/methods , Prospective Studies , Aged , Middle Aged , Niacinamide/analogs & derivatives , Oligopeptides , Radiopharmaceuticals , Prostate/diagnostic imaging , Sensitivity and Specificity
4.
Radiology ; 310(1): e230981, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38193833

ABSTRACT

Background Multiple commercial artificial intelligence (AI) products exist for assessing radiographs; however, comparable performance data for these algorithms are limited. Purpose To perform an independent, stand-alone validation of commercially available AI products for bone age prediction based on hand radiographs and lung nodule detection on chest radiographs. Materials and Methods This retrospective study was carried out as part of Project AIR. Nine of 17 eligible AI products were validated on data from seven Dutch hospitals. For bone age prediction, the root mean square error (RMSE) and Pearson correlation coefficient were computed. The reference standard was set by three to five expert readers. For lung nodule detection, the area under the receiver operating characteristic curve (AUC) was computed. The reference standard was set by a chest radiologist based on CT. Randomized subsets of hand (n = 95) and chest (n = 140) radiographs were read by 14 and 17 human readers, respectively, with varying experience. Results Two bone age prediction algorithms were tested on hand radiographs (from January 2017 to January 2022) in 326 patients (mean age, 10 years ± 4 [SD]; 173 female patients) and correlated strongly with the reference standard (r = 0.99; P < .001 for both). No difference in RMSE was observed between algorithms (0.63 years [95% CI: 0.58, 0.69] and 0.57 years [95% CI: 0.52, 0.61]) and readers (0.68 years [95% CI: 0.64, 0.73]). Seven lung nodule detection algorithms were validated on chest radiographs (from January 2012 to May 2022) in 386 patients (mean age, 64 years ± 11; 223 male patients). Compared with readers (mean AUC, 0.81 [95% CI: 0.77, 0.85]), four algorithms performed better (AUC range, 0.86-0.93; P value range, <.001 to .04). 
Conclusion Compared with human readers, four AI algorithms for detecting lung nodules on chest radiographs showed improved performance, whereas the remaining algorithms tested showed no evidence of a difference in performance. © RSNA, 2024. Supplemental material is available for this article. See also the editorial by Omoumi and Richiardi in this issue.
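The two bone-age endpoints named above, RMSE (in years) against the expert reference standard and the Pearson correlation coefficient, have compact definitions. A minimal sketch with hypothetical sample values:

```python
import math

def rmse(pred, ref):
    """Root mean square error between predictions and reference values."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred))

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical predicted vs reference bone ages (years)
print(rmse([8.2, 10.1, 12.0], [8.0, 10.0, 12.5]))  # about 0.32 years
```

Note the pairing of the two metrics in the study: a near-perfect correlation (r = 0.99) says predictions track the reference, while the RMSE quantifies the typical absolute error in years.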


Subject(s)
Artificial Intelligence , Software , Humans , Female , Male , Child , Middle Aged , Retrospective Studies , Algorithms , Lung
5.
Eur Radiol ; 34(1): 348-354, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37515632

ABSTRACT

OBJECTIVES: To map the clinical use of CE-marked artificial intelligence (AI)-based software in radiology departments in the Netherlands (n = 69) between 2020 and 2022. MATERIALS AND METHODS: Our AI network (one radiologist or AI representative per Dutch hospital organization) received a questionnaire each spring from 2020 to 2022 about AI product usage, financing, and obstacles to adoption. Products that were not listed on www.AIforRadiology.com by July 2022 were excluded from the analysis. RESULTS: The number of respondents was 43 in 2020, 36 in 2021, and 33 in 2022. The number of departments using AI has been growing steadily (2020: 14, 2021: 19, 2022: 23). The diversity (2020: 7, 2021: 18, 2022: 34) and the total number of implementations (2020: 19, 2021: 38, 2022: 68) have rapidly increased. Seven implementations were discontinued in 2022. Four hospital organizations reported using an AI platform or marketplace for the deployment of AI solutions. AI is mostly used to support chest CT (17), neuro CT (17), and musculoskeletal radiograph (12) analysis. A budget for AI was reserved in 13 of the responding centers in both 2021 and 2022. The most important obstacles to the adoption of AI remained costs and IT integration. Of the respondents, 28% stated that the implemented AI products realized health improvement and 32% assumed both health improvement and cost savings. CONCLUSION: The adoption of AI products in radiology departments in the Netherlands is showing common signs of a developing market. The major obstacles to reaching widespread adoption are a lack of financial resources and IT integration difficulties. CLINICAL RELEVANCE STATEMENT: The clinical impact of AI starts with its adoption in daily clinical practice. Increased transparency around AI products being adopted, implementation obstacles, and impact may inspire increased collaboration and improved decision-making around the implementation and financing of AI products. 
KEY POINTS: • The adoption of artificial intelligence products for radiology has steadily increased since 2020 to at least a third of the centers using AI in clinical practice in the Netherlands in 2022. • The main areas in which artificial intelligence products are used are lung nodule detection on CT, aided stroke diagnosis, and bone age prediction. • The majority of respondents experienced added value (decreased costs and/or improved outcomes) from using artificial intelligence-based software; however, major obstacles to adoption remain the costs and IT-related difficulties.


Subject(s)
Artificial Intelligence , Radiology , Humans , Netherlands , Radiography , Radiologists
7.
Radiol Artif Intell ; 5(5): e230031, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37795142

ABSTRACT

Purpose: To evaluate a novel method of semisupervised learning (SSL) guided by automated sparse information from diagnostic reports to leverage additional data for deep learning-based malignancy detection in patients with clinically significant prostate cancer. Materials and Methods: This retrospective study included 7756 prostate MRI examinations (6380 patients) performed between January 2014 and December 2020 for model development. An SSL method, report-guided SSL (RG-SSL), was developed for detection of clinically significant prostate cancer using biparametric MRI. RG-SSL, supervised learning (SL), and state-of-the-art SSL methods were trained using 100, 300, 1000, or 3050 manually annotated examinations. Performance on detection of clinically significant prostate cancer by RG-SSL, SL, and SSL was compared on 300 unseen examinations from an external center with a histopathologically confirmed reference standard. Performance was evaluated using receiver operating characteristic (ROC) and free-response ROC analysis. P values for performance differences were generated with a permutation test. Results: At 100 manually annotated examinations, mean examination-based diagnostic area under the ROC curve (AUC) values for RG-SSL, SL, and the best SSL were 0.86 ± 0.01 (SD), 0.78 ± 0.03, and 0.81 ± 0.02, respectively. Lesion-based detection partial AUCs were 0.62 ± 0.02, 0.44 ± 0.04, and 0.48 ± 0.09, respectively. Examination-based performance of SL with 3050 examinations was matched by RG-SSL with 169 manually annotated examinations, thus requiring 14 times fewer annotations. Lesion-based performance was matched with 431 manually annotated examinations, requiring six times fewer annotations. 
Conclusion: RG-SSL outperformed SSL in clinically significant prostate cancer detection and achieved performance similar to SL even at very low annotation budgets. Keywords: Annotation Efficiency, Computer-aided Detection and Diagnosis, MRI, Prostate Cancer, Semisupervised Deep Learning. Supplemental material is available for this article. Published under a CC BY 4.0 license.

8.
Radiology ; 308(3): e230275, 2023 09.
Article in English | MEDLINE | ID: mdl-37724961

ABSTRACT

Background A priori identification of patients at risk of artificial intelligence (AI) failure in diagnosing cancer would contribute to the safer clinical integration of diagnostic algorithms. Purpose To evaluate AI prediction variability as an uncertainty quantification (UQ) metric for identifying cases at risk of AI failure in diagnosing cancer at MRI and CT across different cancer types, data sets, and algorithms. Materials and Methods Multicenter data sets and publicly available AI algorithms from three previous studies that evaluated detection of pancreatic cancer on contrast-enhanced CT images, detection of prostate cancer on MRI scans, and prediction of pulmonary nodule malignancy on low-dose CT images were analyzed retrospectively. Each task's algorithm was extended to generate an uncertainty score based on ensemble prediction variability. AI accuracy percentage and partial area under the receiver operating characteristic curve (pAUC) were compared between certain and uncertain patient groups in a range of percentile thresholds (10%-90%) for the uncertainty score using permutation tests for statistical significance. The pulmonary nodule malignancy prediction algorithm was compared with 11 clinical readers for the certain group (CG) and uncertain group (UG). Results In total, 18 022 images were used for training and 838 images were used for testing. AI diagnostic accuracy was higher for the cases in the CG across all tasks (P < .001). At an 80% threshold of certain predictions, accuracy in the CG was 21%-29% higher than in the UG and 4%-6% higher than in the overall test data sets. The lesion-level pAUC in the CG was 0.25-0.39 higher than in the UG and 0.05-0.08 higher than in the overall test data sets (P < .001). 
For pulmonary nodule malignancy prediction, accuracy of AI was on par with clinicians for cases in the CG (AI results vs clinician results, 80% [95% CI: 76, 85] vs 78% [95% CI: 70, 87]; P = .07) but worse for cases in the UG (AI results vs clinician results, 50% [95% CI: 37, 64] vs 68% [95% CI: 60, 76]; P < .001). Conclusion An AI-prediction UQ metric consistently identified reduced performance of AI in cancer diagnosis. © RSNA, 2023. Supplemental material is available for this article. See also the editorial by Babyn in this issue.
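The uncertainty quantification described above, prediction variability across an ensemble, with a percentile threshold separating certain from uncertain cases, can be sketched as follows. The choice of standard deviation as the variability measure and the exact thresholding rule are assumptions for illustration, not the study's implementation.

```python
import statistics

def uncertainty_score(member_probs):
    """Prediction variability (SD) across ensemble members for one case."""
    return statistics.pstdev(member_probs)

def split_by_certainty(scores, pct=80):
    """Split case indices into certain/uncertain groups at a percentile
    threshold of the uncertainty score."""
    cutoff = sorted(scores)[min(len(scores) - 1, int(len(scores) * pct / 100))]
    certain = [i for i, s in enumerate(scores) if s <= cutoff]
    uncertain = [i for i, s in enumerate(scores) if s > cutoff]
    return certain, uncertain

# Agreeing ensemble members -> low uncertainty; disagreeing -> high
print(uncertainty_score([0.8, 0.8, 0.8]))  # prints: 0.0
```

Accuracy and pAUC would then be compared between the two index groups, as in the abstract's 10%-90% threshold sweep.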


Subject(s)
Lung Neoplasms , Mental Disorders , Male , Humans , Artificial Intelligence , Retrospective Studies , Magnetic Resonance Imaging , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
9.
Eur J Radiol ; 165: 110928, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37354769

ABSTRACT

PURPOSE: The guidelines for prostate cancer recommend the use of MRI in the prostate cancer pathway. Due to the variability in prostate MR image quality, the reliability of this technique in the detection of prostate cancer is highly variable in clinical practice. This leads to the need for an objective and automated assessment of image quality to ensure an adequate acquisition and thereby improve the reliability of MRI. The aim of this study was to investigate the feasibility of the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) and radiomics in automated image quality assessment of T2-weighted (T2W) images. METHOD: Anonymized axial T2W images from 140 patients were scored for quality using a five-point Likert scale (low, suboptimal, acceptable, good, very good quality) in consensus by two readers. Images were dichotomized into clinically acceptable (very good, good and acceptable quality images) and clinically unacceptable (low and suboptimal quality images) in order to train and verify the model. Radiomics and BRISQUE features were extracted from a central cuboid volume including the prostate. A reduced feature set was used to fit a Linear Discriminant Analysis (LDA) model to predict image quality. Two hundred times repeated 5-fold cross-validation was used to train the model and test performance by assessing the classification accuracy, the discrimination accuracy as the area under the receiver operating characteristic curve (ROC-AUC), and by generating confusion matrices. RESULTS: Thirty-four images were classified as clinically unacceptable and 106 were classified as clinically acceptable. The accuracy of the independent test set (mean ± standard deviation) was 85.4 ± 5.5%. The mean ROC-AUC was 0.856 (95% CI: 0.851-0.861). CONCLUSIONS: Radiomics AI can automatically detect a significant portion of T2W images of suboptimal image quality. 
This can help improve image quality at the time of acquisition, thus reducing repeat scans and improving diagnostic accuracy.
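The validation scheme above, repeated 5-fold cross-validation, can be sketched with a trivial threshold "model" standing in for the LDA classifier; the data and the stand-in model are hypothetical, shown only to make the resampling logic concrete.

```python
import random

def repeated_kfold_accuracy(X, y, fit, k=5, repeats=10, seed=0):
    """Mean test-fold accuracy over `repeats` random k-fold splits."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    accs = []
    for _ in range(repeats):
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for f in range(k):
            # Fit on the other k-1 folds, evaluate on the held-out fold
            train = [i for g in range(k) if g != f for i in folds[g]]
            model = fit([X[i] for i in train], [y[i] for i in train])
            accs.append(sum(model(X[i]) == y[i] for i in folds[f]) / len(folds[f]))
    return sum(accs) / len(accs)

# Stand-in "model": threshold at the mean of the training feature values
def fit_threshold(X_train, y_train):
    t = sum(X_train) / len(X_train)
    return lambda x: int(x > t)
```

The study repeated the 5-fold split 200 times; the same loop applies, with the LDA fit replacing `fit_threshold`.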


Subject(s)
Prostate , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Reproducibility of Results , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Linear Models , Retrospective Studies
10.
Nat Rev Urol ; 20(1): 9-22, 2023 01.
Article in English | MEDLINE | ID: mdl-36168056

ABSTRACT

Multiparametric MRI of the prostate is now recommended as the initial diagnostic test for men presenting with suspected prostate cancer, with a negative MRI enabling safe avoidance of biopsy and a positive result enabling MRI-directed sampling of lesions. The diagnostic pathway consists of several steps, from initial patient presentation and preparation to performing and interpreting MRI, communicating the imaging findings, outlining the prostate and intra-prostatic target lesions, performing the biopsy and assessing the cores. Each component of this pathway requires experienced clinicians, optimized equipment, good inter-disciplinary communication between specialists, and standardized workflows in order to achieve the expected outcomes. Assessment of quality and mitigation measures are essential for the success of the MRI-directed prostate cancer diagnostic pathway. Quality assurance processes including Prostate Imaging-Reporting and Data System, template biopsy, and pathology guidelines help to minimize variation and ensure optimization of the diagnostic pathway. Quality control systems including the Prostate Imaging Quality scoring system, patient-level outcomes (such as Prostate Imaging-Reporting and Data System MRI score assignment and cancer detection rates), multidisciplinary meeting review and audits might also be used to provide consistency of outcomes and ensure that all the benefits of the MRI-directed pathway are achieved.


Subject(s)
Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnosis , Magnetic Resonance Imaging/methods , Prostate/pathology , Multiparametric Magnetic Resonance Imaging/methods , Biopsy/methods , Image-Guided Biopsy
11.
Pediatr Radiol ; 52(11): 2087-2093, 2022 10.
Article in English | MEDLINE | ID: mdl-34117522

ABSTRACT

Since the introduction of artificial intelligence (AI) in radiology, the promise has been that it will improve health care and reduce costs. Has AI been able to fulfill that promise? We describe six clinical objectives that can be supported by AI: a more efficient workflow, shortened reading time, a reduction of dose and contrast agents, earlier detection of disease, improved diagnostic accuracy and more personalized diagnostics. We provide examples of use cases including the available scientific evidence for its impact based on a hierarchical model of efficacy. We conclude that the market is still maturing and little is known about the contribution of AI to clinical practice. More real-world monitoring of AI in clinical practice is expected to aid in determining the value of AI and making informed decisions on development, procurement and reimbursement.


Subject(s)
Artificial Intelligence , Radiology , Contrast Media , Humans , Outcome Assessment, Health Care , Radiography
12.
Eur Radiol ; 32(2): 876-878, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34842957

ABSTRACT

KEY POINTS: • It is mandatory to evaluate the image quality of a prostate MRI scan, and to mention this quality in the report. • PI-QUAL v1 is an essential starting tool to standardize the evaluation of the quality of prostate MR images as objectively as possible. • PI-QUAL will develop, step by step, into a reliable quality assessment tool to ensure that the first step of the MRI pathway is as accurate as possible.


Subject(s)
Prostate , Prostatic Neoplasms , Humans , Magnetic Resonance Imaging , Male , Pelvis , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging
13.
Eur Radiol ; 32(4): 2224-2234, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34786615

ABSTRACT

OBJECTIVES: To assess Prostate Imaging Reporting and Data System (PI-RADS)-trained deep learning (DL) algorithm performance and to investigate the effect of data size and prior knowledge on the detection of clinically significant prostate cancer (csPCa) in biopsy-naïve men with a suspicion of PCa. METHODS: Multi-institution data included 2734 consecutive biopsy-naïve men with elevated PSA levels (≥ 3 ng/mL) that underwent multi-parametric MRI (mpMRI). mpMRI exams were prospectively reported using PI-RADS v2 by expert radiologists. A DL framework was designed and trained on center 1 data (n = 1952) to predict PI-RADS ≥ 4 (n = 1092) lesions from bi-parametric MRI (bpMRI). Experiments included varying the number of cases and the use of automatic zonal segmentation as a DL prior. Independent center 2 cases (n = 296) that included pathology outcome (systematic and MRI targeted biopsy) were used to compute performance for radiologists and DL. The performance of detecting PI-RADS 4-5 and Gleason > 6 lesions was assessed on 782 unseen cases (486 center 1, 296 center 2) using free-response ROC (FROC) and ROC analysis. RESULTS: The DL sensitivity for detecting PI-RADS ≥ 4 lesions was 87% (193/223, 95% CI: 82-91) at an average of 1 false positive (FP) per patient, and an AUC of 0.88 (95% CI: 0.84-0.91). The DL sensitivity for the detection of Gleason > 6 lesions was 85% (79/93, 95% CI: 77-83) @ 1 FP compared to 91% (85/93, 95% CI: 84-96) @ 0.3 FP for a consensus panel of expert radiologists. Data size and prior zonal knowledge significantly affected performance (4%, [Formula: see text]). CONCLUSION: PI-RADS-trained DL can accurately detect and localize Gleason > 6 lesions. DL could reach expert performance when trained on substantially more than 2000 cases and when using DL-based zonal segmentation as prior knowledge. KEY POINTS: • AI for prostate MRI analysis depends strongly on data size and prior zonal knowledge. • AI needs substantially more than 2000 training cases to achieve expert performance.


Subject(s)
Deep Learning , Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Humans , Image-Guided Biopsy , Magnetic Resonance Imaging/methods , Male , Prostate/pathology , Prostatic Neoplasms/pathology , Retrospective Studies
14.
Eur Urol Focus ; 8(5): 1187-1191, 2022 09.
Article in English | MEDLINE | ID: mdl-34922897

ABSTRACT

Magnetic resonance imaging (MRI) has transformed the diagnostic pathway for prostate cancer and now plays an upfront role before prostate biopsies. If a suspicious lesion is found on MRI, the subsequent biopsy can be targeted. A sharp increase is expected in the number of men who will undergo prostate MRI. The challenge is to provide good image quality and diagnostic accuracy while meeting the demands of the expected higher workload. A possible solution to this challenge is to include a suitable risk stratification tool before imaging. Other solutions, such as smarter and shorter MRI protocols, need to be explored. For most of these solutions, artificial intelligence (AI) can play an important role. AI applications have the potential to improve the diagnostic quality of the prostate MRI pathway and speed up the work. PATIENT SUMMARY: The use of prostate magnetic resonance imaging (MRI) for diagnosis of prostate cancer is increasing. Risk stratification of patients before imaging and the use of shorter scan protocols can help in managing MRI resources. Artificial intelligence can also play a role in automating some tasks.


Subject(s)
Image-Guided Biopsy , Prostatic Neoplasms , Male , Humans , Image-Guided Biopsy/methods , Artificial Intelligence , Early Detection of Cancer , Prostatic Neoplasms/pathology , Magnetic Resonance Imaging/methods , Risk Assessment
15.
Insights Imaging ; 12(1): 133, 2021 Sep 25.
Article in English | MEDLINE | ID: mdl-34564764

ABSTRACT

BACKGROUND: Limited evidence is available on the clinical impact of artificial intelligence (AI) in radiology. Early health technology assessment (HTA) is a methodology to assess the potential value of an innovation at an early stage. We use early HTA to evaluate the potential value of AI software in radiology. As a use case, we evaluate the cost-effectiveness of AI software aiding the detection of intracranial large vessel occlusions (LVO) in stroke in comparison to standard care. We used a Markov-based model from a societal perspective of the United Kingdom predominantly using stroke registry data complemented with pooled outcome data from large, randomized trials. Different scenarios were explored by varying missed diagnoses of LVOs, AI costs and AI performance. Other input parameters were varied to demonstrate model robustness. Results were reported in expected incremental costs (IC) and effects (IE) expressed in quality adjusted life years (QALYs). RESULTS: Applying the base case assumptions (6% missed diagnoses of LVOs by clinicians, $40 per AI analysis, 50% reduction of missed LVOs by AI) resulted in cost savings and incremental QALYs over the projected lifetime (IC: -$156, -0.23%; IE: +0.01 QALYs, +0.07%) per suspected ischemic stroke patient. For each yearly cohort of patients in the UK this translates to a total cost saving of $11 million. CONCLUSIONS: AI tools for LVO detection in emergency care have the potential to improve healthcare outcomes and save costs. We demonstrate how early HTA may be applied for the evaluation of clinically applied AI software for radiology.
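The base-case result above reduces to simple incremental bookkeeping between the AI-aided and standard-care arms. The absolute cost and QALY inputs below are hypothetical stand-ins chosen only to reproduce the quoted per-patient IC of -$156 and IE of +0.01 QALYs; they are not values from the model.

```python
def incremental(cost_ai, qaly_ai, cost_std, qaly_std):
    """Per-patient incremental cost (IC) and effect (IE); 'dominant' means
    the intervention both saves money and adds QALYs."""
    ic = cost_ai - cost_std
    ie = qaly_ai - qaly_std
    return ic, ie, (ic < 0 and ie > 0)

# Hypothetical absolute per-patient values (illustration only)
ic, ie, dominant = incremental(cost_ai=9_844, qaly_ai=5.01,
                               cost_std=10_000, qaly_std=5.00)
```

A dominant result like this one needs no incremental cost-effectiveness ratio: the intervention is preferred on both axes, which is what the abstract's "cost savings and incremental QALYs" expresses.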

16.
Radiol Artif Intell ; 3(4): e200260, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34350413

ABSTRACT

PURPOSE: To compare the performance of a convolutional neural network (CNN) to that of 11 radiologists in detecting scaphoid bone fractures on conventional radiographs of the hand, wrist, and scaphoid. MATERIALS AND METHODS: At two hospitals (hospitals A and B), three datasets consisting of conventional hand, wrist, and scaphoid radiographs were retrospectively retrieved: a dataset of 1039 radiographs (775 patients [mean age, 48 years ± 23 {standard deviation}; 505 female patients], period: 2017-2019, hospitals A and B) for developing a scaphoid segmentation CNN, a dataset of 3000 radiographs (1846 patients [mean age, 42 years ± 22; 937 female patients], period: 2003-2019, hospital B) for developing a scaphoid fracture detection CNN, and a dataset of 190 radiographs (190 patients [mean age, 43 years ± 20; 77 female patients], period: 2011-2020, hospital A) for testing the complete fracture detection system. Both CNNs were applied consecutively: The segmentation CNN localized the scaphoid and then passed the relevant region to the detection CNN for fracture detection. In an observer study, the performance of the system was compared with that of 11 radiologists. Evaluation metrics included the Dice similarity coefficient (DSC), Hausdorff distance (HD), sensitivity, specificity, positive predictive value (PPV), and area under the receiver operating characteristic curve (AUC). RESULTS: The segmentation CNN achieved a DSC of 97.4% ± 1.4 with an HD of 1.31 mm ± 1.03. The detection CNN had sensitivity of 78% (95% CI: 70, 86), specificity of 84% (95% CI: 77, 92), PPV of 83% (95% CI: 77, 90), and AUC of 0.87 (95% CI: 0.81, 0.91). There was no difference between the AUC of the CNN and that of the radiologists (0.87 [95% CI: 0.81, 0.91] vs 0.83 [radiologist range: 0.79-0.85]; P = .09). 
CONCLUSION: The developed CNN achieved radiologist-level performance in detecting scaphoid bone fractures on conventional radiographs of the hand, wrist, and scaphoid. Keywords: Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms, Feature Detection-Vision-Application Domain, Computer-Aided Diagnosis. See also the commentary by Li and Torriani in this issue. Supplemental material is available for this article. ©RSNA, 2021.
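The segmentation CNN above is evaluated with the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), which compares a predicted mask against a reference mask. A minimal pure-Python sketch over flat binary masks (the function name and list-based representation are illustrative, not from the paper):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks.

    DSC = 2 * |A intersect B| / (|A| + |B|); ranges from 0 (no overlap)
    to 1 (identical masks). Two empty masks are treated as a perfect
    match by convention.
    """
    assert len(mask_a) == len(mask_b), "masks must have equal length"
    intersection = sum(1 for x, y in zip(mask_a, mask_b) if x and y)
    total = sum(map(bool, mask_a)) + sum(map(bool, mask_b))
    return 1.0 if total == 0 else 2.0 * intersection / total
```

A DSC of 97.4%, as reported for the scaphoid segmentation CNN, means the predicted and reference masks overlap almost completely, which is what allows the cropped region to be passed reliably to the fracture-detection CNN.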

17.
Diagnostics (Basel) ; 11(6)2021 May 26.
Article in English | MEDLINE | ID: mdl-34073627

ABSTRACT

Due to the upfront role of magnetic resonance imaging (MRI) in prostate cancer (PCa) diagnosis, a multitude of artificial intelligence (AI) applications have been suggested to aid the diagnosis and detection of PCa. In this review, we provide an overview of the current field, including studies published between 2018 and February 2021, describing AI algorithms for (1) lesion classification and (2) lesion detection for PCa. Our evaluation of the 59 included studies showed that most research has been conducted on PCa lesion classification (66%), followed by PCa lesion detection (34%). Studies showed large heterogeneity in cohort sizes, ranging from 18 to 499 patients (median = 162), combined with different approaches to performance validation. Furthermore, 85% of the studies reported stand-alone diagnostic accuracy, whereas only 15% demonstrated the impact of AI on diagnostic thinking efficacy, indicating limited proof of the clinical utility of PCa AI applications. To introduce AI into the clinical workflow of PCa assessment, the robustness and generalizability of AI applications need to be further validated through external validation and clinical workflow experiments.

18.
Eur Radiol ; 31(6): 3797-3804, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33856519

ABSTRACT

OBJECTIVES: To map the current landscape of commercially available artificial intelligence (AI) software for radiology and to review the availability of its scientific evidence. METHODS: We created an online overview of CE-marked AI software products for clinical radiology based on vendor-supplied product specifications ( www.aiforradiology.com ). Characteristics such as modality, subspeciality, main task, regulatory information, deployment, and pricing model were retrieved. We conducted an extensive literature search on the available scientific evidence for these products. Articles were classified according to a hierarchical model of efficacy. RESULTS: The overview included 100 CE-marked AI products from 54 different vendors. For 64 of 100 products, there was no peer-reviewed evidence of their efficacy. We observed large heterogeneity in deployment methods, pricing models, and regulatory classes. The evidence for the remaining 36 products comprised 237 papers that predominantly (65%) focused on diagnostic accuracy (efficacy level 2). Of the 100 products, 18 had evidence at level 3 or higher, validating the (potential) impact on diagnostic thinking, patient outcome, or costs. Half of the available papers (116/237) were independent, i.e., not (co-)funded or (co-)authored by the vendor. CONCLUSIONS: Even though the commercial supply of AI software in radiology already comprises 100 CE-marked products, we conclude that the sector is still in its infancy. For 64 of 100 products, peer-reviewed evidence of their efficacy is lacking. Only 18 of 100 AI products have demonstrated (potential) clinical impact. KEY POINTS: • Artificial intelligence in radiology is still in its infancy, even though 100 CE-marked AI products are already commercially available. • Only 36 out of 100 products have peer-reviewed evidence, and most of these studies demonstrate lower levels of efficacy.
• There is a wide variety in deployment strategies, pricing models, and CE marking class of AI products for radiology.


Subject(s)
Artificial Intelligence , Radiology , Humans , Radiography , Software
20.
Eur Radiol ; 30(10): 5404-5416, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32424596

ABSTRACT

OBJECTIVES: This study aims to define consensus-based criteria for acquiring and reporting prostate MRI and to establish prerequisites for image quality. METHODS: A total of 44 leading urologists and urogenital radiologists, experts in prostate cancer imaging from the European Society of Urogenital Radiology (ESUR) and the EAU Section of Urologic Imaging (ESUI), participated in a Delphi consensus process. Panellists completed two rounds of questionnaires with 55 items under three headings: image quality assessment, interpretation and reporting, and radiologists' experience plus training centres. Of the 55 questions, 31 were rated for agreement on a 9-point scale, and 24 were multiple-choice or open. For agreement items, consensus required agreement by ≥ 70% of the panellists (score 7-9) and disagreement by ≤ 15%. For the other questions, consensus was considered reached with ≥ 50% of votes. RESULTS: Twenty-four of the 31 agreement items and 11/16 of the other questions reached consensus. Agreement statements were: (1) reporting of image quality should be performed and implemented into clinical practice; (2) for interpretation performance, radiologists should use self-performance tests with histopathology feedback, compare their interpretations with expert readings, and use external performance assessments; and (3) radiologists must attend theoretical and hands-on courses before interpreting prostate MRI. A limitation is that the results are expert opinions, not based on systematic reviews or meta-analyses. There was no consensus on outcome statements regarding prostate MRI assessment as a quality marker. CONCLUSIONS: An ESUR and ESUI expert panel showed high agreement (74%) on issues for improving prostate MRI quality. Checking and reporting of image quality are mandatory. Prostate radiologists should attend theoretical and hands-on courses, followed by supervised education, and must perform regular performance assessments.
KEY POINTS: • Multi-parametric MRI in the diagnostic pathway of prostate cancer has a well-established upfront role in the recently updated European Association of Urology guideline and American Urological Association recommendations. • Suboptimal image acquisition and reporting at an individual level will result in clinicians losing confidence in the technique and returning to the (non-MRI) systematic biopsy pathway. Therefore, it is crucial to establish quality criteria for the acquisition and reporting of mpMRI. • To ensure high-quality prostate MRI, experts consider checking and reporting of image quality mandatory. Prostate radiologists must attend theoretical and hands-on courses, followed by supervised education, and must perform regular self- and external performance assessments.
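The panel's consensus rule (agreement when ≥ 70% of panellists score an item 7-9 and ≤ 15% disagree) can be sketched as a small checker. This is an illustrative sketch: the abstract does not specify which scores count as disagreement, so treating scores 1-3 as the disagreement range is an assumption here, as is the function name.

```python
def delphi_consensus(scores, agree_share=0.70, disagree_share=0.15):
    """Check the consensus rule described for the ESUR/ESUI Delphi panel.

    Consensus agreement requires >= 70% of panellists scoring 7-9 AND
    <= 15% disagreeing. Scores 1-3 are assumed to be the disagreement
    range (a common Delphi convention; not stated in the abstract).
    """
    n = len(scores)
    agree = sum(1 for s in scores if 7 <= s <= 9) / n
    disagree = sum(1 for s in scores if 1 <= s <= 3) / n
    return agree >= agree_share and disagree <= disagree_share


# 8 of 10 panellists agree, 1 disagrees, 1 is neutral -> consensus
reached = delphi_consensus([8, 9, 7, 8, 8, 9, 7, 8, 2, 5])
```

Note that both conditions must hold: an item with 70% agreement can still fail if more than 15% of panellists actively disagree.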


Subject(s)
Multiparametric Magnetic Resonance Imaging/standards , Prostatic Neoplasms/diagnostic imaging , Radiology/education , Urology/education , Delphi Technique , Education, Medical, Continuing , Humans , Image Processing, Computer-Assisted , Image-Guided Biopsy , Male , Prostatic Neoplasms/pathology , Radiology/standards , Urology/standards