Results 1 - 20 of 35
1.
ArXiv ; 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38903734

ABSTRACT

Introduction: This study explores the use of the latest You Only Look Once (YOLO V7) object detection method to enhance kidney detection in medical imaging by training and testing a modified YOLO V7 on medical image formats. Methods: The study included 878 patients with various subtypes of renal cell carcinoma (RCC) and 206 patients with normal kidneys. A total of 5657 MRI scans for 1084 patients were retrieved. 326 patients with 1034 tumors were recruited from a retrospectively maintained database, and bounding boxes were drawn around their tumors. A primary model was trained on 80% of the annotated cases, with 20% held out for testing (primary test set). The best primary model was then used to identify tumors in the remaining 861 patients, and bounding box coordinates were generated on their scans by the model. Ten benchmark training sets were created from the generated coordinates on the non-segmented patients. The final model was used to predict the kidney in the primary test set. We report the positive predictive value (PPV), sensitivity, and mean average precision (mAP). Results: The primary training set showed an average PPV of 0.94 ± 0.01, sensitivity of 0.87 ± 0.04, and mAP of 0.91 ± 0.02. The best primary model yielded a PPV of 0.97, sensitivity of 0.92, and mAP of 0.95. The final model demonstrated an average PPV of 0.95 ± 0.03, sensitivity of 0.98 ± 0.004, and mAP of 0.95 ± 0.01. Conclusion: Using a semi-supervised approach with a medical image library, we developed a high-performing model for kidney detection. Further external validation is required to assess the model's generalizability.
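As a reader's aid for the detection metrics reported in this and several later abstracts, here is a minimal sketch (illustrative only, not the authors' evaluation code) of how PPV and sensitivity are derived from box-level counts; the counts below are hypothetical:

```python
# Illustrative only: PPV (precision) and sensitivity (recall) from
# true-positive, false-positive, and false-negative detection counts.

def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: fraction of predicted boxes that are correct."""
    return tp / (tp + fp)

def sensitivity(tp: int, fn: int) -> float:
    """Sensitivity: fraction of ground-truth objects that were detected."""
    return tp / (tp + fn)

# Hypothetical counts: 94 correct kidney boxes, 6 spurious boxes, 13 missed kidneys.
print(round(ppv(94, 6), 2))           # 0.94
print(round(sensitivity(94, 13), 2))  # 0.88
```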

2.
Radiology ; 311(2): e230750, 2024 May.
Article in English | MEDLINE | ID: mdl-38713024

ABSTRACT

Background Multiparametric MRI (mpMRI) improves prostate cancer (PCa) detection compared with systematic biopsy, but its interpretation is prone to interreader variation, which results in performance inconsistency. Artificial intelligence (AI) models can assist in mpMRI interpretation, but large training data sets and extensive model testing are required. Purpose To evaluate a biparametric MRI AI algorithm for intraprostatic lesion detection and segmentation and to compare its performance with radiologist readings and biopsy results. Materials and Methods This secondary analysis of a prospective registry included consecutive patients with suspected or known PCa who underwent mpMRI, US-guided systematic biopsy, or combined systematic and MRI/US fusion-guided biopsy between April 2019 and September 2022. All lesions were prospectively evaluated using Prostate Imaging Reporting and Data System version 2.1. The lesion- and participant-level performance of a previously developed cascaded deep learning algorithm was compared with histopathologic outcomes and radiologist readings using sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC). Results A total of 658 male participants (median age, 67 years [IQR, 61-71 years]) with 1029 MRI-visible lesions were included. At histopathologic analysis, 45% (294 of 658) of participants had lesions of International Society of Urological Pathology (ISUP) grade group (GG) 2 or higher. The algorithm identified 96% (282 of 294; 95% CI: 94%, 98%) of all participants with clinically significant PCa, whereas the radiologist identified 98% (287 of 294; 95% CI: 96%, 99%; P = .23). The algorithm identified 84% (103 of 122), 96% (152 of 159), 96% (47 of 49), 95% (38 of 40), and 98% (45 of 46) of participants with ISUP GG 1, 2, 3, 4, and 5 lesions, respectively. 
In the lesion-level analysis using radiologist ground truth, the detection sensitivity was 55% (569 of 1029; 95% CI: 52%, 58%), and the PPV was 57% (535 of 934; 95% CI: 54%, 61%). The mean number of false-positive lesions per participant was 0.61 (range, 0-3). The lesion segmentation DSC was 0.29. Conclusion The AI algorithm detected cancer-suspicious lesions on biparametric MRI scans with a performance comparable to that of an experienced radiologist. Moreover, the algorithm reliably predicted clinically significant lesions at histopathologic examination. ClinicalTrials.gov Identifier: NCT03354416 © RSNA, 2024 Supplemental material is available for this article.
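The Dice similarity coefficient used for the segmentation comparison above is the overlap measure 2|A∩B|/(|A|+|B|); a minimal sketch on voxel-index sets (illustrative, not the study's implementation):

```python
def dice(a: set, b: set) -> float:
    """Dice similarity coefficient between two sets of voxel indices."""
    if not a and not b:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

pred = {(0, 0), (0, 1), (1, 0)}   # hypothetical predicted mask
truth = {(0, 1), (1, 0), (1, 1)}  # hypothetical ground-truth mask
print(dice(pred, truth))  # 2*2/(3+3) = 0.666...
```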


Subjects
Deep Learning , Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Aged , Prospective Studies , Multiparametric Magnetic Resonance Imaging/methods , Middle Aged , Algorithms , Prostate/diagnostic imaging , Prostate/pathology , Image-Guided Biopsy/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
3.
Oncotarget ; 15: 288-300, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712741

ABSTRACT

PURPOSE: The number of sequential PET/CT studies that oncology patients can undergo during their treatment follow-up course is limited by radiation dosage. We propose an artificial intelligence (AI) tool to produce attenuation-corrected PET (AC-PET) images from non-attenuation-corrected PET (NAC-PET) images, reducing the need for low-dose CT scans. METHODS: A deep learning algorithm based on the 2D Pix-2-Pix generative adversarial network (GAN) architecture was developed from paired AC-PET and NAC-PET images. 18F-DCFPyL PSMA PET-CT studies from 302 prostate cancer patients were split into training, validation, and testing cohorts (n = 183, 60, and 59, respectively). Models were trained with two normalization strategies: Standard Uptake Value (SUV)-based and SUV-Nyul-based. Scan-level performance was evaluated by normalized mean square error (NMSE), mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Lesion-level analysis was performed in regions of interest prospectively defined by nuclear medicine physicians. SUV metrics were evaluated using the intraclass correlation coefficient (ICC), repeatability coefficient (RC), and linear mixed-effects modeling. RESULTS: Median NMSE, MAE, SSIM, and PSNR were 13.26%, 3.59%, 0.891, and 26.82, respectively, in the independent test cohort. ICC for SUVmax and SUVmean were 0.88 and 0.89, which indicated a high correlation between original and AI-generated quantitative imaging markers. Lesion location, density (Hounsfield units), and lesion uptake were all shown to impact relative error in generated SUV metrics (all p < 0.05). CONCLUSION: The Pix-2-Pix GAN model for generating AC-PET images yields SUV metrics that correlate highly with the original images. AI-generated PET images show clinical potential for reducing the need for CT scans for attenuation correction while preserving quantitative markers and image quality.
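The scan-level fidelity metrics above can be sketched as follows on flattened voxel lists; note that NMSE normalization conventions vary between papers, so the reference-energy form below is an assumption, not necessarily the authors' definition:

```python
import math

def mae(ref, gen):
    """Mean absolute error per voxel."""
    return sum(abs(a - b) for a, b in zip(ref, gen)) / len(ref)

def nmse(ref, gen):
    """Normalized mean square error, normalized by reference-image
    energy (one common convention; others normalize by variance)."""
    return sum((a - b) ** 2 for a, b in zip(ref, gen)) / sum(a ** 2 for a in ref)

def psnr(ref, gen, peak):
    """Peak signal-to-noise ratio in dB for a given peak intensity."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, gen)) / len(ref)
    return 20 * math.log10(peak) - 10 * math.log10(mse)

ref = [10.0, 20.0, 30.0]  # hypothetical reference AC-PET intensities
gen = [11.0, 19.0, 30.0]  # hypothetical AI-generated intensities
print(round(mae(ref, gen), 3))  # 0.667
```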


Subjects
Deep Learning , Positron Emission Tomography Computed Tomography , Prostatic Neoplasms , Humans , Positron Emission Tomography Computed Tomography/methods , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Aged , Middle Aged , Glutamate Carboxypeptidase II/metabolism , Antigens, Surface/metabolism , Image Processing, Computer-Assisted/methods , Algorithms , Radiopharmaceuticals , Reproducibility of Results
4.
Abdom Radiol (NY) ; 49(5): 1545-1556, 2024 05.
Article in English | MEDLINE | ID: mdl-38512516

ABSTRACT

OBJECTIVE: Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three different prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS: A single institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominal perineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole gland tumor burden, magnet strength, noted poor quality, and various scanners (outside/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) deep learning (DL)-based model, (2) commercially available shape-based model, and (3) federated DL-based model. Dice Similarity Coefficient (DSC) was calculated against the expert segmentations. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed-effects (LMER) model. RESULTS: 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced DL models (p < 0.001) while signal factors influenced all (p < 0.001).
CONCLUSION: Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.


Subjects
Algorithms , Artificial Intelligence , Magnetic Resonance Imaging , Prostatic Neoplasms , Humans , Male , Retrospective Studies , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery , Prostatic Neoplasms/pathology , Image Interpretation, Computer-Assisted/methods , Middle Aged , Aged , Prostate/diagnostic imaging , Deep Learning
5.
Abdom Radiol (NY) ; 49(4): 1194-1201, 2024 04.
Article in English | MEDLINE | ID: mdl-38368481

ABSTRACT

INTRODUCTION: Accurate diagnosis and treatment of kidney tumors greatly benefit from automated solutions for detection and classification on MRI. In this study, we explore the application of a deep learning algorithm, YOLOv7, for detecting kidney tumors on contrast-enhanced MRI. MATERIAL AND METHODS: We assessed the performance of YOLOv7 tumor detection on excretory phase MRIs in a large institutional cohort of patients with RCC. Tumors were segmented on MRI using ITK-SNAP and converted to bounding boxes. The cohort was randomly divided into ten benchmarks for training and testing the YOLOv7 algorithm. The model was evaluated using both 2-dimensional and a novel in-house developed 2.5-dimensional approach. Performance measures included F1, Positive Predictive Value (PPV), Sensitivity, F1 curve, PPV-Sensitivity curve, Intersection over Union (IoU), and mean average precision (mAP). RESULTS: A total of 326 patients with 1034 tumors with 7 different pathologies were analyzed across ten benchmarks. The average 2D evaluation results were as follows: Positive Predictive Value (PPV) of 0.69 ± 0.05, sensitivity of 0.39 ± 0.02, and F1 score of 0.43 ± 0.03. For the 2.5D evaluation, the average results included a PPV of 0.72 ± 0.06, sensitivity of 0.61 ± 0.06, and F1 score of 0.66 ± 0.04. The best model performance demonstrated a 2.5D PPV of 0.75, sensitivity of 0.69, and F1 score of 0.72. CONCLUSION: Using computer vision for tumor identification is a cutting-edge and rapidly expanding field. In this work, we showed that YOLOv7 can be utilized in the detection of kidney cancers.
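Intersection over Union, one of the performance measures listed above, compares predicted and ground-truth bounding boxes by overlap area; a minimal sketch for axis-aligned (x1, y1, x2, y2) boxes, with hypothetical coordinates:

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap width/height, clamped at zero when boxes are disjoint
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```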


Subjects
Carcinoma, Renal Cell , Deep Learning , Kidney Neoplasms , Humans , Algorithms , Carcinoma, Renal Cell/diagnostic imaging , Kidney Neoplasms/diagnostic imaging , Magnetic Resonance Imaging , Random Allocation
6.
Abdom Radiol (NY) ; 49(4): 1202-1209, 2024 04.
Article in English | MEDLINE | ID: mdl-38347265

ABSTRACT

INTRODUCTION: Classification of clear cell renal cell carcinoma (ccRCC) growth rates in patients with Von Hippel-Lindau (VHL) syndrome has several ramifications for tumor monitoring and surgical planning. Using two separate machine-learning algorithms, we sought to produce models to predict ccRCC growth rate classes based on qualitative MRI-derived characteristics. MATERIAL AND METHODS: We used a prospectively maintained database of patients with VHL who underwent surgical resection for ccRCC between January 2015 and June 2022. We employed a threshold growth rate of 0.5 cm per year to categorize ccRCC tumors into two distinct groups: 'slow-growing' and 'fast-growing'. Utilizing a questionnaire of qualitative imaging features, two radiologists assessed each lesion on different MRI sequences. Two machine-learning models, a stacked ensemble technique and a decision tree (DT) algorithm, were used to predict the tumor growth rate classes. Positive predictive value (PPV), sensitivity, and F1-score were used to evaluate the performance of the models. RESULTS: This study comprises 55 patients with VHL with 128 ccRCC tumors. Patients' median age was 48 years, and 28 patients were males. Each patient had an average of two tumors, with a median size of 2.1 cm and a median growth rate of 0.35 cm/year. The stacked and DT models achieved overall accuracies of 0.77 ± 0.05 and 0.71 ± 0.06, respectively. The best stacked model achieved a PPV of 0.92, a sensitivity of 0.91, and an F1-score of 0.90. CONCLUSION: This study provides valuable insight into the potential of machine-learning analysis for the determination of renal tumor growth rate in patients with VHL. This finding could be utilized as an assistive tool for the individualized screening and follow-up of this population.
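The 0.5 cm/year dichotomization described above is simple arithmetic on two size measurements; a sketch (function names and example values are illustrative, not from the paper):

```python
def growth_rate(d1_cm, d2_cm, years):
    """Linear growth rate in cm/year between two lesion size measurements."""
    return (d2_cm - d1_cm) / years

def growth_class(rate_cm_per_year, threshold=0.5):
    """Dichotomize at the 0.5 cm/year cutoff used in the study above."""
    return "fast-growing" if rate_cm_per_year > threshold else "slow-growing"

# Hypothetical lesion: 2.1 cm growing to 3.4 cm over 2 years -> 0.65 cm/year.
print(growth_class(growth_rate(2.1, 3.4, 2.0)))  # fast-growing
```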


Subjects
Carcinoma, Renal Cell , Carcinoma , Kidney Neoplasms , Male , Humans , Middle Aged , Female , Carcinoma, Renal Cell/diagnostic imaging , Carcinoma, Renal Cell/pathology , Kidney/diagnostic imaging , Kidney/pathology , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/surgery , Magnetic Resonance Imaging , Machine Learning
7.
J Magn Reson Imaging ; 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38299714

ABSTRACT

BACKGROUND: Pathology grading is an essential step for the treatment and evaluation of the prognosis in patients with clear cell renal cell carcinoma (ccRCC). PURPOSE: To investigate the utility of texture analysis in evaluating Fuhrman grades of renal tumors in patients with Von Hippel-Lindau (VHL)-associated ccRCC, aiming to improve non-invasive diagnosis and personalized treatment. STUDY TYPE: Retrospective analysis of a prospectively maintained cohort. POPULATION: One hundred and thirty-six patients, 84 (61%) males and 52 (39%) females with pathology-proven ccRCC, with a mean age of 52.8 ± 12.7 years, from 2010 to 2023. FIELD STRENGTH AND SEQUENCES: 1.5 and 3 T MRIs. Segmentations were performed on the T1-weighted 3-minute delayed sequence and then registered on pre-contrast, T1-weighted arterial and venous sequences. ASSESSMENT: A total of 404 lesions, 345 low-grade tumors, and 59 high-grade tumors were segmented using ITK-SNAP on a T1-weighted 3-minute delayed sequence of MRI. Radiomics features were extracted from pre-contrast, T1-weighted arterial, venous, and delayed post-contrast sequences. Preprocessing techniques were employed to address class imbalances. Features were then rescaled to normalize the numeric values. We developed a stacked model combining random forest and XGBoost to assess tumor grades using radiomics signatures. STATISTICAL TESTS: The model's performance was evaluated using positive predictive value (PPV), sensitivity, F1 score, area under the receiver operating characteristic curve, and Matthews correlation coefficient. Using a Monte Carlo technique, the average performance across 100 benchmarks with 85% train and 15% test splits was reported. RESULTS: The best model displayed an accuracy of 0.79. For low-grade tumor detection, a sensitivity of 0.79, a PPV of 0.95, and an F1 score of 0.86 were obtained. For high-grade tumor detection, a sensitivity of 0.78, PPV of 0.39, and F1 score of 0.52 were reported.
DATA CONCLUSION: Radiomics analysis shows promise in classifying pathology grades non-invasively for patients with VHL-associated ccRCC, potentially leading to better diagnosis and personalized treatment. LEVEL OF EVIDENCE: 1 TECHNICAL EFFICACY: Stage 2.

8.
Acad Radiol ; 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38262813

ABSTRACT

RATIONALE AND OBJECTIVES: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care. However, it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for 1) bone lesion detection and segmentation and 2) benign vs. metastatic lesion classification on staging CTs and to compare its performance with radiologists. MATERIALS AND METHODS: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. Segmentation AI (3DAISeg) was developed using the lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and classification AI (3DAIClass) performance on AI and radiologist contours was assessed with F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists within a subset of the testing dataset (n = 36). RESULTS: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving a 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1 for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median false positive of 0 (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated approach AI 40.0% vs. Juniors 32.0% vs. Seniors 50.0%) and NPV (AI 96.2% vs. Juniors 95.7% vs. Seniors 91.9%). 
When using 3DAISeg, 3DAIClass mimicked junior radiologists in PPV (pure-AI 20.0% vs. Juniors 32.0% vs. Seniors 50.0%) but surpassed seniors in NPV (pure-AI 93.8% vs. Juniors 95.7% vs. Seniors 91.9%). CONCLUSION: Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa.

9.
AJR Am J Roentgenol ; 222(1): e2329964, 2024 01.
Article in English | MEDLINE | ID: mdl-37729551

ABSTRACT

BACKGROUND. Precise risk stratification through MRI/ultrasound (US) fusion-guided targeted biopsy (TBx) can guide optimal prostate cancer (PCa) management. OBJECTIVE. The purpose of this study was to compare PI-RADS version 2.0 (v2.0) and PI-RADS version 2.1 (v2.1) in terms of the rates of International Society of Urological Pathology (ISUP) grade group (GG) upgrade and downgrade from TBx to radical prostatectomy (RP). METHODS. This study entailed a retrospective post hoc analysis of patients who underwent 3-T prostate MRI at a single institution from May 2015 to March 2023 as part of three prospective clinical trials. Trial participants who underwent MRI followed by MRI/US fusion-guided TBx and RP within a 1-year interval were identified. A single genitourinary radiologist performed clinical interpretations of the MRI examinations using PI-RADS v2.0 from May 2015 to March 2019 and PI-RADS v2.1 from April 2019 to March 2023. Upgrade and downgrade rates from TBx to RP were compared using chi-square tests. Clinically significant cancer was defined as ISUP GG2 or greater. RESULTS. The final analysis included 308 patients (median age, 65 years; median PSA density, 0.16 ng/mL²). The v2.0 group (n = 177) and v2.1 group (n = 131) showed no significant difference in terms of upgrade rate (29% vs 22%, respectively; p = .15), downgrade rate (19% vs 21%, p = .76), clinically significant upgrade rate (14% vs 10%, p = .27), or clinically significant downgrade rate (1% vs 1%, p > .99). The upgrade rate and downgrade rate were also not significantly different between the v2.0 and v2.1 groups when stratifying by index lesion PI-RADS category or index lesion zone, as well as when assessed only in patients without a prior PCa diagnosis (all p > .01). Among patients with GG2 or GG3 at RP (n = 121 for v2.0; n = 103 for v2.1), the concordance rate between TBx and RP was not significantly different between the v2.0 and v2.1 groups (53% vs 57%, p = .51). CONCLUSION.
Upgrade and downgrade rates from TBx to RP were not significantly different between patients whose MRI examinations were clinically interpreted using v2.0 or v2.1. CLINICAL IMPACT. Implementation of the most recent PI-RADS update did not improve the incongruence in PCa grade assessment between TBx and surgery.


Subjects
Prostatic Neoplasms , Male , Humans , Aged , Prostatic Neoplasms/pathology , Magnetic Resonance Imaging/methods , Prostate/pathology , Retrospective Studies , Prospective Studies , Biopsy , Prostatectomy/methods , Image-Guided Biopsy/methods
10.
Acad Radiol ; 31(4): 1429-1437, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37858505

ABSTRACT

RATIONALE AND OBJECTIVES: Prostate MRI quality is essential in guiding prostate biopsies. However, assessment of MRI quality is subjective with variation. Quality degradation sources exert varying impacts based on the sequence under consideration, such as T2W versus DWI. As a result, employing sequence-specific techniques for quality assessment could yield more advantageous outcomes. This study aims to develop an AI tool that offers a more consistent evaluation of T2W prostate MRI quality, efficiently identifying suboptimal scans while minimizing user bias. MATERIALS AND METHODS: This retrospective study included 1046 patients from three cohorts (ProstateX [n = 347], All-comer in-house [n = 602], enriched bad-quality MRI in-house [n = 97]) scanned between January 2011 and May 2022. An expert reader assigned T2W MRIs a quality score. A train-validation-test split of 70:15:15 was applied, ensuring equal distribution of MRI scanners and protocols across all partitions. T2W quality AI classification model was based on 3D DenseNet121 architecture using MONAI framework. In addition to multiclassification, binary classification was utilized (Classes 0/1 vs. 2). A score of 0 was given to scans considered non-diagnostic or unusable, a score of 1 was given to those with acceptable diagnostic quality with some usability but with some quality distortions present, and a score of 2 was given to those considered optimal diagnostic quality and usability. Partial occlusion sensitivity maps were generated for anatomical correlation. Three body radiologists assessed reproducibility within a subgroup of 60 test cases using weighted Cohen Kappa. RESULTS: The best validation multiclass accuracy of 77.1% (121/157) was achieved during training. In the test dataset, multiclassification accuracy was 73.9% (116/157), whereas binary accuracy was 84.7% (133/157). 
Sub-class sensitivity for binary quality distortion classification for class 0 was 100% (18/18), and sub-class specificity for T2W classification of absence/minimal quality distortions for class 2 was 90.5% (95/105). All three readers showed moderate to substantial agreement with ground truth (R1-R3 κ = 0.588, κ = 0.649, κ = 0.487, respectively), moderate to substantial agreement with each other (R1-R2 κ = 0.599, R1-R3 κ = 0.612, R2-R3 κ = 0.685), fair to moderate agreement with AI (R1-R3 κ = 0.445, κ = 0.410, κ = 0.292, respectively). AI showed substantial agreement with ground truth (κ = 0.704). 3D quality heatmap evaluation revealed that the most critical non-diagnostic quality imaging features from an AI perspective related to obscuration of the rectoprostatic space (94.4%, 17/18). CONCLUSION: The 3D AI model can assess T2W prostate MRI quality with moderate accuracy and translate whole sequence-level classification labels into 3D voxel-level quality heatmaps for interpretation. Image quality has a significant downstream impact on ruling out clinically significant cancers. AI may be able to help with reproducible identification of MRI sequences requiring re-acquisition with explainability.
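The weighted Cohen kappa used in the reader-agreement analysis above penalizes disagreements by their distance on the ordinal 0-2 quality scale; a minimal linear-weighted sketch (illustrative, not the study's statistics code):

```python
def weighted_kappa(r1, r2, n_classes):
    """Linear-weighted Cohen's kappa for two raters' ordinal scores (0..n_classes-1)."""
    n = len(r1)
    # joint distribution of the two raters' scores
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    p1 = [sum(row) for row in obs]  # rater-1 marginal
    p2 = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    # linear weights |i - j|: observed vs chance-expected weighted disagreement
    num = sum(abs(i - j) * obs[i][j]
              for i in range(n_classes) for j in range(n_classes))
    den = sum(abs(i - j) * p1[i] * p2[j]
              for i in range(n_classes) for j in range(n_classes))
    return 1.0 - num / den

print(weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # 1.0 (perfect agreement)
```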


Subjects
Deep Learning , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Prostate/pathology , Retrospective Studies , Reproducibility of Results , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology
11.
J Magn Reson Imaging ; 2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37811666

ABSTRACT

BACKGROUND: Image quality evaluation of prostate MRI is important for successful implementation of MRI into localized prostate cancer diagnosis. PURPOSE: To examine the impact of image quality on prostate cancer detection using an in-house previously developed artificial intelligence (AI) algorithm. STUDY TYPE: Retrospective. SUBJECTS: 615 consecutive patients (median age 67 [interquartile range [IQR]: 61-71] years) with elevated serum PSA (median PSA 6.6 [IQR: 4.6-9.8] ng/mL) prior to prostate biopsy. FIELD STRENGTH/SEQUENCE: 3.0T/T2-weighted turbo-spin-echo MRI, high b-value echo-planar diffusion-weighted imaging, and gradient recalled echo dynamic contrast-enhanced. ASSESSMENTS: Scans were prospectively evaluated during clinical readout using PI-RADSv2.1 by one genitourinary radiologist with 17 years of experience. For each patient, T2-weighted images (T2WIs) were classified as high-quality or low-quality based on evaluation of both general distortions (eg, motion, distortion, noise, and aliasing) and perceptual distortions (eg, obscured delineation of prostatic capsule, prostatic zones, and excess rectal gas) by a previously developed in-house AI algorithm. Patients with PI-RADS category 1 underwent 12-core ultrasound-guided systematic biopsy while those with PI-RADS category 2-5 underwent combined systematic and targeted biopsies. Patient-level cancer detection rates (CDRs) were calculated for clinically significant prostate cancer (csPCa, International Society of Urological Pathology Grade Group ≥2) by each biopsy method and compared between high- and low-quality images in each PI-RADS category. STATISTICAL TESTS: Fisher's exact test. Bootstrap 95% confidence intervals (CI). A P value <0.05 was considered statistically significant. RESULTS: 385 (63%) T2WIs were classified as high-quality and 230 (37%) as low-quality by AI. 
Targeted biopsy with high-quality T2WIs resulted in significantly higher clinically significant CDR than low-quality images for PI-RADS category 4 lesions (52% [95% CI: 43-61] vs. 32% [95% CI: 22-42]). For combined biopsy, there was no significant difference in patient-level CDRs for PI-RADS 4 between high- and low-quality T2WIs (56% [95% CI: 47-64] vs. 44% [95% CI: 34-55]; P = 0.09). DATA CONCLUSION: Higher quality T2WIs were associated with better targeted biopsy clinically significant cancer detection performance for PI-RADS 4 lesions. Combined biopsy might be needed when T2WI is lower quality. LEVEL OF EVIDENCE: 2 TECHNICAL EFFICACY: Stage 1.

12.
AJR Am J Roentgenol ; 221(6): 773-787, 2023 12.
Article in English | MEDLINE | ID: mdl-37404084

ABSTRACT

BACKGROUND. Currently most clinical models for predicting biochemical recurrence (BCR) of prostate cancer (PCa) after radical prostatectomy (RP) incorporate staging information from RP specimens, creating a gap in preoperative risk assessment. OBJECTIVE. The purpose of our study was to compare the utility of presurgical staging information from MRI and postsurgical staging information from RP pathology in predicting BCR in patients with PCa. METHODS. This retrospective study included 604 patients (median age, 60 years) with PCa who underwent prostate MRI before RP from June 2007 to December 2018. A single genitourinary radiologist assessed MRI examinations for extraprostatic extension (EPE) and seminal vesicle invasion (SVI) during clinical interpretations. The utility of EPE and SVI on MRI and RP pathology for BCR prediction was assessed through Kaplan-Meier and Cox proportional hazards analyses. Established clinical BCR prediction models, including the University of California San Francisco Cancer of the Prostate Risk Assessment (UCSF-CAPRA) model and the Cancer of the Prostate Risk Assessment Postsurgical (CAPRA-S) model, were evaluated in a subset of 374 patients with available Gleason grade groups from biopsy and RP pathology; two CAPRA-MRI models (CAPRA-S model with modifications to replace RP pathologic staging features with MRI staging features) were also assessed. RESULTS. Univariable predictors of BCR included EPE on MRI (HR = 3.6), SVI on MRI (HR = 4.4), EPE on RP pathology (HR = 5.0), and SVI on RP pathology (HR = 4.6) (all p < .001). Three-year BCR-free survival (RFS) rates for patients without versus with EPE were 84% versus 59% for MRI and 89% versus 58% for RP pathology, and 3-year RFS rates for patients without versus with SVI were 82% versus 50% for MRI and 83% versus 54% for RP histology (all p < .001). For patients with T3 disease on RP pathology, 3-year RFS rates were 67% and 41% for patients without and with T3 disease on MRI. 
AUCs of CAPRA models, including CAPRA-MRI models, ranged from 0.743 to 0.778. AUCs were not significantly different between CAPRA-S and CAPRA-MRI models (p > .05). RFS rates were significantly different between low- and intermediate-risk groups for only CAPRA-MRI models (80% vs 51% and 74% vs 44%; both p < .001). CONCLUSION. Presurgical MRI-based staging features perform comparably to postsurgical pathologic staging features for predicting BCR. CLINICAL IMPACT. MRI staging can preoperatively identify patients at high BCR risk, helping to inform early clinical decision-making. TRIAL REGISTRATION. ClinicalTrials.gov NCT00026884 and NCT02594202.
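The RFS rates above come from Kaplan-Meier analysis; a minimal product-limit sketch on a toy cohort (data are illustrative, not the study's):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    times: follow-up in years; events: 1 = event (e.g., BCR), 0 = censored.
    Returns (time, survival) pairs at each distinct event time."""
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)  # number still at risk at t
        if d:
            surv *= 1.0 - d / n
            curve.append((t, surv))
    return curve

# Toy cohort of 4 patients: BCR at years 1 and 2, two censored at year 3.
# Survival steps down to 0.75 after year 1 and 0.5 after year 2.
print(kaplan_meier([1, 2, 3, 3], [1, 1, 0, 0]))
```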


Subjects
Prostate , Prostatic Neoplasms , Male , Humans , Middle Aged , Prostate/pathology , Seminal Vesicles/pathology , Retrospective Studies , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery , Prostatectomy/methods , Prostate-Specific Antigen , Magnetic Resonance Imaging , Neoplasm Recurrence, Local/pathology , Neoplasm Staging
13.
Med Phys ; 50(8): 5020-5029, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36855860

ABSTRACT

BACKGROUND: von Hippel-Lindau syndrome (VHL) is an autosomal dominant hereditary syndrome with an increased predisposition of developing numerous cysts and tumors, almost exclusively clear cell renal cell carcinoma (ccRCC). Considering the lifelong surveillance in such patients to monitor the disease, patients with VHL are preferentially imaged using MRI to eliminate radiation exposure. PURPOSE: Segmentation of kidney and tumor structures on MRI in VHL patients is useful in lesion characterization (e.g., cyst vs. tumor), volumetric lesion analysis, and tumor growth prediction. However, automated tasks such as ccRCC segmentation on MRI are sparsely studied. We develop segmentation methodology for ccRCC on T1-weighted precontrast, corticomedullary, nephrogenic, and excretory contrast phase MRI. METHODS: We applied a new neural network approach using a novel differentiable decision forest, called hinge forest (HF), to segment kidney parenchyma, cysts, and ccRCC tumors in 117 images from 115 patients. This data set represented an unprecedented 504 ccRCCs with 1171 cystic lesions obtained at five different MRI scanners. The HF architecture was compared with U-Net on 10 randomized splits with 75% used for training and 25% used for testing. Both methods were trained with Adam using default parameters (α = 0.001, β₁ = 0.9, β₂ = 0.999) over 1000 epochs. We further demonstrated some interpretability of our HF method by exploiting decision tree structure. RESULTS: The HF achieved an average kidney, cyst, and tumor Dice similarity coefficient (DSC) of 0.75 ± 0.03, 0.44 ± 0.05, 0.53 ± 0.04, respectively, while U-Net achieved an average kidney, cyst, and tumor DSC of 0.78 ± 0.02, 0.41 ± 0.04, 0.46 ± 0.05, respectively. The HF significantly outperformed U-Net on tumors while U-Net significantly outperformed HF when segmenting kidney parenchymas (α < 0.01).
CONCLUSIONS: For the task of ccRCC segmentation, the HF can offer better segmentation performance than the traditional U-Net architecture. The leaf maps may offer hints about deep learning features that could prove useful in other automated tasks such as tumor characterization.
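The Dice similarity coefficient reported above quantifies overlap between a predicted and a ground-truth mask. A minimal plain-Python sketch for a pair of flattened binary masks (illustrative only; the study's pipeline operates on full 3D MRI volumes):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as equal-length flat sequences of 0/1 values."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / total if total else 1.0
```

DSC is 2·|A∩B| / (|A|+|B|): it ranges from 0 (no overlap) to 1 (identical masks).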


Subjects
Carcinoma, Renal Cell , Carcinoma , Cysts , Deep Learning , Kidney Neoplasms , Humans , Carcinoma, Renal Cell/diagnostic imaging , Magnetic Resonance Imaging , Kidney Neoplasms/diagnostic imaging
14.
ArXiv ; 2023 Jan 06.
Article in English | MEDLINE | ID: mdl-36789136

ABSTRACT

We demonstrate automated segmentation of clear cell renal cell carcinomas (ccRCC), cysts, and surrounding normal kidney parenchyma in patients with von Hippel-Lindau (VHL) syndrome using convolutional neural networks (CNN) on magnetic resonance imaging (MRI). We queried 115 VHL patients and 117 scans (3 patients had two separate scans) with 504 ccRCCs and 1171 cysts from 2015 to 2021. Lesions were manually segmented on the T1 excretory phase, co-registered on all contrast-enhanced T1 sequences, and used to train 2D and 3D U-Nets. U-Net performance was evaluated on 10 randomized splits of the cohort. The models were evaluated using the Dice similarity coefficient (DSC). Our 2D U-Net achieved an average ccRCC lesion detection area under the curve (AUC) of 0.88 and DSC scores of 0.78, 0.40, and 0.46 for segmentation of the kidney, cysts, and tumors, respectively. Our 3D U-Net achieved an average ccRCC lesion detection AUC of 0.79 and DSC scores of 0.67, 0.32, and 0.34 for the kidney, cysts, and tumors, respectively. We demonstrated good detection and moderate segmentation results using U-Net for ccRCC on MRI. Automatic detection and segmentation of normal renal parenchyma, cysts, and masses may assist radiologists in quantifying the burden of disease in patients with VHL.
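The lesion-detection AUC reported above can be read via the rank interpretation of ROC AUC: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch of that computation (illustrative; the study's AUC was computed over its own detection outputs):

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney rank statistic: the fraction of
    (positive, negative) pairs ranked correctly, with ties counting half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.88, as reported for the 2D U-Net, means 88% of such pairs are ranked correctly.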

15.
J Am Coll Radiol ; 20(2): 134-145, 2023 02.
Article in English | MEDLINE | ID: mdl-35922018

ABSTRACT

OBJECTIVE: To determine the rigor, generalizability, and reproducibility of published classification and detection artificial intelligence (AI) models for prostate cancer (PCa) on MRI using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines, a 42-item checklist that is considered a measure of best practice for presenting and reviewing medical imaging AI research. MATERIALS AND METHODS: This review searched the English literature for studies proposing PCa AI detection and classification models on MRI. Each study was evaluated with the CLAIM checklist. Additional outcomes for which data were sought included measures of AI model performance (eg, area under the curve [AUC], sensitivity, specificity, free-response operating characteristic curves), training, validation, and testing group sample sizes, AI approach, detection versus classification AI, public data set utilization, MRI sequences used, and definition of the gold standard for ground truth. The percentage of CLAIM checklist fulfillment was used to stratify studies into quartiles. The Wilcoxon rank-sum test was used for pair-wise comparisons. RESULTS: In all, 75 studies were identified, and 53 studies qualified for analysis. The original CLAIM items that most studies did not fulfill include item 12 (77% no): de-identification methods; item 13 (68% no): handling of missing data; item 15 (47% no): rationale for choosing the ground truth reference standard; item 18 (55% no): measurements of inter- and intrareader variability; item 31 (60% no): inclusion of validated interpretability maps; and item 37 (92% no): inclusion of failure analysis to elucidate AI model weaknesses. Comparing AUC scores across CLAIM-fulfillment quartiles revealed a significant difference in mean AUC between quartile 1 and quartile 2 (0.78 versus 0.86, P = .034) and between quartile 1 and quartile 4 (0.78 versus 0.89, P = .003).
Based on the additional information and outcome metrics gathered in this study, additional measures of best practice are defined. These new items include disclosure of public dataset usage, definition of ground truth in comparison to other referenced works on the defined task, and sample size power calculation. CONCLUSION: A large proportion of AI studies do not fulfill key items in the CLAIM guidelines within their methods and results sections. The percentage of CLAIM checklist fulfillment is weakly associated with improved AI model performance. Additions or supplementations to CLAIM are recommended to improve publishing standards and aid reviewers in determining study rigor.
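The quartile stratification by checklist fulfillment described above can be sketched as follows; the yes/no/n-a answer encoding is a hypothetical simplification of the CLAIM scoring, not the study's actual data format:

```python
def fulfillment_pct(answers):
    """Percentage of applicable CLAIM items fulfilled; items marked
    'n/a' are excluded from the denominator."""
    applicable = [a for a in answers if a in ("yes", "no")]
    return 100.0 * applicable.count("yes") / len(applicable)

def quartile_bins(pcts):
    """Assign each study a quartile (1 = lowest fulfillment) by sorted rank."""
    order = sorted(range(len(pcts)), key=lambda i: pcts[i])
    bins = [0] * len(pcts)
    for rank, i in enumerate(order):
        bins[i] = 1 + (4 * rank) // len(pcts)
    return bins
```

The resulting quartile labels are what a pair-wise rank-sum test (as used in the study) would then compare AUC scores across.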


Subjects
Artificial Intelligence , Prostate , Male , Humans , Checklist , Reproducibility of Results , Algorithms , Magnetic Resonance Imaging
16.
Abdom Radiol (NY) ; 47(10): 3554-3562, 2022 10.
Article in English | MEDLINE | ID: mdl-35869307

ABSTRACT

PURPOSE: Upfront knowledge of the tumor growth rates of clear cell renal cell carcinoma in von Hippel-Lindau syndrome (VHL) patients can allow for a more personalized approach to surveillance imaging frequency or surgical planning. In this study, we implement a machine learning algorithm utilizing radiomic features of renal tumors identified on baseline magnetic resonance imaging (MRI) in VHL patients to predict the volumetric growth rate category of these tumors. MATERIALS AND METHODS: A total of 73 VHL patients with 173 pathologically confirmed clear cell renal cell carcinomas (ccRCCs) underwent MRI at least at two different time points between 2015 and 2021. Each tumor was manually segmented on excretory phase contrast-enhanced T1-weighted MRI and co-registered on the pre-contrast, corticomedullary, and nephrographic phases. Radiomic features and volumetric data for each tumor were extracted using the PyRadiomics library in Python (4544 total features). Tumor doubling time (DT) was calculated, and patients were divided into two groups: DT ≤ 1 year and DT > 1 year. A random forest classifier (RFC) was used to predict the DT category. To measure prediction performance, the cohort was randomly divided 100 times into training and test sets (80% and 20%). Model performance was evaluated using the area under the receiver operating characteristic curve (AUC-ROC), as well as accuracy, F1, precision, and recall, reported as percentages with 95% confidence intervals (CIs). RESULTS: The average age of patients was 47.2 ± 10.3 years. The mean interval between MRIs for each patient was 1.3 years. Tumors included in this study comprised 155 grade 2, 16 grade 3, and 2 grade 4 lesions. The mean accuracy of the RFC model was 79.0% [67.4-90.6], with a mean AUC-ROC of 0.795 [0.608-0.988]. The accuracy of predicting DT classes did not differ among the MRI sequences (P = .56).
CONCLUSION: Here we demonstrate the utility of machine learning in accurately predicting the renal tumor growth rate category of VHL patients based on radiomic features extracted from different T1-weighted pre- and post-contrast MRI sequences.
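The doubling-time label that defines the two growth-rate classes above follows the standard exponential-growth formula DT = Δt·ln 2 / ln(V₂/V₁). The abstract does not state the exact formula used, so this sketch applies the conventional definition:

```python
import math

def doubling_time_years(v1, v2, interval_years):
    """Tumor volume doubling time under exponential growth:
    DT = dt * ln(2) / ln(V2 / V1). Assumes v2 > v1 > 0."""
    return interval_years * math.log(2) / math.log(v2 / v1)

def growth_class(v1, v2, interval_years, cutoff_years=1.0):
    """Binary growth-rate label as used in the study: DT <= 1 year vs DT > 1 year."""
    dt = doubling_time_years(v1, v2, interval_years)
    return "DT<=1yr" if dt <= cutoff_years else "DT>1yr"
```

For example, a tumor that doubles its volume over a one-year imaging interval has DT = 1 year exactly.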


Subjects
Carcinoma, Renal Cell , Kidney Neoplasms , von Hippel-Lindau Disease , Adult , Carcinoma, Renal Cell/diagnostic imaging , Carcinoma, Renal Cell/pathology , Humans , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Machine Learning , Magnetic Resonance Imaging , Middle Aged , Retrospective Studies , von Hippel-Lindau Disease/complications , von Hippel-Lindau Disease/diagnostic imaging
17.
J Pathol Inform ; 13: 100007, 2022.
Article in English | MEDLINE | ID: mdl-35242446

ABSTRACT

BACKGROUND: Mouse models are highly effective for studying the pathophysiology of lung adenocarcinoma and evaluating new treatment strategies. Treatment efficacy is primarily determined by the total tumor burden measured on excised tumor specimens. The measurement process is time-consuming and prone to human error. To address this issue, we developed a novel deep learning model to segment lung tumor foci on digitally scanned hematoxylin and eosin (H&E) histology slides. METHODS: Digital slides of 239 mice from 9 experimental cohorts were split into training (n=137), validation (n=37), and testing (n=65) cohorts. Image patches of 500×500 pixels were extracted at 5× and 10× magnifications, along with binary masks of expert annotations representing ground-truth tumor regions. Deep learning models utilizing the DeepLabV3+ and UNet architectures were trained for binary segmentation of tumor foci under varying stain normalization conditions. Segmentation performance was assessed by the Dice coefficient, and detection was evaluated by sensitivity and positive predictive value (PPV). RESULTS: The best model on patch-based validation was DeepLabV3+ with a ResNet-50 backbone, which achieved Dice coefficients of 0.890 and 0.873 on the validation and testing cohorts, respectively. This result corresponded to 91.3% sensitivity and 51.0% PPV in the validation cohort and 93.7% sensitivity and 51.4% PPV in the testing cohort. False positives could be reduced 10-fold by thresholding the artificial intelligence (AI)-predicted output by area, without negative impact on the Dice coefficient. Evaluation of various stain normalization strategies did not demonstrate improvement over the baseline model. CONCLUSIONS: A robust AI-based algorithm for detecting and segmenting lung tumor foci in pre-clinical mouse models was developed. The output of this algorithm is compatible with open-source software that researchers commonly use.
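The 10-fold false-positive reduction described above comes from discarding predicted foci below an area threshold. A plain-Python sketch of that post-processing on a small binary mask (the study presumably used an image-processing library on full-resolution slides; the grid and threshold here are illustrative):

```python
def filter_small_foci(mask, min_area):
    """Zero out connected foci (4-connectivity) smaller than min_area
    pixels in a 2D binary mask given as a list of lists of 0/1."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [row[:] for row in mask]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Flood fill to collect one connected component.
                stack, comp = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_area:
                    for y, x in comp:
                        out[y][x] = 0
    return out
```

Because tiny spurious foci contribute few pixels, removing them cuts false-positive counts sharply while barely moving the Dice coefficient, which matches the behavior reported above.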

18.
Abdom Radiol (NY) ; 47(4): 1425-1434, 2022 04.
Article in English | MEDLINE | ID: mdl-35099572

ABSTRACT

PURPOSE: To present a fully automated DL-based prostate cancer detection system for prostate MRI. METHODS: MRI scans from two institutions were used for algorithm training, validation, and testing. MRI-visible lesions were contoured by an experienced radiologist. All lesions were biopsied using MRI-TRUS guidance. Lesion masks and histopathological results were used as ground truth labels to train UNet and AH-Net architectures for prostate cancer lesion detection and segmentation. The algorithm was trained to detect any prostate cancer ≥ ISUP 1. Detection sensitivity, positive predictive value (PPV), and mean number of false positive lesions per patient were used as performance metrics. RESULTS: In total, 525 patients were included for training, validation, and testing of the algorithm. The dataset was split into training (n = 368, 70%), validation (n = 79, 15%), and test (n = 78, 15%) cohorts. Dice coefficients in the training and validation sets were 0.403 and 0.307, respectively, for the AH-Net model, compared with 0.372 and 0.287, respectively, for the UNet model. In the validation set, detection sensitivity was 70.9%, PPV was 35.5%, and the mean number of false positive lesions per patient was 1.41 (range 0-6) for the UNet model, compared with 74.4% detection sensitivity, 47.8% PPV, and a mean of 0.87 (range 0-5) false positive lesions per patient for the AH-Net model. In the test set, detection sensitivity was 72.8% for UNet versus 63.0% for AH-Net, and the mean number of false positive lesions per patient was 1.90 (range 0-7) and 1.40 (range 0-6) for the UNet and AH-Net models, respectively. CONCLUSION: We developed a DL-based AI approach that predicts prostate cancer lesions at biparametric MRI with reasonable performance metrics. While false positive lesion calls remain a challenge for AI-assisted detection algorithms, this system can be utilized as an adjunct tool by radiologists.
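The three performance metrics above follow directly from per-patient lesion counts; a minimal sketch (the counts in the test are illustrative, not the study's data):

```python
def detection_metrics(per_patient):
    """Aggregate lesion-level detection metrics from per-patient
    (true_positive, false_positive, false_negative) lesion counts."""
    tp = sum(p[0] for p in per_patient)
    fp = sum(p[1] for p in per_patient)
    fn = sum(p[2] for p in per_patient)
    sensitivity = tp / (tp + fn)          # detected / all true lesions
    ppv = tp / (tp + fp)                  # detected true / all detections
    mean_fp_per_patient = fp / len(per_patient)
    return sensitivity, ppv, mean_fp_per_patient
```

Note the trade-off visible in the reported results: a model can gain sensitivity at the cost of more false positive calls per patient, which is why all three numbers are reported together.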


Subjects
Deep Learning , Prostatic Neoplasms , Artificial Intelligence , Humans , Magnetic Resonance Imaging/methods , Male , Prostate/pathology , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology
19.
Acad Radiol ; 29(8): 1159-1168, 2022 08.
Article in English | MEDLINE | ID: mdl-34598869

ABSTRACT

RATIONALE AND OBJECTIVES: Prostate MRI improves detection of clinically significant prostate cancer; however, its diagnostic performance shows wide variation. Artificial intelligence (AI) has the potential to assist radiologists in the detection and classification of prostatic lesions. Herein, we aimed to develop and test a cascaded deep learning detection and classification system, trained on biparametric prostate MRI using PI-RADS, for assisting radiologists during prostate MRI read-out. MATERIALS AND METHODS: T2-weighted and diffusion-weighted (ADC maps, high b-value DWI) MRI scans obtained at 3 Tesla from two institutions (n = 1043 in-house and n = 347 Prostate-X), acquired between 2015 and 2019, were used for model training, validation, and testing. All scans were retrospectively reevaluated by one radiologist. Suspicious lesions were contoured and assigned a PI-RADS category. A 3D U-Net-based deep neural network was used to train an algorithm for automated detection and segmentation of prostate MRI lesions. Two 3D residual neural networks were used for a 4-class classification task to predict PI-RADS categories 2 to 5 and BPH. Training and validation used 89% (n = 1290 scans) of the data with 5-fold cross-validation; the remaining 11% (n = 150 scans) were used for independent testing. Algorithm performance at the lesion level was assessed using sensitivity, positive predictive value (PPV), false discovery rate (FDR), classification accuracy, and Dice similarity coefficient (DSC). An additional analysis was conducted to compare the AI algorithm's lesion detection performance with targeted biopsy results. RESULTS: Median age was 66 years (IQR = 60-71) and median PSA was 6.7 ng/ml (IQR = 4.7-9.9) in the in-house cohort. In the independent test set, the algorithm correctly detected 111 of 198 lesions, yielding a sensitivity of 56.1% (95% CI 49.3%-62.6%). PPV was 62.7% (95% CI 54.7%-70.7%), with an FDR of 37.3% (95% CI 29.3%-45.3%).
Of 79 true positive lesions, 82.3% were tumor positive at targeted biopsy, whereas of 57 false negative lesions, 50.9% were benign at targeted biopsy. The median DSC for lesion segmentation was 0.359. Overall PI-RADS classification accuracy was 30.8% (95% CI 24.6%-37.8%). CONCLUSION: Our cascaded U-Net and residual network architecture can detect and classify cancer-suspicious lesions on prostate MRI with good detection and reasonable classification performance metrics.
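The 95% confidence intervals quoted for proportions such as sensitivity can be reproduced with a Wilson score interval. The abstract does not state which interval method the authors used, so this is one common choice, shown as a sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives an approximate 95% interval)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half
```

Applied to 111 detected of 198 lesions, this yields an interval close to the 49.3%-62.6% band reported above.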


Subjects
Deep Learning , Prostatic Neoplasms , Aged , Algorithms , Artificial Intelligence , Humans , Magnetic Resonance Imaging , Male , Prostate/diagnostic imaging , Prostate/pathology , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Retrospective Studies
20.
IEEE Access ; 9: 87531-87542, 2021.
Article in English | MEDLINE | ID: mdl-34733603

ABSTRACT

In this study, we formulated an efficient deep learning-based classification strategy for characterizing metastatic bone lesions using computed tomography (CT) scans of prostate cancer patients. For this purpose, 2,880 annotated bone lesions from CT scans of 114 patients diagnosed with prostate cancer were used for training, validation, and final evaluation. These annotations comprised full lesion segmentation, lesion type, and a label of either benign or malignant. In this work, we present our approach to developing a state-of-the-art model to classify bone lesions as benign or malignant, where (1) we introduce a valuable dataset to address a clinically important problem, (2) we increase the reliability of our model by patient-level stratification of our dataset, following a lesion-aware distribution in each of the training, validation, and test splits, (3) we explore the impact of lesion texture, morphology, size, location, and volumetric information on classification performance, and (4) we investigate lesion classification using different algorithms, including lesion-based average 2D ResNet-50, lesion-based average 2D ResNeXt-50, 3D ResNet-18, 3D ResNet-50, as well as an ensemble of the 2D ResNet-50 and 3D ResNet-18. For this purpose, we employed a train/validation/test split of 75%/12%/13%, with several data augmentation methods applied to the training dataset to avoid overfitting and to increase reliability. We achieved an accuracy of 92.2% for correct classification of benign versus malignant bone lesions in the test set using an ensemble of the lesion-based average 2D ResNet-50 and 3D ResNet-18, with texture, volumetric information, and morphology having the greatest discriminative power, in that order. To the best of our knowledge, this is the highest lesion-level accuracy achieved to date on such a comprehensive dataset for this clinically important problem. This level of classification performance in the early stages of metastasis development bodes well for clinical translation of this strategy.
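The best-performing model above is an ensemble of a 2D and a 3D network. One common fusion rule, averaging the two models' class probabilities, can be sketched as follows; the abstract does not specify the exact fusion rule, so this is an illustrative assumption:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_predict(logits_a, logits_b):
    """Average two models' class probabilities and return the argmax class
    (e.g., 0 = benign, 1 = malignant)."""
    probs = [(a + b) / 2 for a, b in zip(softmax(logits_a), softmax(logits_b))]
    return max(range(len(probs)), key=probs.__getitem__)
```

Probability-level averaging lets a confident model outvote an uncertain one, which is one reason ensembles of architecturally different networks (here 2D and 3D) often beat either member alone.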
