Results 1 - 20 of 45
1.
Lancet Oncol ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38876123

ABSTRACT

BACKGROUND: Artificial intelligence (AI) systems can potentially aid the diagnostic pathway of prostate cancer by alleviating the increasing workload, preventing overdiagnosis, and reducing the dependence on experienced radiologists. We aimed to investigate the performance of AI systems at detecting clinically significant prostate cancer on MRI in comparison with radiologists using the Prostate Imaging-Reporting and Data System version 2.1 (PI-RADS 2.1) and the standard of care in multidisciplinary routine practice at scale. METHODS: In this international, paired, non-inferiority, confirmatory study, we trained and externally validated an AI system (developed within an international consortium) for detecting Gleason grade group 2 or greater cancers using a retrospective cohort of 10 207 MRI examinations from 9129 patients. Of these examinations, 9207 cases from three centres (11 sites) based in the Netherlands were used for training and tuning, and 1000 cases from four centres (12 sites) based in the Netherlands and Norway were used for testing. In parallel, we facilitated a multireader, multicase observer study with 62 radiologists (45 centres in 20 countries; median 7 [IQR 5-10] years of experience in reading prostate MRI) using PI-RADS (2.1) on 400 paired MRI examinations from the testing cohort. Primary endpoints were the sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC) of the AI system in comparison with that of all readers using PI-RADS (2.1) and in comparison with that of the historical radiology readings made during multidisciplinary routine practice (ie, the standard of care with the aid of patient history and peer consultation). Histopathology and at least 3 years (median 5 [IQR 4-6] years) of follow-up were used to establish the reference standard. 
The statistical analysis plan was prespecified with a primary hypothesis of non-inferiority (considering a margin of 0·05) and a secondary hypothesis of superiority towards the AI system, if non-inferiority was confirmed. This study was registered at ClinicalTrials.gov, NCT05489341. FINDINGS: Of the 10 207 examinations included from Jan 1, 2012, through Dec 31, 2021, 2440 cases had histologically confirmed Gleason grade group 2 or greater prostate cancer. In the subset of 400 testing cases in which the AI system was compared with the radiologists participating in the reader study, the AI system showed a non-inferior and statistically superior AUROC of 0·91 (95% CI 0·87-0·94; p<0·0001), in comparison with the pool of 62 radiologists with an AUROC of 0·86 (0·83-0·89), with a lower boundary of the two-sided 95% Wald CI for the difference in AUROC of 0·02. At the mean PI-RADS 3 or greater operating point of all readers, the AI system detected 6·8% more cases with Gleason grade group 2 or greater cancers at the same specificity (57·7%, 95% CI 51·6-63·3), or 50·4% fewer false-positive results and 20·0% fewer cases with Gleason grade group 1 cancers at the same sensitivity (89·4%, 95% CI 85·3-92·9). In all 1000 testing cases in which the AI system was compared with the radiology readings made during multidisciplinary practice, non-inferiority was confirmed, even though the AI system showed marginally lower specificity (68·9% [95% CI 65·3-72·4] vs 69·0% [65·5-72·5]) at the same sensitivity (96·1%, 94·0-98·2) as the PI-RADS 3 or greater operating point: the lower boundary of the two-sided 95% Wald CI for the difference in specificity (-0·04) was greater than the non-inferiority margin (-0·05) and the p value was below the significance threshold (p<0·001). INTERPRETATION: An AI system was superior to radiologists using PI-RADS (2.1), on average, at detecting clinically significant prostate cancer and comparable to the standard of care.
Such a system shows the potential to be a supportive tool within a primary diagnostic setting, with several associated benefits for patients and radiologists. Prospective validation is needed to test clinical applicability of this system. FUNDING: Health~Holland and EU Horizon 2020.
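The hierarchical testing logic described above (compare the lower bound of a two-sided 95% Wald CI for a performance difference against a prespecified margin, then test superiority only if non-inferiority holds) can be sketched as follows. The function name and numbers are illustrative, not the study's raw data:

```python
def noninferiority_decision(diff, se, margin=0.05, z=1.96):
    """Hierarchical test on a performance difference (AI minus comparator).

    Builds a two-sided 95% Wald CI; non-inferiority holds when the lower
    bound exceeds -margin, and superiority when it also exceeds zero.
    """
    lo, hi = diff - z * se, diff + z * se
    return {
        "ci": (round(lo, 3), round(hi, 3)),
        "noninferior": lo > -margin,
        "superior": lo > -margin and lo > 0,
    }

# Illustrative numbers only: a 0.05 AUROC difference with SE 0.015 yields
# a lower CI bound of about 0.02, so both hypotheses are accepted.
print(noninferiority_decision(0.05, 0.015))
```

Note how a difference whose CI dips slightly below zero but stays above -0.05 would still count as non-inferior, which is exactly the situation described for the specificity comparison.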

2.
Neurospine ; 2024 May 20.
Article in English | MEDLINE | ID: mdl-38768945

ABSTRACT

Objective: Readmission rates after posterior cervical fusion (PCF) significantly impact patients and healthcare systems, with complication rates of 5%-15% and up to 12% 90-day readmission rates. In this study, we aim to test whether machine learning (ML) models that capture interfactorial interactions outperform traditional logistic regression (LR) in identifying readmission-associated factors. Methods: The Optum Clinformatics Data Mart database was used to identify patients who underwent PCF between 2004 and 2017. To determine factors associated with 30-day readmissions, 5 ML models were generated and evaluated, including a multivariate LR (MLR) model. Then, the best-performing model, the Gradient Boosting Machine (GBM), was compared with the LACE (Length of stay, Acuity of admission, Comorbidities, and Emergency department visits) index regarding potential cost savings from algorithm implementation. Results: This study included 4,130 patients, 874 of whom were readmitted within 30 days. When analyzed and scaled, we found that patient discharge status, comorbidities, and number of procedure codes influenced the MLR model, while patient discharge status, billed admission charge, and length of stay influenced the GBM model. The GBM model significantly outperformed MLR in predicting unplanned readmissions (mean area under the receiver operating characteristic curve, 0.846 vs. 0.829; p<0.001), while also projecting an average cost savings 50% greater than that of the LACE index. Conclusion: Five models (GBM, XGBoost [extreme gradient boosting], RF [random forest], LASSO [least absolute shrinkage and selection operator], and MLR) were evaluated, among which the GBM model exhibited superior predictive performance, robustness, and accuracy. Factors associated with readmissions impact the LR and GBM models differently, suggesting that these models can be used complementarily.
When analyzing PCF procedures, the GBM model resulted in greater predictive performance and was associated with higher theoretical cost savings for readmissions associated with PCF complications.
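The AUROC used to compare the GBM and MLR models equals the probability that a randomly chosen readmitted patient receives a higher risk score than a randomly chosen non-readmitted one (the Mann-Whitney interpretation). A minimal stdlib-only sketch on toy data, not the Optum cohort:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney statistic: the probability that a random
    positive (readmitted) case outscores a random negative case; ties 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy risk scores: one mis-ranked positive-negative pair out of four
# gives AUROC 0.75.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```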

3.
Comput Biol Med ; 173: 108318, 2024 May.
Article in English | MEDLINE | ID: mdl-38522253

ABSTRACT

Image registration can map the ground truth extent of prostate cancer from histopathology images onto MRI, facilitating the development of machine learning methods for early prostate cancer detection. Here, we present RAdiology PatHology Image Alignment (RAPHIA), an end-to-end pipeline for efficient and accurate registration of MRI and histopathology images. RAPHIA automates several time-consuming manual steps in existing approaches including prostate segmentation, estimation of the rotation angle and horizontal flipping in histopathology images, and estimation of MRI-histopathology slice correspondences. By utilizing deep learning registration networks, RAPHIA substantially reduces computational time. Furthermore, RAPHIA obviates the need for a multimodal image similarity metric by transferring histopathology image representations to MRI image representations and vice versa. With the assistance of RAPHIA, novice users achieved expert-level performance, and their mean error in estimating histopathology rotation angle was reduced by 51% (12 degrees vs 8 degrees), their mean accuracy of estimating histopathology flipping was increased by 5% (95.3% vs 100%), and their mean error in estimating MRI-histopathology slice correspondences was reduced by 45% (1.12 slices vs 0.62 slices). When compared to a recent conventional registration approach and a deep learning registration approach, RAPHIA achieved better mapping of histopathology cancer labels, with an improved mean Dice coefficient of cancer regions outlined on MRI and the deformed histopathology (0.44 vs 0.48 vs 0.50), and a reduced mean per-case processing time (51 vs 11 vs 4.5 min). The improved performance by RAPHIA allows efficient processing of large datasets for the development of machine learning models for prostate cancer detection on MRI. Our code is publicly available at: https://github.com/pimed/RAPHIA.
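The Dice coefficient reported for the mapped cancer labels measures the overlap between two binary masks, 2|A∩B|/(|A|+|B|). A minimal sketch on toy flattened masks (illustrative only, not RAPHIA's evaluation code):

```python
def dice(a, b):
    """Dice coefficient of two equal-length binary masks: 2|A∩B|/(|A|+|B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

# Two toy masks, each labelling two voxels, overlapping in one voxel.
print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```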


Subjects
Deep Learning, Prostatic Neoplasms, Radiology, Male, Humans, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods
4.
Eur Urol Open Sci ; 54: 20-27, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37545845

ABSTRACT

Background: Magnetic resonance imaging (MRI) underestimation of prostate cancer extent complicates the definition of focal treatment margins. Objective: To validate focal treatment margins produced by an artificial intelligence (AI) model. Design, setting, and participants: Testing was conducted retrospectively in an independent dataset of 50 consecutive patients who had radical prostatectomy for intermediate-risk cancer. An AI deep learning model incorporated multimodal imaging and biopsy data to produce three-dimensional cancer estimation maps and margins. AI margins were compared with conventional MRI regions of interest (ROIs), 10-mm margins around ROIs, and hemigland margins. The AI model also furnished predictions of negative surgical margin probability, which were assessed for accuracy. Outcome measurements and statistical analysis: Comparing AI with conventional margins, sensitivity was evaluated using Wilcoxon signed-rank tests and negative margin rates using chi-square tests. Predicted versus observed negative margin probability was assessed using linear regression. Clinically significant prostate cancer (International Society of Urological Pathology grade ≥2) delineated on whole-mount histopathology served as ground truth. Results and limitations: The mean sensitivity for cancer-bearing voxels was higher for AI margins (97%) than for conventional ROIs (37%, p < 0.001), 10-mm ROI margins (93%, p = 0.24), and hemigland margins (94%, p < 0.001). For index lesions, AI margins were more often negative (90%) than conventional ROIs (0%, p < 0.001), 10-mm ROI margins (82%, p = 0.24), and hemigland margins (66%, p = 0.004). Predicted and observed negative margin probabilities were strongly correlated (R2 = 0.98, median error = 4%). Limitations include a validation dataset derived from a single institution's prostatectomy population. Conclusions: The AI model was accurate and effective in an independent test set.
This approach could improve and standardize treatment margin definition, potentially reducing cancer recurrence rates. Furthermore, an accurate assessment of negative margin probability could facilitate informed decision-making for patients and physicians. Patient summary: Artificial intelligence was used to predict the extent of tumors in surgically removed prostate specimens. It predicted tumor margins more accurately than conventional methods.
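The voxel-level sensitivity evaluated above is the fraction of ground-truth cancer voxels that fall inside a proposed treatment margin. A toy sketch using hypothetical 2-D voxel coordinates in place of the study's 3-D maps:

```python
def margin_sensitivity(cancer_voxels, margin_voxels):
    """Fraction of ground-truth cancer voxels covered by a treatment margin."""
    return len(cancer_voxels & margin_voxels) / len(cancer_voxels)

# Hypothetical voxel sets: the margin covers 3 of 4 tumor voxels.
tumor = {(1, 1), (1, 2), (2, 2), (3, 3)}
margin = {(1, 1), (1, 2), (2, 2), (2, 3)}
print(margin_sensitivity(tumor, margin))  # 0.75
```

A margin is "negative" in the abstract's sense when this fraction reaches 1.0 for the clinically significant tumor.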

5.
Spine (Phila Pa 1976) ; 48(17): 1224-1233, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37027190

ABSTRACT

STUDY DESIGN: A retrospective cohort study. OBJECTIVE: To identify the factors associated with readmissions after posterior lumbar fusion (PLF) using machine learning and logistic regression (LR) models. SUMMARY OF BACKGROUND DATA: Readmissions after PLF place a significant health and financial burden on the patient and the overall health care system. MATERIALS AND METHODS: The Optum Clinformatics Data Mart database was used to identify patients who underwent posterior lumbar laminectomy, fusion, and instrumentation between 2004 and 2017. Four machine learning models and a multivariable LR model were used to assess the factors most closely associated with 30-day readmission. These models were also evaluated in terms of their ability to predict unplanned 30-day readmissions. The top-performing model (Gradient Boosting Machine; GBM) was then compared with the validated LACE index in terms of potential cost savings associated with the implementation of the model. RESULTS: A total of 18,981 patients were included, of whom 3080 (16.2%) were readmitted within 30 days of initial admission. Discharge status, prior admission, and geographic division were most influential for the LR model, whereas discharge status, length of stay, and prior admissions had the greatest relevance for the GBM model. GBM outperformed LR in predicting unplanned 30-day readmission (mean area under the receiver operating characteristic curve 0.865 vs. 0.850, P <0.0001). The use of GBM also achieved a projected 80% decrease in readmission-associated costs relative to those achieved by the LACE index. CONCLUSIONS: The factors associated with readmission vary in predictive influence based on the standard LR and machine learning models used, highlighting the complementary roles these models have in identifying relevant factors for the prediction of 30-day readmissions. For PLF procedures, GBM yielded the greatest predictive ability and associated cost savings for readmission. LEVEL OF EVIDENCE: 3.
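For context, the LACE index used as the comparator combines Length of stay, Acuity of admission, Comorbidity burden, and recent Emergency department visits into a single readmission-risk score. The sketch below follows the commonly cited van Walraven point assignments from memory; verify the exact cut-offs against the original publication before relying on it:

```python
def lace_score(los_days, emergent, charlson, ed_visits):
    """LACE readmission-risk index (van Walraven-style scoring; the exact
    point bands here are a from-memory sketch, not clinically validated)."""
    if los_days < 1:
        l = 0
    elif los_days <= 3:
        l = los_days          # 1-3 days score their own value
    elif los_days <= 6:
        l = 4
    elif los_days <= 13:
        l = 5
    else:
        l = 7
    a = 3 if emergent else 0  # acuity: emergent admission adds 3
    c = charlson if charlson <= 3 else 5  # comorbidity, capped at 5
    e = min(ed_visits, 4)     # ED visits in prior 6 months, capped at 4
    return l + a + c + e

# A 5-day emergent stay, Charlson 2, one prior ED visit -> 4 + 3 + 2 + 1.
print(lace_score(5, True, 2, 1))  # 10
```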


Assuntos
Hospitalização , Readmissão do Paciente , Humanos , Estudos Retrospectivos , Fatores de Risco , Aprendizado de Máquina
6.
IEEE Trans Med Imaging ; 42(3): 697-712, 2023 03.
Article in English | MEDLINE | ID: mdl-36264729

ABSTRACT

Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. Continuous evaluation is possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art in medical image registration. This paper describes the datasets, tasks, evaluation methods, and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push medical image registration to a new state of the art. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.


Subjects
Abdominal Cavity, Deep Learning, Humans, Algorithms, Brain/diagnostic imaging, Abdomen/diagnostic imaging, Image Processing, Computer-Assisted/methods
7.
Eur Urol Focus ; 9(4): 584-591, 2023 07.
Article in English | MEDLINE | ID: mdl-36372735

ABSTRACT

BACKGROUND: Tissue preservation strategies have been increasingly used for the management of localized prostate cancer. Focal ablation using ultrasound-guided high-intensity focused ultrasound (HIFU) has demonstrated promising short- and medium-term oncological outcomes. Advancements in HIFU therapy, such as the introduction of tissue change monitoring (TCM), aim to further improve treatment efficacy. OBJECTIVE: To evaluate the association between intraoperative TCM during HIFU focal therapy for localized prostate cancer and oncological outcomes 12 mo afterward. DESIGN, SETTING, AND PARTICIPANTS: Seventy consecutive men at a single institution with prostate cancer were prospectively enrolled. Men with prior treatment, metastases, or pelvic radiation were excluded to obtain a final cohort of 55 men. INTERVENTION: All men underwent HIFU focal therapy followed by magnetic resonance (MR)-fusion biopsy 12 mo later. Tissue change was quantified intraoperatively by measuring the backscatter of ultrasound waves during ablation. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Gleason grade group (GG) ≥2 cancer on postablation biopsy was the primary outcome. Secondary outcomes included GG ≥1 cancer, Prostate Imaging Reporting and Data System (PI-RADS) scores ≥3, and evidence of tissue destruction on post-treatment magnetic resonance imaging (MRI). A Student's t-test was performed to evaluate mean TCM scores and the efficacy of ablation as measured by histopathology. Multivariate logistic regression was also performed to identify the odds of residual cancer for each unit increase in the TCM score. RESULTS AND LIMITATIONS: A lower mean TCM score within the region of the tumor (0.70 vs 0.97, p = 0.02) was associated with the presence of persistent GG ≥2 cancer after HIFU treatment.
Adjusting for initial prostate-specific antigen, PI-RADS score, Gleason GG, positive cores, and age, each incremental increase of TCM was associated with an 89% reduction in the odds (odds ratio: 0.11, confidence interval: 0.01-0.97) of having residual GG ≥2 cancer on postablation biopsy. Men with higher mean TCM scores (0.99 vs 0.72, p = 0.02) at the time of treatment were less likely to have abnormal MRI (PI-RADS ≥3) at 12 mo postoperatively. Cases with high TCM scores also had greater tissue destruction measured on MRI and fewer visible lesions on postablation MRI. CONCLUSIONS: Tissue change measured using TCM values during focal HIFU of the prostate was associated with histopathology and radiological outcomes 12 mo after the procedure. PATIENT SUMMARY: In this report, we looked at how well ultrasound changes of the prostate during focal high-intensity focused ultrasound (HIFU) therapy for the treatment of prostate cancer predict patient outcomes. We found that greater tissue change measured by the HIFU device was associated with less residual cancer at 1 yr. This tool should be used to ensure optimal ablation of the cancer and may improve focal therapy outcomes in the future.
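The reported odds ratio of 0.11 per unit increase in TCM corresponds to the quoted 89% reduction in odds, a one-line computation:

```python
def odds_reduction_pct(odds_ratio):
    """Percent reduction in odds implied by an odds ratio below 1."""
    return round((1 - odds_ratio) * 100)

# OR 0.11 per unit TCM increase -> 89% lower odds of residual GG >=2 cancer.
print(odds_reduction_pct(0.11))  # 89
```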


Subjects
Extracorporeal Shockwave Therapy, Prostatic Neoplasms, Male, Humans, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/surgery, Magnetic Resonance Imaging/methods, Neoplasm, Residual, Treatment Outcome, Image-Guided Biopsy
8.
Ther Adv Urol ; 14: 17562872221128791, 2022.
Article in English | MEDLINE | ID: mdl-36249889

ABSTRACT

A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.

9.
Med Image Anal ; 82: 102620, 2022 11.
Article in English | MEDLINE | ID: mdl-36148705

ABSTRACT

Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7 mm and Dice: 82.0±0.03; HD95: 7.1 mm). 
We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
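The knowledge-distillation term described above penalises the finetuned (student) model for drifting from the original (teacher) model's predictions, which is what preserves previously learned knowledge. A minimal stdlib sketch of one common form, cross-entropy between temperature-softened output distributions; this is our simplification for illustration, not the authors' exact loss:

```python
import math

def softmax(logits, t=1.0):
    """Temperature-scaled softmax; larger t softens the distribution."""
    exps = [math.exp(z / t) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """Cross-entropy between softened teacher and student distributions:
    minimal when the student reproduces the teacher's outputs."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# Identical logits give the minimum (the teacher distribution's entropy);
# diverging logits increase the loss.
same = distillation_loss([2.0, 0.5], [2.0, 0.5])
drift = distillation_loss([0.5, 2.0], [2.0, 0.5])
print(same < drift)  # True
```

In practice this term would be added, with a weighting factor, to the supervised segmentation loss on the new institution's data.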


Subjects
Neural Networks, Computer, Prostate, Humans, Male, Prostate/diagnostic imaging, Ultrasonography, Magnetic Resonance Imaging/methods, Pelvis
10.
Urol Oncol ; 40(11): 489.e9-489.e17, 2022 11.
Article in English | MEDLINE | ID: mdl-36058811

ABSTRACT

PURPOSE: To evaluate the performance of multiparametric magnetic resonance imaging (mpMRI) and PSA testing in follow-up after high-intensity focused ultrasound (HIFU) focal therapy for localized prostate cancer. METHODS: A total of 73 men with localized prostate cancer were prospectively enrolled and underwent focal HIFU followed by per-protocol PSA and mpMRI with systematic plus targeted biopsies at 12 months after treatment. We evaluated the association of post-treatment mpMRI and PSA with disease persistence on the post-ablation biopsy. We also assessed post-treatment functional and oncological outcomes. RESULTS: Median age was 69 years (interquartile range [IQR]: 66-74) and median PSA was 6.9 ng/mL (IQR: 5.3-9.9). Of 19 men with persistent GG ≥ 2 disease, 58% (11 men) had no visible lesions on MRI. In the 14 men with PI-RADS 4 or 5 lesions, 7 (50%) had either no cancer or GG 1 cancer at biopsy. Men with false-negative mpMRI findings had higher PSA density (0.16 vs. 0.07 ng/mL², P = 0.01). No change occurred in the mean Sexual Health Inventory for Men (SHIM) survey scores (17.0 at baseline vs. 17.7 post-treatment, P = 0.75) or International Prostate Symptom Score (IPSS) (8.1 at baseline vs. 7.7 at 24 months, P = 0.81) after treatment. CONCLUSIONS: Persistent GG ≥ 2 cancer may occur after focal HIFU. mpMRI alone without confirmatory biopsy may be insufficient to rule out residual cancer, especially in patients with higher PSA density. Our study also validates previously published studies demonstrating preservation of urinary and sexual function after HIFU treatment.
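PSA density, the marker flagged above as associated with false-negative mpMRI, is simply serum PSA divided by prostate volume. A one-line sketch with a hypothetical gland volume (the study does not report individual volumes):

```python
def psa_density(psa_ng_ml, prostate_volume_ml):
    """PSA density: serum PSA divided by (typically MRI-estimated) volume."""
    return round(psa_ng_ml / prostate_volume_ml, 2)

# Hypothetical example: PSA 6.9 ng/mL with a 45 mL gland -> 0.15 ng/mL^2,
# close to the mean density reported for the false-negative mpMRI group.
print(psa_density(6.9, 45))  # 0.15
```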


Subjects
Multiparametric Magnetic Resonance Imaging, Prostatic Neoplasms, Male, Humans, Aged, Prostate/pathology, Prostate-Specific Antigen, Neoplasm, Residual, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/surgery, Disease Progression
11.
Brief Bioinform ; 23(5)2022 09 20.
Article in English | MEDLINE | ID: mdl-35947990

ABSTRACT

Liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomics provides systematic profiling of metabolites. Yet, its applications in precision medicine (disease diagnosis) have been limited by several challenges, including metabolite identification, information loss, and low reproducibility. Here, we present the deep-learning-based Pseudo-Mass Spectrometry Imaging (deepPseudoMSI) project (https://www.deeppseudomsi.org/), which converts LC-MS raw data to pseudo-MS images and then processes them by deep learning for precision medicine, such as disease diagnosis. Extensive tests based on real data demonstrated the superiority of deepPseudoMSI over traditional approaches and the capacity of our method to achieve an accurate individualized diagnosis. Our framework lays the foundation for future metabolomics-based precision medicine.


Subjects
Deep Learning, Chromatography, Liquid/methods, Mass Spectrometry/methods, Metabolomics/methods, Precision Medicine, Reproducibility of Results
12.
Cancers (Basel) ; 14(12)2022 Jun 07.
Article in English | MEDLINE | ID: mdl-35740487

ABSTRACT

The localization of extraprostatic extension (EPE), i.e., local spread of prostate cancer beyond the prostate capsular boundary, is important for risk stratification and surgical planning. However, the sensitivity of EPE detection by radiologists on MRI is low (57% on average). In this paper, we propose a method for computational detection of EPE on multiparametric MRI using deep learning. Ground truth labels of cancers and EPE were obtained in 123 patients (38 with EPE) by registering pre-surgical MRI with whole-mount digital histopathology images from radical prostatectomy. Our approach has two stages. First, we trained deep learning models using the MRI as input to generate cancer probability maps both inside and outside the prostate. Second, we built an image post-processing pipeline that generates predictions for EPE location based on the cancer probability maps and clinical knowledge. We used five-fold cross-validation to train our approach using data from 74 patients and tested it using data from an independent set of 49 patients. We compared two deep learning models for cancer detection: (i) UNet and (ii) the Correlated Signature Network for Indolent and Aggressive prostate cancer detection (CorrSigNIA). The best end-to-end model for EPE detection, which we call EPENet, was based on the CorrSigNIA cancer detection model. EPENet was successful at detecting cancers with extraprostatic extension, achieving a mean area under the receiver operating characteristic curve of 0.72 at the patient level. On the test set, EPENet had 80.0% sensitivity and 28.2% specificity at the patient level, compared with 50.0% sensitivity and 76.9% specificity for the radiologists. To account for the spatial location of predictions during evaluation, we also computed results at the sextant level, where the prostate was divided into sextants according to the standard systematic 12-core biopsy procedure. At the sextant level, EPENet achieved a mean sensitivity of 61.1% and a mean specificity of 58.3%.
Our approach has the potential to provide the location of extraprostatic extension using MRI alone, thus serving as an independent diagnostic aid to radiologists and facilitating treatment planning.
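The patient-level operating point above reduces to sensitivity and specificity computed from confusion counts. The counts below are illustrative values chosen to reproduce the reported percentages, not the study's actual tallies:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity TP/(TP+FN) and specificity TN/(TN+FP) from counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 8/10 EPE-positive patients flagged (80.0%
# sensitivity) and 11/39 EPE-negative patients correctly cleared
# (28.2% specificity) mirror the reported patient-level operating point.
se, sp = sens_spec(8, 2, 11, 28)
print(round(se, 3), round(sp, 3))  # 0.8 0.282
```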

13.
Med Phys ; 49(8): 5160-5181, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35633505

ABSTRACT

BACKGROUND: Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, magnetic resonance imaging (MRI) is considered the most sensitive non-invasive imaging modality that enables visualization, detection, and localization of prostate cancer, and is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited due to high rates of false positives and false negatives as well as low inter-reader agreement. PURPOSE: Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture, but also in the ground truth labeling strategies used for model training. We compare different labeling strategies and the effects they have on the performance of different machine learning models for prostate cancer detection on MRI. METHODS: Four different deep learning models (SPCNet, U-Net, branched U-Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients with radical prostatectomy, and evaluated using 40 patients with radical prostatectomy and 275 patients with targeted biopsy. Each deep learning model was trained with four different label types: pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels (generated by a previously validated deep learning algorithm that predicts pixel-level Gleason patterns on histopathology images) on whole-mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre-operative MRI using an automated MRI-histopathology registration platform. RESULTS: Radiologist labels missed cancers (ROC-AUC: 0.75-0.84), had lower lesion volumes (~68% of pathology lesions), and lower Dice overlaps (0.24-0.28) when compared with pathology labels.
Consequently, machine learning models trained with radiologist labels also showed inferior performance compared to models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC-AUC: 0.97-1, lesion Dice: 0.75-0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC-AUC: 0.91-0.94), and had generalizable performance comparable to pathologist-label-trained models in the targeted biopsy cohort (aggressive lesion ROC-AUC: 0.87-0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel-level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human-annotated label type. CONCLUSIONS: Machine learning models for prostate MRI interpretation that are trained with digital pathologist labels showed performance higher than or comparable to that of pathologist-label-trained models in both the radical prostatectomy and targeted biopsy cohorts. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, and inter- and intra-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.


Subjects
Prostatic Neoplasms, Radiology, Humans, Machine Learning, Magnetic Resonance Imaging/methods, Male, Prostate/diagnostic imaging, Prostate/pathology, Prostatectomy, Prostatic Neoplasms/pathology
14.
J Nucl Med ; 63(12): 1829-1835, 2022 12.
Article in English | MEDLINE | ID: mdl-35552245

ABSTRACT

68Ga-RM2 targets gastrin-releasing peptide receptors (GRPRs), which are overexpressed in prostate cancer (PC). Here, we compared preoperative 68Ga-RM2 PET to postsurgery histopathology in patients with newly diagnosed intermediate- or high-risk PC. Methods: Forty-one men, 64.0 ± 6.7 y old, were prospectively enrolled. PET images were acquired 42-72 min (median ± SD, 52.5 ± 6.5 min) after injection of 118.4-247.9 MBq (median ± SD, 138.0 ± 22.2 MBq) of 68Ga-RM2. PET findings were compared with preoperative multiparametric MRI (mpMRI) (n = 36) and 68Ga-PSMA11 PET (n = 17) and correlated to postprostatectomy whole-mount histopathology (n = 32) and time to biochemical recurrence. Nine participants decided to undergo radiation therapy after study enrollment. Results: All participants had intermediate- (n = 17) or high-risk (n = 24) PC and were scheduled for prostatectomy. Prostate-specific antigen was 8.8 ± 77.4 ng/mL (range, 2.5-504 ng/mL) and 7.6 ± 5.3 ng/mL (range, 2.5-28.0 ng/mL) when participants who ultimately underwent radiation treatment were excluded. Preoperative 68Ga-RM2 PET identified 70 intraprostatic foci of uptake in 40 of 41 patients. Postprostatectomy histopathology was available in 32 patients, in whom 68Ga-RM2 PET identified 50 of 54 intraprostatic lesions (detection rate = 93%). 68Ga-RM2 uptake was recorded in 19 nonenlarged pelvic lymph nodes in 6 patients. Pathology confirmed lymph node metastases in 16 lesions, and follow-up imaging confirmed nodal metastases in 2 lesions. 68Ga-PSMA11 and 68Ga-RM2 PET identified 27 and 26 intraprostatic lesions, respectively, and 5 pelvic lymph nodes each in 17 patients. Concordance between 68Ga-RM2 and 68Ga-PSMA11 PET was found in 18 prostatic lesions in 11 patients and 4 lymph nodes in 2 patients. Noncongruent findings were observed in 6 patients (intraprostatic lesions in 4 patients and nodal lesions in 2 patients).
Sensitivity and accuracy rates for 68Ga-RM2 and 68Ga-PSMA11 (98% and 89% for 68Ga-RM2 and 95% and 89% for 68Ga-PSMA11) were higher than those for mpMRI (77% and 77%, respectively). Specificity was highest for mpMRI with 75% followed by 68Ga-PSMA11 (67%) and 68Ga-RM2 (65%). Conclusion: 68Ga-RM2 PET accurately detects intermediate- and high-risk primary PC, with a detection rate of 93%. In addition, 68Ga-RM2 PET showed significantly higher specificity and accuracy than mpMRI and a performance similar to 68Ga-PSMA11 PET. These findings need to be confirmed in larger studies to identify which patients will benefit from one or the other or both radiopharmaceuticals.
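The sensitivity, specificity, and accuracy rates above all derive from per-lesion confusion counts. A minimal sketch of that arithmetic (the counts below are invented for illustration, not the study's underlying data):

```python
# Illustrative only: how per-lesion sensitivity, specificity, and accuracy
# follow from true/false positive and negative counts.

def rates(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                  # fraction of true lesions detected
    specificity = tn / (tn + fp)                  # fraction of negatives correctly called
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # fraction of all calls that were right
    return sensitivity, specificity, accuracy

sens, spec, acc = rates(tp=50, fp=7, tn=13, fn=1)  # hypothetical counts
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```

Note that with few true-negative lesions in a surgical cohort, specificity estimates of this kind carry wide confidence intervals.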


Subjects
Gallium Radioisotopes, Prostatic Neoplasms, Male, Humans, Oligopeptides, Receptors, Bombesin, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/surgery, Prostatic Neoplasms/pathology, Prostatectomy, Positron-Emission Tomography/methods, Positron Emission Tomography Computed Tomography/methods
15.
Med Image Anal ; 78: 102427, 2022 05.
Article in English | MEDLINE | ID: mdl-35344824

ABSTRACT

In this paper, we consider image quality assessment (IQA) as a measure of how amenable images are to a given downstream task, i.e., their task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict task amenability; the controller, itself parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability of both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach using two clinical applications: ultrasound-guided prostate intervention and pneumonia detection on X-ray images.
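As a loose illustration of the idea (not the paper's actual implementation, which parameterises the controller as a neural network trained by reinforcement learning), one can think of the controller as choosing which images to keep and receiving the task predictor's performance on the kept images as its reward. A toy sketch with a scalar quality threshold standing in for a learned controller, on synthetic data:

```python
import random

# Toy sketch of "task amenability": a controller picks a quality threshold,
# a stand-in task predictor is scored only on the selected images, and that
# score is the controller's reward. All data here is synthetic.
random.seed(0)

# Each "image" has a quality q in [0, 1]; the task succeeds on it with
# probability 0.5 + 0.5*q, so low-quality images drag performance down.
images = [random.random() for _ in range(2000)]
outcome = [random.random() < 0.5 + 0.5 * q for q in images]

def reward(threshold, min_keep=0.1):
    kept = [ok for q, ok in zip(images, outcome) if q >= threshold]
    if len(kept) < min_keep * len(images):  # don't discard almost everything
        return 0.0
    return sum(kept) / len(kept)            # task accuracy on selected images

# Brute-force "policy search" over thresholds in place of RL updates.
best_t = max((t / 20 for t in range(20)), key=reward)
print(f"best threshold={best_t:.2f} reward={reward(best_t):.3f}")
```

The reward rises as low-amenability images are excluded, until the minimum-keep constraint bites; the paper's contribution is learning this selection jointly with the task predictor and making both adaptable across meta-tasks.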


Subjects
Machine Learning, Neural Networks, Computer, Algorithms, Humans, Image Processing, Computer-Assisted/methods, Male, Ultrasonography
16.
Med Image Anal ; 75: 102288, 2022 01.
Article in English | MEDLINE | ID: mdl-34784540

ABSTRACT

Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern≥4) and indolent (Gleason Pattern=3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 men who underwent radical prostatectomy and 24 men with normal prostate MRI. CorrSigNIA was tested on three independent test sets including 55 men who underwent radical prostatectomy, 275 men who underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81±0.31 in detecting cancers in both radical prostatectomy and biopsy cohort patients, and lesion-level ROC-AUCs of 0.82±0.31 and 0.86±0.26 in detecting clinically significant cancers in radical prostatectomy and biopsy cohort patients, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts.
In clinical settings, CorrSigNIA may be used in prostate cancer detection as well as in selective identification of indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and selecting and planning treatment.
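A lesion-level ROC-AUC such as those reported above can be read as the probability that a randomly chosen cancerous lesion receives a higher model score than a randomly chosen benign one. A minimal pairwise implementation of that rank statistic (toy scores, not study data):

```python
# Pairwise (Mann-Whitney) formulation of ROC-AUC: count how often a positive
# outscores a negative, with ties worth half a win.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 positive lesions, 2 negatives.
print(roc_auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0]))  # 5/6 ≈ 0.833
```

This pairwise form is O(P·N); production metrics libraries compute the same quantity from the sorted ROC curve instead.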


Subjects
Deep Learning, Prostatic Neoplasms, Humans, Magnetic Resonance Imaging, Male, Prostate/diagnostic imaging, Prostatectomy, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/surgery
17.
Med Image Anal ; 72: 102140, 2021 08.
Article in English | MEDLINE | ID: mdl-34214957

ABSTRACT

Pulmonary respiratory motion artifacts are common in four-dimensional computed tomography (4DCT) of lungs and are caused by missing, duplicated, and misaligned image data. This paper presents a geodesic density regression (GDR) algorithm to correct motion artifacts in 4DCT by correcting artifacts in one breathing phase with artifact-free data from corresponding regions of other breathing phases. The GDR algorithm estimates an artifact-free lung template image and a smooth, dense, 4D (space plus time) vector field that deforms the template image to each breathing phase to produce an artifact-free 4DCT scan. Correspondences are estimated by accounting for the local tissue density change associated with air entering and leaving the lungs, and using binary artifact masks to exclude regions with artifacts from image regression. The artifact-free lung template image is generated by mapping the artifact-free regions of each phase volume to a common reference coordinate system using the estimated correspondences and then averaging. This procedure generates a fixed view of the lung with an improved signal-to-noise ratio. The GDR algorithm was evaluated and compared to a state-of-the-art geodesic intensity regression (GIR) algorithm using simulated CT time-series and 4DCT scans with clinically observed motion artifacts. The simulation shows that the GDR algorithm has achieved significantly more accurate Jacobian images and sharper template images, and is less sensitive to data dropout than the GIR algorithm. We also demonstrate that the GDR algorithm is more effective than the GIR algorithm for removing clinically observed motion artifacts in treatment planning 4DCT scans. Our code is freely available at https://github.com/Wei-Shao-Reg/GDR.
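The "local tissue density change" the GDR algorithm accounts for follows from mass conservation: if a lung region expands by a factor equal to the Jacobian determinant J of the deformation, its CT-derived density scales by 1/J. A toy sketch of that correction (illustrative values only, not the paper's data):

```python
# Mass conservation under deformation: mass = density * volume is preserved,
# and local volume scales by the Jacobian determinant J of the transform,
# so the deformed density is the template density divided by J.

def deformed_density(density, jacobian_det):
    return [d / j for d, j in zip(density, jacobian_det)]

# Three hypothetical voxels: one expanding on inhale (J>1), one unchanged,
# one compressing (J<1).
inhale = deformed_density([100.0, 80.0, 60.0], [1.25, 1.0, 0.8])
print(inhale)
```

Regressing density rather than raw intensity is what lets the algorithm match corresponding tissue across breathing phases even though its brightness changes as air enters and leaves.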


Subjects
Four-Dimensional Computed Tomography, Lung Neoplasms, Algorithms, Artifacts, Humans, Lung/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Motion, Respiration
18.
J Urol ; 206(3): 604-612, 2021 09.
Article in English | MEDLINE | ID: mdl-33878887

ABSTRACT

PURPOSE: Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on magnetic resonance imaging (MRI) is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine magnetic resonance-ultrasound fusion biopsy in the clinic. MATERIALS AND METHODS: A total of 905 subjects underwent multiparametric MRI at 29 institutions, followed by magnetic resonance-ultrasound fusion biopsy at 1 institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to 2 deep learning networks (U-Net and holistically-nested edge detector) and radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests. RESULTS: ProGNet (DSC=0.92) outperformed U-Net (DSC=0.85, p <0.0001), holistically-nested edge detector (DSC=0.80, p <0.0001), and radiology technicians (DSC=0.89, p <0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC=0.93) outperformed radiology technicians (DSC=0.90, p <0.0001). ProGNet took just 35 seconds per case (vs 10 minutes for radiology technicians) to yield a clinically utilizable segmentation file. CONCLUSIONS: This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urological clinical practice, while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy.
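The Dice similarity coefficient used to compare ProGNet against the baselines is twice the overlap of two masks divided by the sum of their sizes. A minimal per-pixel implementation on flattened binary masks (toy masks, not study data):

```python
# DSC = 2|A ∩ B| / (|A| + |B|) on binary segmentation masks; 1.0 means
# perfect overlap, 0.0 means none. Two empty masks are treated as identical.

def dice(mask_a, mask_b):
    a = sum(mask_a)
    b = sum(mask_b)
    intersection = sum(x and y for x, y in zip(mask_a, mask_b))
    return 2 * intersection / (a + b) if (a + b) else 1.0

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
print(dice(pred, truth))  # 2*2/(3+3) ≈ 0.667
```

In practice the masks are 3D volumes; flattening them first, as here, gives the same coefficient.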


Subjects
Deep Learning, Image Processing, Computer-Assisted/methods, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnosis, Datasets as Topic, Feasibility Studies, Humans, Image-Guided Biopsy/methods, Magnetic Resonance Imaging, Interventional, Male, Multimodal Imaging/methods, Multiparametric Magnetic Resonance Imaging, Proof of Concept Study, Prospective Studies, Prostate/pathology, Prostatic Neoplasms/pathology, Reproducibility of Results, Retrospective Studies, Software, Time Factors, Ultrasonography, Interventional/methods
19.
Med Phys ; 48(6): 2960-2972, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33760269

ABSTRACT

PURPOSE: While multi-parametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including six patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS: Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached that of radiologists in detecting clinically significant cancer. CONCLUSIONS: Our SPCNet model accurately detected aggressive prostate cancer.
Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.


Subjects
Prostatic Neoplasms, Humans, Magnetic Resonance Imaging, Male, Neoplasm Grading, Prostatectomy, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/surgery
20.
Med Image Anal ; 69: 101957, 2021 04.
Article in English | MEDLINE | ID: mdl-33550008

ABSTRACT

The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. 
The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
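The PSNR comparison above (32.15 dB vs 30.16 dB) follows the standard definition, 10·log10(peak²/MSE), so a higher figure means a smaller mean squared error relative to the image's peak intensity. A minimal sketch (toy intensity values, not the paper's volumes):

```python
import math

# Peak signal-to-noise ratio between a reference image and a reconstruction,
# both given here as flat lists of intensities. Identical images give
# infinite PSNR since the MSE is zero.

def psnr(reference, reconstruction, peak=None):
    peak = peak if peak is not None else max(reference)
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

print(round(psnr([255.0, 128.0, 64.0], [250.0, 130.0, 60.0]), 2))
```

Because the scale is logarithmic, the ~2 dB gap reported above corresponds to roughly a 37% reduction in mean squared error.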


Subjects
Magnetic Resonance Imaging, Prostatic Neoplasms, Humans, Male, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/surgery