1.
medRxiv; 2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38260466

ABSTRACT

Purpose: The use of MRI-targeted biopsies has led to lower detection of Gleason Grade Group 1 (GG1) prostate cancer and increased detection of GG2 disease. Although this finding is generally attributed to the improved sensitivity and specificity of MRI for aggressive cancers, it might also be explained by grade inflation. Our objective was to determine the likelihood of definitive treatment and the risk of post-treatment recurrence for patients with GG2 cancer diagnosed using targeted biopsies relative to men with GG1 cancer diagnosed using systematic biopsies.

Methods: We performed a retrospective study on a large tertiary centre registry (HUS Acamedic Datalake) to retrieve data on prostate cancer diagnosis, treatment, and cancer recurrence. We included patients diagnosed between 1993 and 2019 with either GG1 cancer on systematic biopsy (3317 men) or GG2 cancer on targeted biopsy (554 men). We assessed the risk of curative treatment and of recurrence after treatment. Kaplan-Meier survival curves were computed to assess treatment-free and recurrence-free survival, and Cox proportional hazards regression was performed to assess the risk of post-treatment recurrence.

Results: Patients with systematic-biopsy-detected GG1 cancer had a significantly longer median time to treatment (31 months) than those with targeted-biopsy-detected GG2 cancer (4 months, p<0.0001). The risk of recurrence after curative treatment was similar between the groups, with the upper bound of the 95% CI excluding an important difference (HR 0.94, 95% CI 0.71-1.25, p=0.7).

Conclusion: GG2 cancers detected by MRI-targeted biopsy are treated more aggressively than GG1 cancers detected by systematic biopsy, despite having similar oncologic risk. To prevent further overtreatment related to the MRI pathway, treatment guidelines from the pre-MRI era need to be updated to account for changes in the diagnostic pathway.
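To make the survival analysis concrete, below is a minimal sketch of how treatment-free/recurrence-free survival curves and a recurrence hazard ratio could be computed with the Python lifelines library. The file name, column names, and group labels are illustrative assumptions; this is not the authors' registry data or their actual analysis code.

```python
# Illustrative sketch only: hypothetical data and column names,
# not the HUS Acamedic Datalake or the authors' analysis.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical cohort table, one row per patient:
#   group    -- "GG1_systematic" or "GG2_targeted" (assumed labels)
#   months   -- follow-up time to recurrence or censoring
#   recurred -- 1 if post-treatment recurrence observed, else 0
df = pd.read_csv("cohort.csv")  # hypothetical file
df["gg2_targeted"] = (df["group"] == "GG2_targeted").astype(int)

# Kaplan-Meier recurrence-free survival curve per group.
kmf = KaplanMeierFitter()
ax = None
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["recurred"], label=str(name))
    ax = kmf.plot_survival_function(ax=ax)

# Cox proportional hazards model with the group indicator as covariate
# (confounders would be added in a real analysis).
cox = CoxPHFitter()
cox.fit(df[["months", "recurred", "gg2_targeted"]],
        duration_col="months", event_col="recurred")
cox.print_summary()  # reports an HR and 95% CI, analogous to HR 0.94 [0.71-1.25]
```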

2.
Commun Med (Lond); 3(1): 139, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37803172

ABSTRACT

BACKGROUND: Classifying samples in incomplete datasets is a common aim for machine learning practitioners, but it is non-trivial. Missing data are found in most real-world datasets, and the missing values are typically imputed using established methods before the now-complete samples are classified. The machine learning researcher's focus is to optimise the classifier's performance.

METHODS: We utilise three simulated and three real-world clinical datasets with different feature types and missingness patterns. Initially, we evaluate how downstream classifier performance depends on the choice of classifier and imputation method. We employ ANOVA to quantify how the missingness rate, imputation method, and classifier method influence performance. Additionally, we compare commonly used methods for assessing imputation quality and introduce a class of discrepancy scores based on the sliced Wasserstein distance. We also assess the stability of the imputations and the interpretability of models built on the imputed data.

RESULTS: Classifier performance is most affected by the percentage of missingness in the test data, with a considerable performance decline observed as the test missingness rate increases. We also show that commonly used measures of imputation quality tend to lead to imputed data that poorly match the underlying data distribution, whereas our new class of discrepancy scores performs much better on this measure. Furthermore, we show that the interpretability of classifier models trained on poorly imputed data is compromised.

CONCLUSIONS: It is imperative to consider the quality of the imputation when performing downstream classification, as the effects on the classifier can be considerable.
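The core idea behind a sliced-Wasserstein discrepancy is to project the data onto many random directions and average the 1-D Wasserstein distances between the projections. The sketch below shows that idea in Python; the function name, parameters, and synthetic data are illustrative assumptions, not the paper's exact scores.

```python
# Minimal sketch of a sliced-Wasserstein discrepancy between an imputed
# dataset and a reference (complete) dataset. Assumed, simplified form:
# the paper's actual class of discrepancy scores may differ in detail.
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(X, Y, n_projections=100, seed=None):
    """Average 1-D Wasserstein distance over random projections.

    X, Y : (n_samples, n_features) arrays, e.g. imputed vs. reference data.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)          # random unit direction
        total += wasserstein_distance(X @ theta, Y @ theta)
    return total / n_projections

# Hypothetical usage: a lower score means the imputed data sit closer
# to the reference distribution.
X_ref = np.random.default_rng(0).normal(size=(500, 10))
X_imp = X_ref + np.random.default_rng(1).normal(scale=0.3, size=X_ref.shape)
print(sliced_wasserstein(X_imp, X_ref, seed=42))
```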


Many artificial intelligence (AI) methods aim to classify samples of data into groups, e.g., patients with a disease vs. those without. This often requires datasets to be complete, i.e., that all data have been collected for all samples. In clinical practice, however, this is often not the case and some data can be missing. One solution is to 'complete' the dataset using a technique called imputation to replace the missing values. Assessing how well an imputation method performs is challenging, however. In this work, we demonstrate why imputation matters, develop a new method for assessing imputation quality, and show that AI models built on poorly imputed data can give different results from those we would hope for. Our findings may improve the utility and quality of AI models in the clinic.
