Results 1 - 20 of 106
1.
J Natl Cancer Inst ; 116(6): 795-799, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38419575

ABSTRACT

There is growing interest in multicancer detection tests, which identify molecular signals in the blood that indicate a potential preclinical cancer. A key stage in evaluating these tests is a prediagnostic performance study, in which investigators store specimens from asymptomatic individuals and later test stored specimens from patients with cancer and a random sample of controls to determine predictive performance. Performance metrics include rates of cancer-specific true-positive and false-positive findings and a cancer-specific positive predictive value, with the latter compared with a decision-analytic threshold. The sample size trade-off method, which trades imprecise targeting of the true-positive rate for precise targeting of a zero false-positive rate, can substantially reduce sample size while increasing the lower bound of the positive predictive value. For a 1-year follow-up, with ovarian cancer as the rarest cancer considered, the sample size trade-off method yields a sample size of 163 000 compared with a sample size of 720 000, based on standard calculations. These design and analysis recommendations should be considered in planning a specimen repository and in the prediagnostic evaluation of multicancer detection tests.


Subject(s)
Early Detection of Cancer , Neoplasms , Humans , Neoplasms/diagnosis , Neoplasms/blood , Early Detection of Cancer/methods , Biomarkers, Tumor/blood , Research Design , Sample Size , Predictive Value of Tests , Female , Ovarian Neoplasms/diagnosis , Ovarian Neoplasms/blood , False Positive Reactions
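The cancer-specific positive-predictive-value comparison described in the abstract can be sketched as follows. The numbers are illustrative placeholders, not values from the article, and the 0.1 threshold is a hypothetical decision-analytic cutoff.

```python
def ppv(tpr: float, fpr: float, prevalence: float) -> float:
    """Cancer-specific positive predictive value via Bayes' rule."""
    true_pos = tpr * prevalence
    false_pos = fpr * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative values (not from the article): 40% true-positive rate,
# 0.5% false-positive rate, 1% one-year cancer incidence.
estimated_ppv = ppv(tpr=0.40, fpr=0.005, prevalence=0.01)

# Hypothetical decision-analytic threshold: a minimum PPV of 0.1, as
# implied by accepting at most 9 false positives per true positive.
worthwhile = estimated_ppv >= 0.1
```

The prediagnostic design then asks whether the lower confidence bound of the PPV, not just its point estimate, clears the threshold.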
2.
Med Decis Making ; 44(1): 53-63, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37990924

ABSTRACT

BACKGROUND: The test tradeoff curve helps investigators decide if collecting data for risk prediction is worthwhile when risk prediction is used for treatment decisions. At a given benefit-cost ratio (the number of false-positive predictions one would trade for a true-positive prediction) or risk threshold (the probability of developing disease at indifference between treatment and no treatment), the test tradeoff is the minimum number of data collections per true positive to yield a positive maximum expected utility of risk prediction. For example, a test tradeoff of 3,000 invasive tests per true-positive prediction of cancer may suggest that risk prediction is not worthwhile. A test tradeoff curve plots test tradeoff versus benefit-cost ratio or risk threshold. The test tradeoff curve evaluates risk prediction at the optimal risk score cutpoint for treatment, which is the cutpoint of the risk score (the estimated risk of developing disease) that maximizes the expected utility of risk prediction when the receiver operating characteristic (ROC) curve is concave. METHODS: Previous methods for estimating the test tradeoff required grouping risk scores. Using individual risk scores, the new method estimates a concave ROC curve by constructing a concave envelope of ROC points, taking a slope-based moving average, minimizing a sum of squared errors, and connecting successive ROC points with line segments. RESULTS: The estimated concave ROC curve yields an estimated test tradeoff curve. Analyses of 2 synthetic data sets illustrate the method. CONCLUSION: Estimating the test tradeoff curve based on individual risk scores is straightforward to implement and more appealing than previous estimation methods that required grouping risk scores.
HIGHLIGHTS: The test tradeoff curve helps investigators decide if collecting data for risk prediction is worthwhile when risk prediction is used for treatment decisions.At a given benefit-cost ratio or risk threshold, the test tradeoff is the minimum number of data collections per true positive to yield a positive maximum expected utility of risk prediction.Unlike previous estimation methods that grouped risk scores, the method uses individual risk scores to estimate a concave ROC curve, which yields an estimated test tradeoff curve.


Subject(s)
Risk Factors , Humans , ROC Curve
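The concave-envelope step of the method above can be sketched with a standard upper-convex-hull pass over the empirical ROC points; the slope-based moving-average and least-squares steps the abstract also mentions are omitted here.

```python
def concave_roc_envelope(points):
    """Upper concave envelope of empirical ROC points (fpr, tpr).

    Endpoints (0, 0) and (1, 1) are added, and only points on the upper
    convex hull are kept, so the segments joining successive retained
    points have non-increasing slopes: a concave piecewise-linear curve.
    """
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # Pop the last hull point while it lies on or below the chord
        # from its predecessor to the incoming point p (cross product >= 0).
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```

For example, the point (0.6, 0.7) lies below the chord from (0.3, 0.9) to (1, 1) and is dropped, which is what makes the resulting curve concave.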
3.
JNCI Cancer Spectr ; 5(1)2021 02.
Article in English | MEDLINE | ID: mdl-34957374

ABSTRACT

There is growing interest in the use of polygenic risk scores based on genetic variants to predict cancer incidence. The type of metric used to evaluate the predictive performance of polygenic risk scores plays a crucial role in their interpretation. I compare 3 metrics for this evaluation: the area under the receiver operating characteristic curve (AUC), the probability of cancer in a high-risk subset divided by the prevalence of cancer in the population, which I call the subset relative risk (SRR), and the minimum test tradeoff, which is the minimum number of genetic variant ascertainments (one per person) for each correct prediction of cancer to yield a positive expected clinical utility. I show that SRR is a relabeling of AUC. I recommend the minimum test tradeoff for the evaluation of polygenic risk scores because, unlike AUC and SRR, it is directly related to the expected clinical utility.


Subject(s)
Genetic Variation , Neoplasms/genetics , ROC Curve , Risk , Breast Neoplasms/epidemiology , Breast Neoplasms/genetics , Clinical Decision Rules , Costs and Cost Analysis , Female , Humans , Neoplasms/epidemiology , Prevalence , Probability , Risk Factors
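The subset relative risk defined in the abstract can be sketched as follows; this is a toy implementation, and the `top_fraction` parameter (how the high-risk subset is chosen) is my own illustrative assumption rather than anything specified in the article.

```python
def subset_relative_risk(risks, cancers, top_fraction):
    """Probability of cancer in the top-risk subset divided by the
    population prevalence (the SRR described in the abstract)."""
    ranked = sorted(zip(risks, cancers), reverse=True)
    k = max(1, round(top_fraction * len(ranked)))
    subset_rate = sum(c for _, c in ranked[:k]) / k
    prevalence = sum(cancers) / len(cancers)
    return subset_rate / prevalence

# Toy data: the two highest-risk people are the two cancer cases, so the
# top 40% subset has cancer rate 1.0 against a prevalence of 0.4.
srr = subset_relative_risk(
    risks=[0.9, 0.8, 0.3, 0.2, 0.1],
    cancers=[1, 1, 0, 0, 0],
    top_fraction=0.4,
)
```

The abstract's point is that this quantity, like the AUC, says nothing direct about clinical utility, which is why the minimum test tradeoff is preferred.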
4.
Carcinogenesis ; 42(8): 1023-1025, 2021 08 19.
Article in English | MEDLINE | ID: mdl-34128969
5.
Stat Med ; 40(6): 1429-1439, 2021 03 15.
Article in English | MEDLINE | ID: mdl-33314199

ABSTRACT

Interval cancers are cancers detected symptomatically between screens or after the last screen. A mathematical model for the development of interval cancers can provide useful information for evaluating cancer screening. In this regard, a useful quantity is MIC, the mean duration in years of progressive preclinical cancer (PPC) that leads to interval cancers. Estimation of MIC involved extending a previous model to include three negative screens, invoking the multinomial-Poisson transformation to avoid estimating background cancer trends, and varying screening test sensitivity. Simulations show that when the true MIC is 0.5, the method yields a reasonably narrow range of estimated MICs over the range of screening test sensitivities from 0.5 to 1.0. If the lower bound on the screening test sensitivity is 0.7, the method performs considerably better even for larger MICs. The application of the method involved annual lung cancer screening in the Prostate, Lung, Colorectal, and Ovarian trial. Assuming a normal distribution for PPC duration, the estimated MIC (95% confidence interval) ranged from 0.00 (0.00, 0.34) at a screening test sensitivity of 1.0 to 0.54 (0.03, 1.00) at a screening test sensitivity of 0.5. Assuming an exponential distribution for PPC duration, which did not fit as well, the estimated MIC ranged from 0.27 (0.08, 0.49) at a screening test sensitivity of 0.5 to 0.73 (0.32, 1.26) at a screening test sensitivity of 1.0. Based on these results, investigators may wish to investigate more frequent lung cancer screening.


Subject(s)
Breast Neoplasms , Lung Neoplasms , Early Detection of Cancer , Humans , Lung Neoplasms/diagnosis , Male , Mass Screening , Negative Results
6.
J Med Screen ; 28(2): 185-192, 2021 06.
Article in English | MEDLINE | ID: mdl-32838665

ABSTRACT

OBJECTIVE: According to the Independent UK Panel on Breast Cancer Screening, the most reliable estimates of overdiagnosis for breast cancer screening come from the stop-screen trials Canada 1, Canada 2, and Malmo. The screen-interval overdiagnosis fraction is the fraction of cancers in a screening program that are overdiagnosed. We used the cumulative incidence method to estimate the screen-interval overdiagnosis fraction. Our goal was to derive confidence intervals for the estimated screen-interval overdiagnosis fraction and adjust for refusers in these trials. METHODS: We first show that the UK Panel's use of a 95% binomial confidence interval for the estimated screen-interval overdiagnosis fraction was incorrect. We then derive a correct 95% binomial-Poisson confidence interval. We also use the method of latent-class instrumental variables to adjust for refusers. RESULTS: For the Canada 1 trial, the estimated screen-interval overdiagnosis fraction was 0.23 with a 95% binomial confidence interval of (0.18, 0.27) and a 95% binomial-Poisson confidence interval of (0.04, 0.41). For the Canada 2 trial, the estimated screen-interval overdiagnosis fraction was 0.16 with a 95% binomial confidence interval of (0.12, 0.19) and a 95% binomial-Poisson confidence interval of (-0.01, 0.32). For the Malmo trial, the estimated screen-interval overdiagnosis fraction was 0.19 with a 95% binomial confidence interval of (0.15, 0.22). Adjusting for refusers, the estimated screen-interval overdiagnosis fraction was 0.26 with a 95% binomial-Poisson confidence interval of (0.03, 0.50). CONCLUSION: The correct 95% binomial-Poisson confidence intervals for the estimated screen-interval overdiagnosis fraction based on the Canada 1, Canada 2, and Malmo stop-screen trials are much wider than the previously reported incorrect 95% binomial confidence intervals. The 95% binomial-Poisson confidence intervals widen as follow-up time increases, an unappreciated downside of longer follow-up in stop-screen trials.


Subject(s)
Breast Neoplasms , Breast Neoplasms/diagnosis , Early Detection of Cancer , Female , Humans , Incidence , Mammography , Mass Screening , Medical Overuse , Uncertainty
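The point estimate from the cumulative incidence method can be sketched as below. The counts are toy numbers, not data from the trials, and the exact denominator convention (which cancers count as "in the screening program") is my own assumption; the article's contribution, the binomial-Poisson interval, is not reproduced here.

```python
def overdiagnosis_fraction(screen_arm_total, control_arm_total,
                           screen_program_cancers):
    """Cumulative incidence estimate of the screen-interval overdiagnosis
    fraction: excess cumulative cancers in the screened arm after
    follow-up, divided by cancers diagnosed in the screening program."""
    excess = screen_arm_total - control_arm_total
    return excess / screen_program_cancers

# Toy counts: 1000 vs 900 cumulative cancers across arms, with 500
# cancers diagnosed within the screening program itself.
fraction = overdiagnosis_fraction(1000, 900, 500)
```

Because the numerator is a difference of two random counts while the denominator is a third, a simple binomial interval around the ratio understates the uncertainty, which is the error in the UK Panel's intervals that the abstract describes.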
7.
Biomark Insights ; 15: 1177271920946715, 2020.
Article in English | MEDLINE | ID: mdl-32821082

ABSTRACT

We review simple methods for evaluating 4 types of biomarkers. First, we discuss the evaluation of surrogate endpoint biomarkers (to shorten a randomized trial) using 2 statistical and 3 biological criteria. Second, we discuss the evaluation of prognostic biomarkers (to predict the risk of disease) by comparing data collection costs with the anticipated net benefit of risk prediction. Third, we discuss the evaluation of predictive markers (to search for a promising subgroup in a randomized trial) using a multivariate subpopulation treatment effect pattern plot involving a risk difference or responders-only benefit function. Fourth, we discuss the evaluation of cancer screening biomarkers (to predict cancer in asymptomatic persons) using methodology to substantially reduce the sample size with stored specimens.

8.
Med Hypotheses ; 144: 110056, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32758893

ABSTRACT

The limiting step in cancer prevention is a lack of understanding of cancer biology. This limitation is exacerbated by a focus on the dominant somatic mutation theory (that driver mutations cause cancer) with little consideration of alternative theories of carcinogenesis. The recently proposed detached pericyte hypothesis explains many puzzling phenomena in cancer biology for which the somatic mutation theory offers no obvious explanation. These puzzling phenomena include foreign-body tumorigenesis, the link between denervation and cancer, tumors in transgenic mice that lack the inducing mutation, and non-genotoxic carcinogens. The detached pericyte hypothesis postulates that (1) a carcinogen or chronic inflammation causes pericytes to detach from blood vessel walls, (2) some detached pericytes develop into myofibroblasts, which alter the extracellular matrix, (3) some detached pericytes develop into mesenchymal stem cells, (4) some of the mesenchymal stem cells adhere to the altered extracellular matrix, and (5) the altered extracellular matrix disrupts regulatory controls, causing the adjacent mesenchymal stem cells to develop into tumors. Results from experimental studies support the detached pericyte hypothesis. If the detached pericyte hypothesis is correct, pericytes should play a key role in metastasis - a testable prediction. Recent experimental results confirm this prediction and motivate a proposed experiment to partially test the detached pericyte hypothesis. If the detached pericyte hypothesis is correct, it could lead to new strategies for cancer prevention.


Subject(s)
Mesenchymal Stem Cells , Pericytes , Animals , Carcinogenesis , Mice , Mice, Transgenic , Myofibroblasts
9.
Ann Intern Med ; 172(11): 775-776, 2020 06 02.
Article in English | MEDLINE | ID: mdl-32479148
10.
Article in English | MEDLINE | ID: mdl-32206075

ABSTRACT

A key aspect of the article by Lousdal on instrumental variables was a discussion of the monotonicity assumption. However, there was no mention of the history of the development of this assumption. The purpose of this letter is to note that the monotonicity assumption was introduced into the analysis of instrumental variables independently by Baker and Lindeman and by Imbens and Angrist. The letter also places the monotonicity assumption in the context of the method of latent class instrumental variables.

11.
Biometrics ; 76(4): 1383-1384, 2020 12.
Article in English | MEDLINE | ID: mdl-32108321
12.
Stat Med ; 38(22): 4453-4474, 2019 09 30.
Article in English | MEDLINE | ID: mdl-31392751

ABSTRACT

Many clinical or prevention studies involve missing or censored outcomes. Maximum likelihood (ML) methods provide a conceptually straightforward approach to estimation when the outcome is partially missing. Methods of implementing ML methods range from the simple to the complex, depending on the type of data and the missing-data mechanism. Simple ML methods for ignorable missing-data mechanisms (when data are missing at random) include complete-case analysis, complete-case analysis with covariate adjustment, survival analysis with covariate adjustment, and analysis via propensity-to-be-missing scores. More complex ML methods for ignorable missing-data mechanisms include the analysis of longitudinal dropouts via a marginal model for continuous data or a conditional model for categorical data. A moderately complex ML method for categorical data with a saturated model and either ignorable or nonignorable missing-data mechanisms is a perfect fit analysis, an algebraic method involving closed-form estimates and variances. A complex and flexible ML method with categorical data and either ignorable or nonignorable missing-data mechanisms is the method of composite linear models, a matrix method requiring specialized software. Except for the method of composite linear models, which can involve challenging matrix specifications, the implementation of these ML methods ranges in difficulty from easy to moderate.


Subject(s)
Bias , Likelihood Functions , Computer Simulation , Data Interpretation, Statistical , Humans , Models, Statistical , Propensity Score , Randomized Controlled Trials as Topic , Survival Analysis
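Two of the simpler approaches mentioned in the abstract, complete-case analysis and analysis via propensity-to-be-missing scores, can be sketched in miniature. The data are toy values of my own, and the inverse-probability weighting shown is one standard way of using a propensity-of-being-observed score, not necessarily the article's exact formulation.

```python
# Toy data: (x, y) pairs where outcome y is None when missing and the
# covariate x drives missingness (missing at random given x).
data = [(0, 1.0), (0, 1.0), (1, 3.0), (1, None)]

def complete_case_mean(pairs):
    """Mean outcome among complete cases only."""
    ys = [y for _, y in pairs if y is not None]
    return sum(ys) / len(ys)

def ipw_mean(pairs, prob_observed):
    """Weight each complete case by 1 / Pr(observed | x): a sketch of
    analysis via propensity-to-be-missing scores."""
    num = sum(y / prob_observed(x) for x, y in pairs if y is not None)
    den = sum(1.0 / prob_observed(x) for x, y in pairs if y is not None)
    return num / den

# Hypothetical known observation probabilities: y is always observed
# when x = 0 but only half the time when x = 1.
p_obs = lambda x: 1.0 if x == 0 else 0.5

cc = complete_case_mean(data)  # underweights the x = 1 group
adj = ipw_mean(data, p_obs)    # reweighting restores the x = 1 group
```

With these toy values the complete-case mean is 5/3, while the weighted mean recovers 2.0, the mean one would compute if the missing x = 1 outcome equaled the observed one.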
13.
Ann Intern Med ; 170(9): 664-665, 2019 05 07.
Article in English | MEDLINE | ID: mdl-31060070

Subject(s)
Biostatistics
14.
Med Decis Making ; 39(5): 489-490, 2019 07.
Article in English | MEDLINE | ID: mdl-31104590
16.
Med Decis Making ; 39(2): 130-136, 2019 02.
Article in English | MEDLINE | ID: mdl-30658540

ABSTRACT

BACKGROUND: Studies to validate a cancer prediction model based on cancer screening markers collected in stored specimens from asymptomatic persons typically require large specimen collection sample sizes. A standard sample size calculation targets a true-positive rate (TPR) of 0.8 with a 2.5% lower bound of 0.7 at a false-positive rate (FPR) of 0.01 with a 5% upper bound of 0.03. If the probability of developing cancer during the study is P = 0.01, the specimen collection sample size based on the standard calculation is 7600. METHODS: The strategy to reduce the specimen collection sample size is to decrease both the lower bound of TPR and the upper bound of FPR while keeping a positive lower bound on the anticipated clinical utility. RESULTS: The new sample size calculation targets a TPR of 0.4 with a 2.5% lower bound of 0.10 and an FPR of 0.0 with a 5% upper bound of 0.005. With P = 0.01, the specimen collection sample size based on the new calculation is 1800 instead of 7600. LIMITATIONS: The new sample size calculation requires a minimum benefit-cost ratio (number of false positives traded for a true positive). With P = 0.01, the minimum benefit-cost ratio is 5, which is plausible in many studies. CONCLUSION: In validation studies for cancer screening markers, the strategy can substantially reduce the specimen collection sample size, substantially reducing costs and making some otherwise infeasible studies feasible.


Subject(s)
Cost-Benefit Analysis , Early Detection of Cancer , False Positive Reactions , Mass Screening , Models, Biological , Neoplasms/diagnosis , Sample Size , Biomarkers , Humans , Probability , Reproducibility of Results , Research Design , Validation Studies as Topic
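The contrast between the standard and reduced designs can be sketched with a normal-approximation case count. This is a rough version of my own; the article's figures of 7600 and 1800 presumably rest on exact binomial bounds, so the totals below differ from the abstract's.

```python
import math

def cases_needed(p_target, p_bound, z=1.96):
    """Cancer cases needed so a 95% Wald interval around p_target just
    reaches p_bound (normal approximation only; an exact binomial
    calculation, as the article presumably uses, gives larger totals)."""
    half_width = abs(p_target - p_bound)
    return math.ceil(z ** 2 * p_target * (1 - p_target) / half_width ** 2)

P = 0.01  # probability of developing cancer during the study

# Standard design: TPR 0.8 with 2.5% lower bound 0.7 (half-width 0.1).
cases_standard = cases_needed(0.8, 0.7)
specimens_standard = cases_standard / P

# Reduced design: TPR 0.4 with 2.5% lower bound 0.10 (half-width 0.3).
cases_reduced = cases_needed(0.4, 0.10)
specimens_reduced = cases_reduced / P
```

Because cases are rare (P = 0.01), the required number of cancer cases, not controls, drives the total specimen count, which is why widening the TPR interval shrinks the study so dramatically.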
17.
Med Decis Making ; 38(8): 903, 2018 11.
Article in English | MEDLINE | ID: mdl-30403581
20.
Pancreas ; 47(2): 135-141, 2018 02.
Article in English | MEDLINE | ID: mdl-29346214

ABSTRACT

Pancreatic cancer is the third leading cause of cancer death in the United States, and the 5-year relative survival for patients diagnosed with pancreatic cancer is less than 10%. Early intervention is the key to a better survival outcome. Currently, there are no biomarkers that can reliably detect pancreatic cancer at an early stage or identify precursors that are destined to progress to malignancy. The National Cancer Institute in partnership with the Kenner Family Research Fund and the Pancreatic Cancer Action Network convened a Data Jamboree on Biomarkers workshop on December 5, 2016, to discuss and evaluate existing or newly developed biomarkers and imaging methods for early detection of pancreatic cancer. The primary goal of this workshop was to determine if there are any promising biomarkers for early detection of pancreatic cancer that are ready for clinical validation. The Alliance of Pancreatic Cancer Consortia for Biomarkers for Early Detection, formed under the auspices of this workshop, will provide the common platform and the resources necessary for validation. Although none of the biomarkers evaluated seemed ready for a large-scale biomarker validation trial, a number of them had sufficiently high sensitivity and specificity to warrant additional research, especially if combined with other biomarkers to form a panel.


Subject(s)
Biomarkers, Tumor/blood , Early Detection of Cancer/methods , Pancreatic Neoplasms/blood , Pancreatic Neoplasms/diagnosis , Biomarkers, Tumor/genetics , Genetic Predisposition to Disease/genetics , Humans , Mutation , Pancreatic Neoplasms/genetics , Sensitivity and Specificity