Results 1 - 3 of 3
1.
Am Heart J ; 274: 23-31, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38701962

ABSTRACT

Clinicians often suspect that a treatment effect can vary across individuals. However, they usually lack "evidence-based" guidance regarding potential heterogeneity of treatment effects (HTE). Potentially actionable HTE is rarely discovered in clinical trials and is widely believed (or rationalized) by researchers to be rare. Conventional statistical methods to test for possible HTE are extremely conservative and tend to reinforce this belief. In truth, though, there is no realistic way to know whether a common, or average, effect estimated from a clinical trial is relevant for all, or even most, patients. This absence of evidence, misinterpreted as evidence of absence, may result in suboptimal treatment for many individuals. We first summarize the historical context in which current statistical methods for randomized controlled trials (RCTs) were developed, focusing on the conceptual and technical limitations that shaped, and restricted, these methods. In particular, we explain how the common-effect assumption came to be virtually unchallenged. Second, we propose a simple graphical method for exploratory data analysis that can provide useful visual evidence of possible HTE. The basic approach is to display the complete distribution of outcome data rather than relying uncritically on simple summary statistics. Modern graphical methods, unavailable when statistical methods were first formulated a century ago, now make fine-grained interrogation of the data feasible. We propose comparing observed treatment-group data to "pseudo data" engineered to mimic what would be expected under a particular HTE model, such as the common-effect model. A clear discrepancy between the distributions of the common-effect pseudo data and the actual treatment-group data provides prima facie evidence of HTE and motivates additional confirmatory investigation. Artificial data are used to illustrate the practical implications of ignoring heterogeneity and to show how the graphical method can be useful.
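To make the pseudo-data idea concrete, here is a minimal sketch in Python. It is not the authors' code: the constant additive shift as the common-effect model, the simulated responder/non-responder split, and all variable names are illustrative assumptions. The sketch builds common-effect pseudo data by shifting every control outcome by the estimated average treatment effect, then overlays the empirical CDFs of the pseudo data and the observed treated outcomes.

```python
# Sketch of the pseudo-data comparison described in the abstract (assumed
# details: continuous outcome, constant additive shift as the common-effect
# model; variable names are illustrative, not from the paper).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Artificial data: control outcomes, and a treatment arm in which only half
# of the patients respond (a simple form of HTE).
control = rng.normal(loc=0.0, scale=1.0, size=500)
responders = rng.normal(loc=2.0, scale=1.0, size=250)
nonresponders = rng.normal(loc=0.0, scale=1.0, size=250)
treated = np.concatenate([responders, nonresponders])

# Common-effect pseudo data: shift every control outcome by the estimated
# average treatment effect, as if the effect were identical for everyone.
ate_hat = treated.mean() - control.mean()
pseudo_treated = control + ate_hat

# Compare the full distributions, not just the means. Under a true common
# additive effect the two curves should roughly coincide (up to sampling
# noise); the bimodal treated arm here visibly departs from the unimodal
# pseudo data.
for data, label in [(treated, "observed treated"),
                    (pseudo_treated, "common-effect pseudo")]:
    xs = np.sort(data)
    plt.plot(xs, np.arange(1, len(xs) + 1) / len(xs), label=label)
plt.xlabel("outcome")
plt.ylabel("empirical CDF")
plt.legend()
plt.show()
```

Displaying full empirical CDFs (or histograms) rather than comparing means is the essential point: a bimodal treated arm and a unimodal pseudo arm can share the same mean yet look nothing alike.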


Subject(s)
Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/methods , Evidence-Based Medicine/methods , Treatment Outcome , Data Interpretation, Statistical , Treatment Effect Heterogeneity
2.
Clin Trials ; 12(4): 357-64, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26062595

ABSTRACT

BACKGROUND: There is currently much interest in generating more individualized estimates of treatment effects. However, traditional statistical methods are not well suited to this task. Post hoc subgroup analyses of clinical trials are fraught with methodological problems. We suggest that the alternative research paradigm of predictive analytics, widely used in many business contexts, can be adapted to help. METHODS: We compare the statistical and analytics perspectives and suggest that predictive modeling should often replace subgroup analysis. We then introduce a new approach, cadit modeling, which can be used to identify and test individualized causal effects. RESULTS: The cadit technique is particularly useful when selecting from among a large number of potential predictors. We describe a new variable-selection algorithm that has been applied in conjunction with cadit. The cadit approach is illustrated through a reanalysis of data from the Randomized Aldactone Evaluation Study trial, which studied the efficacy of spironolactone in heart-failure patients. The trial was successful, but a serious adverse effect (hyperkalemia) was subsequently discovered. Our reanalysis suggests that it may be possible to predict the degree of hyperkalemia with a logistic model and to identify a subgroup in which the effect is negligible. CONCLUSION: Cadit modeling is a promising alternative to subgroup analysis. Cadit regression is relatively straightforward to implement, generates results that are easy to present and explain, and integrates readily with many variable-selection algorithms.
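The abstract does not spell out the cadit construction, but a transformed-outcome device in this spirit works as follows for a binary outcome under 1:1 randomization: define z = 2y - 1 for treated patients and z = 1 - 2y for controls, so that E[z | x] equals the conditional treatment effect; regressing z on baseline covariates then yields individualized effect estimates. Below is a minimal sketch under those assumptions; the simulated data and all names are illustrative, not the paper's specification.

```python
# A minimal sketch of the transformed-outcome ("cadit"-style) idea, assuming
# a binary outcome and 1:1 randomization; this is an illustration, not the
# authors' exact model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 3))       # baseline covariates
t = rng.integers(0, 2, size=n)    # randomized 1:1 treatment indicator

# Simulated outcome: treatment helps only when x[:, 0] is positive,
# i.e., a genuinely heterogeneous effect.
effect = 0.3 * (x[:, 0] > 0)
p = 0.4 + effect * t
y = rng.binomial(1, p)

# Transformed outcome: z = 2y - 1 for treated, 1 - 2y for control.
# With 1:1 randomization, E[z | x] = P(y=1 | x, treated) - P(y=1 | x, control),
# the conditional treatment effect.
z = np.where(t == 1, 2 * y - 1, 1 - 2 * y)

model = LinearRegression().fit(x, z)   # regress z on covariates
tau_hat = model.predict(x)             # individualized effect estimates

print("mean estimated effect:", tau_hat.mean())
print("estimated effect, x0 > 0:", tau_hat[x[:, 0] > 0].mean())
print("estimated effect, x0 <= 0:", tau_hat[x[:, 0] <= 0].mean())
```

Because the transformed outcome z can be fed to essentially any regression procedure, any off-the-shelf variable-selection algorithm can be applied to it directly, which is consistent with the abstract's claim that cadit meshes readily with variable selection.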


Subject(s)
Data Interpretation, Statistical , Randomized Controlled Trials as Topic , Regression Analysis , Algorithms , Diuretics/therapeutic use , Female , Forecasting , Heart Failure/drug therapy , Humans , Male , Models, Statistical , Randomized Controlled Trials as Topic/statistics & numerical data , Spironolactone/therapeutic use , Treatment Outcome
3.
Clin Trials ; 6(2): 109-18, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19342462

ABSTRACT

BACKGROUND: Although the superior internal validity of the randomized clinical trial (RCT) is invaluable for establishing causality, generalizability is far from guaranteed. In particular, strict selection criteria intended to maximize treatment efficacy and safety can impair external validity. This problem is widely acknowledged in principle but sometimes ignored in practice, with considerable consequences for treatment options. PURPOSE: We demonstrate how selection of patients for an RCT can bias the results when the treatment effect varies across individuals. Indeed, not only the magnitude, but even the direction, of the causal effect found in an RCT can differ from the causal effect in the target population. METHODS: A counterfactual model is developed to represent the selection process explicitly. This simple extension of the standard counterfactual model is used to explore the implications of restrictive exclusion criteria intended to eliminate high-risk individuals. The counterintuitive findings of a recent FDA meta-analysis of suicidality in pediatric populations treated with antidepressant medications are interpreted in light of this counterfactual model. RESULTS: When the causal effect of an intervention varies across individuals, the potential for selection bias (in the sense of a threat to external validity) can be serious. In particular, we demonstrate that the stricter the inclusion/exclusion criteria, the greater the potential inflation of the relative risk. A critical factor in determining bias is the extent to which individuals with differing types of causal effects can be distinguished prior to sampling. Furthermore, we propose methods that can sometimes help identify the existence of such bias in an actual study. When applied to the FDA meta-analysis of pediatric suicidality in RCTs of modern antidepressant medications, these methods suggest that the elevated risk observed may be an artifact of selection bias. LIMITATIONS: Real-life scenarios are generally more complex than the counterfactual model presented here. Future modeling efforts are needed to refine and extend our approach. CONCLUSIONS: When variation of treatment effects across individuals is plausible, lack of generalizability should be a serious concern. Therefore, the external validity of an RCT needs to be carefully considered both in its design and in the interpretation of its results, especially when the study can influence regulatory decisions about drug safety. RCTs should not automatically be considered definitive, especially when their results conflict with those of observational studies. Whenever possible, empirical evidence of bias resulting from sample selection should be obtained and taken into account.
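A small simulation can illustrate the inflation mechanism. This is an assumed toy model, not the paper's: it uses the standard counterfactual response types ("doomed", "causal", "immune"; the "preventive" type is omitted for simplicity) for a rare adverse event, plus a noisy screening score that excludes most high-baseline-risk individuals.

```python
# Illustrative simulation (not the paper's actual model): potential outcomes
# for a rare adverse event, with three counterfactual response types. Strict
# screening removes most "doomed" (high-baseline-risk) individuals, which
# can inflate the relative risk observed in the trial sample.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Causal types: doomed (y0=y1=1), causal (y0=0, y1=1), immune (y0=y1=0).
types = rng.choice(["doomed", "causal", "immune"], size=n, p=[0.02, 0.01, 0.97])
y0 = (types == "doomed").astype(int)
y1 = ((types == "doomed") | (types == "causal")).astype(int)

def relative_risk(mask):
    """RR = P(y1=1)/P(y0=1) within the selected subpopulation."""
    return y1[mask].mean() / y0[mask].mean()

# Target population: everyone. Here RR = 0.03 / 0.02 = 1.5.
print("population RR:", relative_risk(np.ones(n, dtype=bool)))

# Screening: a noisy risk score that flags most doomed individuals, who are
# then excluded from the trial by strict inclusion/exclusion criteria.
risk_score = y0 + rng.normal(scale=0.5, size=n)
included = risk_score < 0.8
print("trial-sample RR:", relative_risk(included))   # noticeably above 1.5
```

Because screening removes "doomed" individuals (who contribute to the denominator P(y0 = 1)) far more often than "causal" ones, the relative risk in the selected sample exceeds the population value, consistent with the abstract's claim that stricter criteria can inflate relative risk.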


Subject(s)
Antidepressive Agents/adverse effects , Decision Support Techniques , Depressive Disorder/drug therapy , Models, Theoretical , Patient Selection , Randomized Controlled Trials as Topic , Suicide/statistics & numerical data , Adolescent , Child , Child, Preschool , Depressive Disorder/epidemiology , Humans , Meta-Analysis as Topic , Reproducibility of Results , Research Design , Risk , Selection Bias , Treatment Outcome