Results 1 - 20 of 6,093
1.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837900

ABSTRACT

Randomization-based inference using the Fisher randomization test allows for the computation of Fisher-exact P-values, making it an attractive option for the analysis of small, randomized experiments with non-normal outcomes. Two common test statistics used to perform Fisher randomization tests are the difference-in-means between the treatment and control groups and the covariate-adjusted version of the difference-in-means using analysis of covariance. Modern computing allows for fast computation of the Fisher-exact P-value, but confidence intervals have typically been obtained by inverting the Fisher randomization test over a range of possible effect sizes. The test inversion procedure is computationally expensive, limiting the usage of randomization-based inference in applied work. A recent paper by Zhu and Liu developed a closed form expression for the randomization-based confidence interval using the difference-in-means statistic. We develop an important extension of Zhu and Liu to obtain a closed form expression for the randomization-based covariate-adjusted confidence interval and give practitioners a sufficiency condition that can be checked using observed data and that guarantees that these confidence intervals have correct coverage. Simulations show that our procedure generates randomization-based covariate-adjusted confidence intervals that are robust to non-normality and that can be calculated in nearly the same time as it takes to calculate the Fisher-exact P-value, thus removing the computational barrier to performing randomization-based inference when adjusting for covariates. We also demonstrate our method on a re-analysis of phase I clinical trial data.
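The enumeration at the heart of a Fisher randomization test is simple to sketch. The pure-Python example below (hypothetical data; the plain difference-in-means statistic, not the paper's closed-form confidence interval) enumerates every possible treatment assignment to obtain the exact P-value:

```python
from itertools import combinations

def fisher_randomization_test(treated, control):
    # Exact Fisher randomization p-value for the sharp null of no effect,
    # using the absolute difference-in-means as the test statistic.
    pooled = treated + control
    n, n_t = len(pooled), len(treated)
    observed = sum(treated) / n_t - sum(control) / len(control)
    total = sum(pooled)
    extreme = n_perm = 0
    for idx in combinations(range(n), n_t):  # every possible assignment
        t_sum = sum(pooled[i] for i in idx)
        stat = t_sum / n_t - (total - t_sum) / (n - n_t)
        if abs(stat) >= abs(observed) - 1e-12:
            extreme += 1
        n_perm += 1
    return extreme / n_perm

# Hypothetical outcomes from a tiny randomized experiment.
p_value = fisher_randomization_test([3.1, 2.8, 3.5], [1.0, 1.2, 0.9])
```

With 3 treated and 3 control units there are only C(6, 3) = 20 assignments, so full enumeration is instant; for larger trials one samples assignments instead.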


Subject(s)
Computer Simulation , Confidence Intervals , Humans , Biometry/methods , Models, Statistical , Data Interpretation, Statistical , Random Allocation , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods
2.
BMC Med Res Methodol ; 24(1): 130, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840047

ABSTRACT

BACKGROUND: Faced with the high cost and limited efficiency of classical randomized controlled trials, researchers are increasingly applying adaptive designs to speed up the development of new drugs. However, how adaptive designs are applied in drug randomized controlled trials (RCTs), and whether their reporting is adequate, is unclear. Thus, this study aimed to summarize the epidemiological characteristics of the relevant trials and assess their reporting quality using the Adaptive designs CONSORT Extension (ACE) checklist. METHODS: We searched MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), and ClinicalTrials.gov from inception to January 2020. We included drug RCTs that explicitly claimed to be adaptive trials or used any type of adaptive design. We extracted the epidemiological characteristics of included studies to summarize their application of adaptive design. We assessed the reporting quality of the trials using the ACE checklist. Univariable and multivariable linear regression models were used to assess the association of four prespecified factors with the quality of reporting. RESULTS: Our survey included 108 adaptive trials. We found that adaptive design has been increasingly applied over the years, and was most commonly used in phase II trials (n = 45, 41.7%). The primary reasons for using adaptive design were to speed up the trial and facilitate decision-making (n = 24, 22.2%), maximize the benefit to participants (n = 21, 19.4%), and reduce the total sample size (n = 15, 13.9%). Group sequential design (n = 63, 58.3%) was the most frequently applied method, followed by adaptive randomization design (n = 26, 24.1%) and adaptive dose-finding design (n = 24, 22.2%). Adherence to the 26 topics of the ACE checklist ranged from 7.4% to 99.1%, with eight topics being adequately reported (i.e., level of adherence ≥ 80%) and eight others being poorly reported (i.e., level of adherence ≤ 30%). In addition, among the seven items specific to adaptive trials, three were poorly reported: accessibility of the statistical analysis plan (n = 8, 7.4%), measures for confidentiality (n = 14, 13.0%), and assessments of similarity between interim stages (n = 25, 23.1%). The mean score on the ACE checklist was 13.9 (standard deviation [SD], 3.5) out of 26. According to our multivariable regression analysis, more recently published trials (estimated β = 0.14, p < 0.01) and multicenter trials (estimated β = 2.22, p < 0.01) were associated with better reporting. CONCLUSION: Use of adaptive designs has increased over the years, primarily in early-phase drug trials. However, the reporting quality of adaptive trials is suboptimal, and substantial efforts are needed to improve it.


Subject(s)
Randomized Controlled Trials as Topic , Research Design , Humans , Research Design/standards , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/standards , Checklist/methods , Checklist/standards , Clinical Trials, Phase II as Topic/methods , Clinical Trials, Phase II as Topic/statistics & numerical data , Clinical Trials, Phase II as Topic/standards
3.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38801258

ABSTRACT

In comparative studies, covariate balance and sequential allocation schemes have attracted growing academic interest. Although many theoretically justified adaptive randomization methods achieve covariate balance, they often allocate patients in pairs or groups. To better meet practical requirements in which, for economic or ethical reasons, clinicians cannot wait for other participants before assigning the current patient, we propose a method that randomizes patients individually and sequentially. The proposed method conceptually separates the covariate imbalance, measured by the newly proposed modified Mahalanobis distance, from the marginal imbalance, that is, the sample size difference between the two groups, and it minimizes them with an explicit priority order. Compared with existing sequential randomization methods, the proposed method achieves the best possible covariate balance while directly maintaining the marginal balance, offering more control of the randomization process. We demonstrate the superior performance of the proposed method through a wide range of simulation studies and real data analysis, and we also establish theoretical guarantees for the proposed method in terms of both the convergence of the imbalance measure and the subsequent treatment effect estimation.
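As a rough illustration of individual, sequential assignment under a covariate-balance criterion, the sketch below biases a coin toward the arm that reduces imbalance, with marginal balance given priority. The imbalance measure (sum of squared covariate-mean differences), the forcing threshold, and the bias probability are all simplifying assumptions, not the paper's modified-Mahalanobis-distance procedure:

```python
import random

def assign_next(patient, arm_a, arm_b, p_bias=0.85, rng=random):
    # Sequentially assign one patient. Marginal balance takes priority:
    # if arm sizes differ by 2 or more, force the smaller arm. Otherwise,
    # bias a coin toward the assignment that reduces covariate imbalance
    # (squared mean differences stand in for the modified Mahalanobis
    # distance of the paper).
    if len(arm_a) - len(arm_b) >= 2:
        return "B"
    if len(arm_b) - len(arm_a) >= 2:
        return "A"

    def imbalance(a, b):
        if not a or not b:
            return 0.0
        mean = lambda g, j: sum(x[j] for x in g) / len(g)
        return sum((mean(a, j) - mean(b, j)) ** 2 for j in range(len(patient)))

    imb_a = imbalance(arm_a + [patient], arm_b)  # imbalance if assigned to A
    imb_b = imbalance(arm_a, arm_b + [patient])  # imbalance if assigned to B
    if imb_a < imb_b:
        return "A" if rng.random() < p_bias else "B"
    if imb_b < imb_a:
        return "B" if rng.random() < p_bias else "A"
    return "A" if rng.random() < 0.5 else "B"
```

Each patient is assigned on arrival using only the covariates accrued so far, which is the practical constraint the abstract emphasizes.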


Subject(s)
Computer Simulation , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods , Biometry/methods , Models, Statistical , Data Interpretation, Statistical , Random Allocation , Sample Size , Algorithms
4.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38742906

ABSTRACT

Semicompeting risks refer to the phenomenon that the terminal event (such as death) can censor the nonterminal event (such as disease progression) but not vice versa. The treatment effect on the terminal event can be delivered either directly following the treatment or indirectly through the nonterminal event. We consider two strategies to decompose the total effect into a direct effect and an indirect effect under the framework of mediation analysis in completely randomized experiments, by adjusting the prevalence and hazard of nonterminal events, respectively. They require slightly different assumptions on cross-world quantities to achieve identifiability. We establish asymptotic properties for the estimated counterfactual cumulative incidences and decomposed treatment effects. We illustrate the subtle difference between these two decompositions through simulation studies and two real-data applications in the Supplementary Materials.


Subject(s)
Computer Simulation , Humans , Models, Statistical , Risk , Randomized Controlled Trials as Topic/statistics & numerical data , Mediation Analysis , Treatment Outcome , Biometry/methods
5.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38819309

ABSTRACT

Doubly adaptive biased coin design (DBCD), a response-adaptive randomization scheme, aims to skew subject assignment probabilities based on accrued responses for ethical considerations. Recent years have seen substantial advances in understanding DBCD's theoretical properties, assuming correct model specification for the responses. However, concerns have been raised about the impact of model misspecification on its design and analysis. In this paper, we assess the robustness to both design model misspecification and analysis model misspecification under DBCD. On one hand, we confirm that the consistency and asymptotic normality of the allocation proportions can be preserved, even when the responses follow a distribution other than the one imposed by the design model during the implementation of DBCD. On the other hand, we extensively investigate three commonly used linear regression models for estimating and inferring the treatment effect, namely difference-in-means, analysis of covariance (ANCOVA) I, and ANCOVA II. By allowing these regression models to be arbitrarily misspecified, thereby not reflecting the true data generating process, we derive the consistency and asymptotic normality of the treatment effect estimators evaluated from the three models. The asymptotic properties show that the ANCOVA II model, which takes covariate-by-treatment interaction terms into account, yields the most efficient estimator. These results can provide theoretical support for using DBCD in scenarios involving model misspecification, thereby promoting the widespread application of this randomization procedure.
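A common concrete choice of DBCD allocation function is the Hu-Zhang family; the sketch below (with γ = 2, a standard tuning value) shows how the next-assignment probability is skewed back toward the target allocation ρ whenever the current arm-1 proportion drifts away from it:

```python
def dbcd_probability(x, rho, gamma=2.0):
    # Hu-Zhang DBCD allocation function: probability of assigning the
    # next subject to arm 1, given the current arm-1 proportion x and
    # the target allocation rho. Larger gamma pulls harder toward rho.
    if x <= 0.0:
        return 1.0
    if x >= 1.0:
        return 0.0
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den
```

When x = ρ the function returns exactly ρ; when x falls below (above) ρ the assignment probability rises above (drops below) ρ, which is the self-correcting behavior whose asymptotics the paper studies under misspecification. In practice ρ itself is estimated from accrued responses.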


Subject(s)
Models, Statistical , Random Allocation , Humans , Computer Simulation , Randomized Controlled Trials as Topic/statistics & numerical data , Linear Models , Biometry/methods , Data Interpretation, Statistical , Bias , Analysis of Variance , Research Design
6.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38804219

ABSTRACT

Sequential multiple assignment randomized trials (SMARTs) are the gold standard for estimating optimal dynamic treatment regimes (DTRs), but they are costly and require a large sample size. We introduce the multi-stage augmented Q-learning estimator (MAQE) to improve the efficiency of estimation of optimal DTRs by augmenting SMART data with observational data. Our motivating example comes from the Back Pain Consortium, where one of the overarching aims is to learn how to tailor treatments for chronic low back pain to individual patient phenotypes, knowledge that is currently lacking clinically. The Consortium-wide collaborative SMART and the observational studies within the Consortium collect data on the same participant phenotypes, treatments, and outcomes at multiple time points, which can easily be integrated. Previously published single-stage augmentation methods for integration of trial and observational study (OS) data were adapted to estimate optimal DTRs from SMARTs using Q-learning. Simulation studies show that the MAQE, which integrates phenotype, treatment, and outcome information from multiple studies over multiple time points, estimates the optimal DTR more accurately and has a higher average value than a comparable Q-learning estimator without augmentation. We demonstrate that this improvement is robust to a wide range of trial and OS sample sizes, addition of noise variables, and effect sizes.
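As a stylized illustration of the Q-learning backbone (without the augmentation that defines the MAQE), the sketch below performs tabular backward induction over two decision stages on hypothetical trajectories; real applications use regression models rather than tables:

```python
from collections import defaultdict

def q_learning_two_stage(trajectories):
    # Tabular backward induction for two decision stages. Each trajectory
    # is (state1, action1, state2, action2, final_reward). Stage-2
    # Q-values are empirical means; stage-1 Q-values back up the value of
    # acting optimally at stage 2.
    q2_sum, q2_n = defaultdict(float), defaultdict(int)
    for s1, a1, s2, a2, r in trajectories:
        q2_sum[(s2, a2)] += r
        q2_n[(s2, a2)] += 1
    q2 = {k: q2_sum[k] / q2_n[k] for k in q2_sum}

    def v2(state):  # value of the best stage-2 action in this state
        vals = [q for (s, _a), q in q2.items() if s == state]
        return max(vals) if vals else 0.0

    q1_sum, q1_n = defaultdict(float), defaultdict(int)
    for s1, a1, s2, a2, r in trajectories:
        q1_sum[(s1, a1)] += v2(s2)
        q1_n[(s1, a1)] += 1
    q1 = {k: q1_sum[k] / q1_n[k] for k in q1_sum}
    return q1, q2
```

The estimated optimal DTR then recommends, at each stage and state, the action with the highest Q-value; the MAQE augments these regressions with observational data to reduce variance.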


Subject(s)
Computer Simulation , Low Back Pain , Observational Studies as Topic , Randomized Controlled Trials as Topic , Humans , Observational Studies as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/statistics & numerical data , Low Back Pain/therapy , Sample Size , Treatment Outcome , Models, Statistical , Biometry/methods
7.
Crit Care ; 28(1): 184, 2024 05 28.
Article in English | MEDLINE | ID: mdl-38807143

ABSTRACT

BACKGROUND: The use of composite outcome measures (COM) in clinical trials is increasing. Whilst their use is associated with benefits, several limitations have been highlighted, and there is limited literature exploring their use within critical care. The primary aim of this study was to evaluate the use of COM in high-impact critical care trials, and to compare study parameters (including sample size, statistical significance, and consistency of effect estimates) in trials using composite versus non-composite outcomes. METHODS: A systematic review of 16 high-impact journals was conducted. Randomised controlled trials published between 2012 and 2022 reporting a patient-important outcome and involving critical care patients were included. RESULTS: Of 8271 trials screened, 194 were included. A COM was used in 39.1% of all trials, and this proportion increased over time. Of those using a COM, only 52.6% explicitly described the outcome as composite. The median number of components was 2 (IQR 2-3). Trials using a COM recruited fewer participants (median 409 [IQR 198.8-851.5] vs 584 [300-1566]; p = 0.004), and their use was not associated with increased rates of statistical significance (19.7% vs 17.8%, p = 0.380). Predicted effect sizes were overestimated in all but 6 trials. For studies using a COM, the effect estimates were consistent across all components in 43.4% of trials, and 93% of COM included components that were not patient-important. CONCLUSIONS: COM are increasingly used in critical care trials; however, effect estimates are frequently inconsistent across COM components, confounding interpretation of outcomes. The use of COM was associated with smaller sample sizes and no increased likelihood of statistically significant results. Many of the limitations inherent to the use of COM are relevant to critical care research.


Subject(s)
Critical Care , Outcome Assessment, Health Care , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Critical Care/methods , Critical Care/statistics & numerical data , Critical Care/standards , Outcome Assessment, Health Care/statistics & numerical data , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/standards , Journal Impact Factor
8.
BMC Med Res Methodol ; 24(1): 121, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38822242

ABSTRACT

BACKGROUND: Inequities in health access and outcomes exist between Indigenous and non-Indigenous populations. Embedded pragmatic randomized, controlled trials (ePCTs) can test the real-world effectiveness of health care interventions. Assessing readiness for an ePCT, with tools such as the Readiness Assessment for Pragmatic Trials (RAPT) model, is an important component. Although equity must be explicitly incorporated in the design, testing, and widespread implementation of any health care intervention, RAPT does not explicitly consider it. This study aimed to identify the adaptations necessary for applying the RAPT tool in ePCTs with Indigenous communities. METHODS: In this mixed-methods study, we surveyed and interviewed participants (researchers with experience in research involving Indigenous communities) over three phases (July-December 2022) to explore the appropriateness and recommended adaptations of the current RAPT domains and to identify new domains that would be appropriate to include. We thematically analyzed responses and used an iterative process to modify RAPT. RESULTS: The 21 participants identified that RAPT needed to be modified to strengthen readiness assessment in Indigenous research. In addition, five new domains were proposed to support Indigenous communities' power within the research process: Indigenous Data Sovereignty; Acceptability - Indigenous Communities; Risk of Research; Research Team Experience; and Established Partnership. We propose a modified tool, RAPT-Indigenous (RAPT-I), for use in research with Indigenous communities to increase the robustness and cultural appropriateness of readiness assessment for ePCTs. In addition to producing a tool for use, this work outlines a methodological approach to adapting research tools for use in and with Indigenous communities, drawing on the experience of researchers who are part of, and/or working with, Indigenous communities to undertake interventional research, as well as those with expertise in health equity, implementation science, and public health. CONCLUSION: RAPT-I has the potential to provide a useful framework for readiness assessment prior to ePCTs in Indigenous communities. RAPT-I also has potential use by bodies charged with critically reviewing proposed pragmatic research, including funding and ethics review boards.


Subject(s)
Indigenous Peoples , Pragmatic Clinical Trials as Topic , Humans , Indigenous Peoples/statistics & numerical data , Pragmatic Clinical Trials as Topic/methods , Health Services, Indigenous/standards , Surveys and Questionnaires , Research Design , Health Services Accessibility/statistics & numerical data , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data
9.
BMC Med Res Methodol ; 24(1): 99, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38678213

ABSTRACT

PURPOSE: In the literature, the propriety of the meta-analytic treatment effect produced by combining randomized controlled trials (RCT) and non-randomized studies (NRS) is questioned, given the inherent confounding in NRS that may bias the meta-analysis. The current study compared an implicitly principled pooled Bayesian meta-analytic treatment effect with that of frequentist pooling of RCT and NRS to determine how well each approach handled the NRS bias. MATERIALS & METHODS: Binary-outcome critical-care meta-analyses combining RCT and NRS, reflecting the importance of such outcomes in critical-care practice, were identified electronically. The Bayesian pooled treatment effect and 95% credible intervals (BCrI), posterior model probabilities indicating model plausibility, and Bayes factors (BF) were estimated using an informative heavy-tailed heterogeneity prior (half-Cauchy). Preference for pooling of RCT and NRS was indicated by Bayes factors > 3, and < 0.333 for the converse. All pooled frequentist treatment effects and 95% confidence intervals (FCI) were re-estimated using the popular DerSimonian-Laird (DSL) random effects model. RESULTS: Fifty meta-analyses were identified (2009-2021), reporting pooled estimates in 44; 29 were pharmaceutical-therapeutic and 21 were non-pharmaceutical-therapeutic. Re-computed pooled DSL FCI excluded the null (OR or RR = 1) in 86% (43/50). In 18 meta-analyses there was agreement between FCI and BCrI in excluding the null. In 23 meta-analyses where FCI excluded the null, BCrI included the null. BF supported a pooled model in 27 meta-analyses and separate models in 4. The highest density of the posterior model probabilities for 0.333 < BF < 1 was 0.8. CONCLUSIONS: In the current meta-analytic cohort, an integrated and multifaceted Bayesian approach gave support to including NRS in a pooled-estimate model. Conversely, caution should attend the reporting of naïve frequentist pooled RCT-and-NRS meta-analytic treatment effects.
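The DerSimonian-Laird model used for the frequentist re-estimation has a simple closed form; a pure-Python sketch with hypothetical log-scale study effects and within-study variances:

```python
import math

def dersimonian_laird(effects, variances):
    # DerSimonian-Laird random-effects pooling (e.g. on the log-OR scale):
    # method-of-moments between-study variance tau^2, then inverse-variance
    # weights using (within + between)-study variance.
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical three-study meta-analysis on the log scale.
est, ci, tau2 = dersimonian_laird([0.2, -0.1, 0.4], [0.04, 0.05, 0.06])
```

The Bayesian analogue in the abstract replaces the plug-in tau² with a full posterior under a half-Cauchy heterogeneity prior, which is what widens the credible intervals relative to the DSL confidence intervals.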


Subject(s)
Bayes Theorem , Meta-Analysis as Topic , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Non-Randomized Controlled Trials as Topic/methods , Bias , Models, Statistical
10.
BMC Med Res Methodol ; 24(1): 101, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38689224

ABSTRACT

BACKGROUND: Vaccine efficacy (VE) assessed in a randomized controlled clinical trial can be affected by demographic, clinical, and other subject-specific characteristics evaluated as baseline covariates. Understanding the effect of covariates on efficacy is key to decisions by vaccine developers and public health authorities. METHODS: This work evaluates the impact of including correlate of protection (CoP) data in logistic regression on its performance in identifying statistically and clinically significant covariates in settings typical for a vaccine phase 3 trial. The proposed approach uses CoP data and covariate data as predictors of clinical outcome (diseased versus non-diseased) and is compared to logistic regression (without CoP data) to relate vaccination status and covariate data to clinical outcome. RESULTS: Clinical trial simulations, in which the true relationship between CoP data and clinical outcome probability is a sigmoid function, show that use of CoP data increases the positive predictive value for detection of a covariate effect. If the true relationship is characterized by a decreasing convex function, use of CoP data does not substantially change positive or negative predictive value. In either scenario, vaccine efficacy is estimated more precisely (i.e., confidence intervals are narrower) in covariate-defined subgroups if CoP data are used, implying that using CoP data increases the ability to determine clinical significance of baseline covariate effects on efficacy. CONCLUSIONS: This study proposes and evaluates a novel approach for assessing baseline demographic covariates potentially affecting VE. Results show that the proposed approach can sensitively and specifically identify potentially important covariates and provides a method for evaluating their likely clinical significance in terms of predicted impact on vaccine efficacy. 
It shows further that inclusion of CoP data can enable more precise VE estimation, thus enhancing study power and/or efficiency and providing even better information to support health policy and development decisions.


Subject(s)
Vaccine Efficacy , Humans , Logistic Models , Vaccine Efficacy/statistics & numerical data , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods , Vaccination/statistics & numerical data , Vaccination/methods , Vaccines/therapeutic use , Demography/statistics & numerical data , Computer Simulation , Clinical Trials, Phase III as Topic/statistics & numerical data , Clinical Trials, Phase III as Topic/methods
11.
Stat Methods Med Res ; 33(5): 909-927, 2024 May.
Article in English | MEDLINE | ID: mdl-38567439

ABSTRACT

Understanding whether and how treatment effects vary across subgroups is crucial to inform clinical practice and recommendations. Accordingly, the assessment of heterogeneous treatment effects based on pre-specified potential effect modifiers has become a common goal in modern randomized trials. However, when one or more potential effect modifiers are missing, complete-case analysis may lead to bias and under-coverage. While statistical methods for handling missing data have been proposed and compared for individually randomized trials with missing effect modifier data, few guidelines exist for the cluster-randomized setting, where intracluster correlations in the effect modifiers, outcomes, or even missingness mechanisms may introduce further threats to accurate assessment of heterogeneous treatment effects. In this article, the performance of several missing data methods is compared through a simulation study of cluster-randomized trials with a continuous outcome and missing binary effect modifier data, and further illustrated using real data from the Work, Family, and Health Study. Our results suggest that multilevel multiple imputation and Bayesian multilevel multiple imputation perform better than other available methods, and that Bayesian multilevel multiple imputation has lower bias and closer-to-nominal coverage than standard multilevel multiple imputation when there are model specification or compatibility issues.


Subject(s)
Bayes Theorem , Randomized Controlled Trials as Topic , Randomized Controlled Trials as Topic/statistics & numerical data , Humans , Cluster Analysis , Data Interpretation, Statistical , Bias , Models, Statistical , Treatment Outcome , Computer Simulation , Treatment Effect Heterogeneity
12.
Stat Med ; 43(13): 2622-2640, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38684331

ABSTRACT

Longitudinal clinical trials for which recurrent events endpoints are of interest are commonly subject to missing event data. Primary analyses in such trials are often performed assuming events are missing at random, and sensitivity analyses are necessary to assess the robustness of primary analysis conclusions to missing data assumptions. Control-based imputation is an attractive approach in superiority trials for imposing conservative assumptions on how data may be missing not at random. A popular approach to implementing control-based assumptions for recurrent events is multiple imputation (MI), but Rubin's variance estimator is often biased for the true sampling variability of the point estimator in the control-based setting. We propose distributional imputation (DI) with a corresponding wild bootstrap variance estimation procedure for control-based sensitivity analyses of recurrent events. We apply control-based DI to a type 1 diabetes trial. In the application and simulation studies, DI produced more reasonable standard error estimates than MI with Rubin's combining rules in control-based sensitivity analyses of recurrent events.
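For context, the Rubin combining rules that the paper contrasts DI against are straightforward to state; the sketch below pools M imputed estimates (the numbers in the demo call are hypothetical):

```python
import math

def rubin_pool(estimates, variances):
    # Rubin's rules for M multiply-imputed analyses: pooled point
    # estimate, total variance (within-imputation plus between-imputation
    # inflated by 1 + 1/M), and the resulting standard error.
    m = len(estimates)
    qbar = sum(estimates) / m
    within = sum(variances) / m
    between = sum((q - qbar) ** 2 for q in estimates) / (m - 1)
    total = within + (1 + 1 / m) * between
    return qbar, total, math.sqrt(total)

est, var_total, se = rubin_pool([1.0, 1.2, 0.8], [0.04, 0.04, 0.04])
```

The abstract's point is that this variance formula, while valid under standard MI assumptions, can overstate or misstate the sampling variability when imputations are drawn under a control-based (reference-based) model, motivating the wild bootstrap alternative.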


Subject(s)
Computer Simulation , Humans , Diabetes Mellitus, Type 1/drug therapy , Data Interpretation, Statistical , Models, Statistical , Recurrence , Longitudinal Studies , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods , Bias , Clinical Trials as Topic/statistics & numerical data
13.
JAMA Netw Open ; 7(4): e248818, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38687478

ABSTRACT

Importance: For the design of a randomized clinical trial (RCT), estimation of the expected event rate and effect size of an intervention is needed to calculate the sample size. Overestimation may lead to an underpowered trial. Objective: To evaluate the accuracy of published estimates of event rate and effect size in contemporary cardiovascular RCTs. Evidence Review: A systematic search was conducted in MEDLINE for multicenter cardiovascular RCTs associated with MeSH (Medical Subject Headings) terms for cardiovascular diseases published in the New England Journal of Medicine, JAMA, or the Lancet between January 1, 2010, and December 31, 2019. Identified trials underwent abstract review; eligible trials then underwent full review, and those with insufficiently reported data were excluded. Data were extracted from the original publication or the study protocol, and a random-effects model was used for data pooling. This review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guideline. The primary outcome was the accuracy of event rate and effect size estimation. Accuracy was determined by comparing the observed event rate in the control group and the effect size with their hypothesized values. Linear regression was used to determine the association between estimation accuracy and trial characteristics. Findings: Of the 873 RCTs identified, 374 underwent full review and 30 were subsequently excluded, resulting in 344 trials for analysis. The median observed event rate was 9.0% (IQR, 4.3% to 21.4%), which was significantly lower than the estimated event rate of 11.0% (IQR, 6.0% to 25.0%) with a median deviation of -12.3% (95% CI, -16.4% to -5.6%; P < .001). More than half of the trials (196 [61.1%]) overestimated the expected event rate. Accuracy of event rate estimation was associated with a higher likelihood of refuting the null hypothesis (0.13 [95% CI, 0.01 to 0.25]; P = .03). 
The median observed effect size in superiority trials was 0.91 (IQR, 0.74 to 0.99), which was significantly weaker than the estimated effect size of 0.72 (IQR, 0.60 to 0.80), indicating a median overestimation of 23.1% (95% CI, 17.9% to 28.3%). A total of 216 trials (82.1%) overestimated the effect size. Conclusions and Relevance: In this systematic review of contemporary cardiovascular RCTs, event rates of the primary end point and effect sizes of an intervention were frequently overestimated. This overestimation may have contributed to the inability to adequately test the trial hypothesis.
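The sensitivity of the required sample size to both quantities follows directly from the standard two-proportion formula. The sketch below (normal approximation; two-sided α = 0.05 and power 0.80, with z-quantiles hardcoded to avoid an inverse-normal dependency) plugs in the review's median hypothesized versus observed values purely for illustration:

```python
import math

def per_group_n(p_control, relative_risk):
    # Per-group sample size for comparing two proportions, normal
    # approximation, two-sided alpha = 0.05 and power = 0.80.
    z_a, z_b = 1.959964, 0.841621
    p_treat = p_control * relative_risk
    p_bar = (p_control + p_treat) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p_control * (1 - p_control)
                             + p_treat * (1 - p_treat))) ** 2
    return math.ceil(num / (p_control - p_treat) ** 2)

n_planned = per_group_n(0.110, 0.72)  # hypothesized event rate and effect
n_actual = per_group_n(0.090, 0.91)   # review's median observed values
```

Even modest overestimation of the event rate and effect size translates into an order-of-magnitude shortfall in the sample size actually needed, which is the mechanism behind the underpowering the review describes.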


Subject(s)
Cardiovascular Diseases , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/standards , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/standards , Sample Size
14.
Stat Methods Med Res ; 33(6): 1021-1042, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38676367

ABSTRACT

We propose a novel framework based on the RuleFit method to estimate heterogeneous treatment effects in randomized clinical trials. The proposed method estimates a rule ensemble comprising a set of prognostic rules, a set of prescriptive rules, and the linear effects of the original predictor variables. The prescriptive rules provide an interpretable description of the heterogeneous treatment effect. By including a prognostic term in the proposed model, each selected rule is represented as a heterogeneous treatment effect that excludes other effects. We confirmed that the performance of the proposed method was equivalent to that of other ensemble learning methods through numerical simulations, and we demonstrate the interpretation of the proposed method using a real-data application.


Subject(s)
Models, Statistical , Randomized Controlled Trials as Topic , Humans , Prognosis , Randomized Controlled Trials as Topic/statistics & numerical data , Computer Simulation , Treatment Outcome , Algorithms , Causality , Treatment Effect Heterogeneity
15.
Stat Med ; 43(12): 2359-2367, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38565328

ABSTRACT

A multi-stage randomized trial design can significantly improve efficiency by allowing early termination of the trial when the experimental arm exhibits either low or high efficacy compared to the control arm during the study. However, proper inference methods are necessary because the underlying distribution of the target statistic changes due to the multi-stage structure. This article focuses on multi-stage randomized phase II trials with a dichotomous outcome, such as treatment response, and proposes exact conditional confidence intervals for the odds ratio. The usual single-stage confidence intervals are invalid when used in multi-stage trials. To address this issue, we propose a linear ordering of all possible outcomes. This ordering is conditioned on the total number of responders in each stage and utilizes the exact conditional distribution function of the outcomes. This approach enables the estimation of an exact confidence interval accounting for the multi-stage designs.
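In the single-stage case, the test inversion the authors build on can be sketched directly: conditional on the total number of responders, the treatment-arm count follows Fisher's noncentral hypergeometric distribution, and bisecting each conditional tail probability yields an exact CI for the odds ratio. The sketch below handles only one stage, with hypothetical counts; the paper's contribution is the conditional ordering that extends this to multi-stage designs:

```python
import math

def _cond_pmf(k, n1, n2, t, psi):
    # Fisher noncentral hypergeometric pmf: P(X = k | X + Y = t) when
    # X ~ Bin(n1, p1), Y ~ Bin(n2, p2), and psi is the odds ratio.
    lo, hi = max(0, t - n2), min(n1, t)
    w = {j: math.comb(n1, j) * math.comb(n2, t - j) * psi ** j
         for j in range(lo, hi + 1)}
    return w[k] / sum(w.values())

def exact_conditional_or_ci(x, n1, y, n2, alpha=0.05):
    # Exact conditional CI for the odds ratio in a single two-arm stage:
    # invert each conditional tail test by geometric bisection.
    t = x + y
    lo_k, hi_k = max(0, t - n2), min(n1, t)

    def upper_tail(psi):  # P(X >= x | t); increasing in psi
        return sum(_cond_pmf(k, n1, n2, t, psi) for k in range(x, hi_k + 1))

    def lower_tail(psi):  # P(X <= x | t); decreasing in psi
        return sum(_cond_pmf(k, n1, n2, t, psi) for k in range(lo_k, x + 1))

    def solve(f, increasing):
        lo, hi = 1e-8, 1e8
        for _ in range(200):
            mid = math.sqrt(lo * hi)
            if (f(mid) < alpha / 2) == increasing:
                lo = mid
            else:
                hi = mid
        return math.sqrt(lo * hi)

    lower = 0.0 if x == lo_k else solve(upper_tail, increasing=True)
    upper = math.inf if x == hi_k else solve(lower_tail, increasing=False)
    return lower, upper
```

With 7/10 responders on treatment versus 2/10 on control, the interval brackets the sample odds ratio of 9.33; in the multi-stage setting the same inversion is carried out over the linear ordering of outcomes the paper proposes.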


Subject(s)
Clinical Trials, Phase II as Topic , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Clinical Trials, Phase II as Topic/methods , Clinical Trials, Phase II as Topic/statistics & numerical data , Confidence Intervals , Odds Ratio , Models, Statistical , Computer Simulation , Research Design
16.
Stat Methods Med Res ; 33(5): 858-874, 2024 May.
Article in English | MEDLINE | ID: mdl-38505941

ABSTRACT

Platform trials are randomized clinical trials that allow simultaneous comparison of multiple interventions, usually against a common control. Arms to test experimental interventions may enter and leave the platform over time, so the number of experimental intervention arms in the trial may change as the trial progresses. Determining optimal rates for allocating patients to the treatment and control arms in platform trials is challenging because the optimal allocation depends on the number of arms in the platform, which typically varies over time. In addition, the optimal allocation depends on the analysis strategy used and the optimality criteria considered. In this article, we derive optimal treatment allocation rates for platform trials with shared controls, assuming that a stratified estimation and testing procedure based on a regression model is used to adjust for time trends. We consider both analyses using concurrent controls only and analyses using concurrent and non-concurrent controls, assuming that the total sample size is fixed. The objective function to be minimized is the maximum of the variances of the effect estimators. We show that the optimal solution depends on the entry times of the arms into the trial and, in general, does not correspond to the square-root-of-k allocation rule used in classical multi-arm trials. We illustrate the optimal allocation and evaluate power and type I error rate relative to trials using one-to-one and square-root-of-k allocations by means of a case study.
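The square-root-of-k benchmark mentioned above can be derived numerically in the simplest static case: k identical experimental arms share one control, the effect-estimator variance is proportional to 1/n_j + 1/n_0, and we minimize the maximum (here, common) variance over the control share. This is only the classical baseline, assuming all arms are present from the start; the paper's point is that the optimum changes when arms enter at different times.

```python
import math
from scipy.optimize import minimize_scalar

def optimal_control_share(k):
    """Fraction of patients on the shared control minimizing the common
    variance proxy 1/n_j + 1/n_0, when k identical experimental arms split
    the remaining fraction equally (total sample size normalized to 1)."""
    worst_var = lambda p0: k / (1 - p0) + 1 / p0
    return minimize_scalar(worst_var, bounds=(1e-6, 1 - 1e-6),
                           method="bounded").x

# numerical optimum matches the closed form 1/(1 + sqrt(k)),
# i.e. the control receives sqrt(k) times as many patients as each arm
for k in (2, 3, 4):
    print(k, round(optimal_control_share(k), 4),
          round(1 / (1 + math.sqrt(k)), 4))
```

Setting the derivative of k/(1 - p0) + 1/p0 to zero gives sqrt(k)·p0 = 1 - p0, hence p0 = 1/(1 + sqrt(k)), which is exactly the square-root-of-k rule.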


Subject(s)
Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/statistics & numerical data , Models, Statistical , Sample Size , Endpoint Determination/statistics & numerical data , Research Design
17.
Stat Methods Med Res ; 33(5): 838-857, 2024 May.
Article in English | MEDLINE | ID: mdl-38549457

ABSTRACT

Cluster randomization trials with a survival endpoint are predominantly used in drug development and clinical care research when drug treatments or interventions are delivered at the group level. Unlike the conventional cluster randomization design, the stratified cluster randomization design is generally considered more effective at reducing the impact of imbalanced baseline prognostic factors and varying cluster sizes between groups when these stratification factors are adopted in the design. Failure to account for stratification and cluster size variability may lead to underpowered analyses and inaccurate sample size estimation. Beyond sample size estimation for unstratified cluster randomization trials, no explicit sample size formula has been developed for a survival endpoint under a stratified cluster randomization design. In this article, we present a closed-form sample size formula based on the stratified cluster log-rank statistic for stratified cluster randomization trials with a survival endpoint. It provides an integrated solution for sample size estimation that accounts for cluster size variation, baseline hazard heterogeneity, and the intracluster correlation coefficient estimated from preliminary data. Simulation studies show that the proposed formula provides the appropriate sample size for achieving the desired statistical power under various parameter configurations. A real example of a stratified cluster randomization trial in a population with stable coronary heart disease illustrates our method.
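To see the ingredients such a formula combines, here is the standard unstratified baseline (not the authors' stratified formula): Schoenfeld's required event count for the log-rank test, converted to subjects via an overall event probability and inflated by a design effect allowing for variable cluster sizes. All input values below are illustrative assumptions.

```python
import math
from scipy.stats import norm

def clusters_required(hr, power=0.8, alpha=0.05, p_event=0.6,
                      m_bar=20, cv=0.4, icc=0.02, alloc=0.5):
    """Clusters per arm for a cluster-randomized log-rank comparison:
    Schoenfeld's event count, turned into subjects via the overall event
    probability and inflated by a design effect with cluster-size
    coefficient of variation cv and intracluster correlation icc."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    events = (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2)
    subjects = events / p_event                    # total, both arms
    deff = 1 + ((cv ** 2 + 1) * m_bar - 1) * icc   # design effect
    return math.ceil(subjects * deff / 2 / m_bar)  # clusters per arm

print(clusters_required(hr=0.7))   # → 15 under these illustrative inputs
```

With cv = 0 the design effect reduces to the familiar 1 + (m̄ − 1)·ICC; the paper's contribution is to extend this kind of calculation to stratified designs with baseline hazard heterogeneity.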


Subject(s)
Randomized Controlled Trials as Topic , Sample Size , Humans , Randomized Controlled Trials as Topic/statistics & numerical data , Cluster Analysis , Survival Analysis , Models, Statistical
18.
J Clin Epidemiol ; 169: 111308, 2024 May.
Article in English | MEDLINE | ID: mdl-38428542

ABSTRACT

OBJECTIVES: A ceiling effect may lead to misleading conclusions when patient-reported outcome measure (PROM) scores are used as an outcome. The aim of this study was to investigate the potential source of ceiling effect-related errors in randomized controlled trials (RCTs) reporting no differences in PROM scores between study groups. STUDY DESIGN AND SETTING: A systematic review of RCTs published in the top 10 orthopedic journals by impact factor was conducted, focusing on studies that reported no significant differences in outcomes between two study groups. All studies published during 2012-2022 that reported no differences in PROM outcomes and used a parametric statistical approach were included. The aim was to investigate the potential source of ceiling effect-related errors, that is, cases in which the ceiling effect suppresses a possible difference between the groups. The proportions of patients exceeding the PROM scales were simulated from the observed dispersion parameters under an assumed normal distribution, and the differences in these proportions between the study groups were then analyzed. RESULTS: After an initial screening of 2343 studies, 190 studies were included. The central 95% theoretical distribution of the scores exceeded the PROM scales in 140 (74%) of these studies. In 33 (17%) studies, the simulated patient proportions exceeding the scales indicated potential differences between the compared groups. CONCLUSION: A mismatch between the chosen PROM instrument and the population being studied is common, increasing the risk of an unjustified "no difference" conclusion due to a ceiling effect. Thus, a considerable ceiling effect should be considered a potential source of error.
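The simulation idea in the abstract reduces to a simple tail calculation: under an assumed normal score distribution, how much probability mass lies above the scale maximum. A small sketch (the 0-100 scale, means, and SD below are hypothetical, not taken from the reviewed trials):

```python
from scipy.stats import norm

def share_above_ceiling(mean, sd, scale_max):
    """Mass of the assumed normal score distribution lying above the
    instrument's maximum score, i.e. the signal hidden by the ceiling."""
    return norm.sf(scale_max, loc=mean, scale=sd)

# hypothetical 0-100 PROM: equal SDs, a 5-point true difference in means
p_ctrl = share_above_ceiling(88, 12, 100)
p_trt = share_above_ceiling(93, 12, 100)
print(round(p_ctrl, 3), round(p_trt, 3))
```

When both groups place substantial mass above the ceiling, the observed mean difference on the truncated scale shrinks toward zero, which is precisely the mechanism behind the "no difference" errors the study investigates.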


Subject(s)
Patient Reported Outcome Measures , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/standards
19.
Stat Med ; 43(11): 2083-2095, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38487976

ABSTRACT

To obtain valid inference following stratified randomisation, treatment effects should be estimated with adjustment for stratification variables. Stratification sometimes requires categorisation of a continuous prognostic variable (eg, age), which raises the question: should adjustment be based on randomisation categories or underlying continuous values? In practice, adjustment for randomisation categories is more common. We reviewed trials published in general medical journals and found none of the 32 trials that stratified randomisation based on a continuous variable adjusted for continuous values in the primary analysis. Using data simulation, this article evaluates the performance of different adjustment strategies for continuous and binary outcomes where the covariate-outcome relationship (via the link function) was either linear or non-linear. Given the utility of covariate adjustment for addressing missing data, we also considered settings with complete or missing outcome data. Analysis methods included linear or logistic regression with no adjustment for the stratification variable, adjustment for randomisation categories, or adjustment for continuous values assuming a linear covariate-outcome relationship or allowing for non-linearity using fractional polynomials or restricted cubic splines. Unadjusted analysis performed poorly throughout. Adjustment approaches that misspecified the underlying covariate-outcome relationship were less powerful and, alarmingly, biased in settings where the stratification variable predicted missing outcome data. Adjustment for randomisation categories tends to involve the highest degree of misspecification, and so should be avoided in practice. To guard against misspecification, we recommend use of flexible approaches such as fractional polynomials and restricted cubic splines when adjusting for continuous stratification variables in randomised trials.
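The contrast between adjusting for randomisation categories and flexibly modelling the continuous value can be illustrated with a fractional-polynomial (FP1) fit: try each power from the conventional candidate set, keep the one with the smallest residual sum of squares, and compare against category dummies. This is a simplified numpy-only sketch with simulated data (the data-generating model and the FP1-only search are assumptions; the article also considers FP2 and restricted cubic splines, and missing-data settings).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
age = rng.uniform(30, 80, n)
trt = rng.integers(0, 2, n)
# non-linear covariate-outcome relationship: quadratic in age
y = 0.002 * age ** 2 + 0.5 * trt + rng.normal(scale=1.0, size=n)

def fit_rss(adj_cols):
    """Residual sum of squares of OLS on intercept + treatment + adjustment."""
    X = np.column_stack([np.ones(n), trt] + adj_cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

# strategy A: adjust for randomisation categories (age tertiles)
cats = np.digitize(age, np.quantile(age, [1 / 3, 2 / 3]))
cat_cols = [(cats == k).astype(float) for k in (1, 2)]

# strategy B: FP1, best single power from the conventional candidate set
powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]
def fp_col(p):
    return np.log(age) if p == 0 else age ** p
best_p = min(powers, key=lambda p: fit_rss([fp_col(p)]))

print("categorical RSS:", round(fit_rss(cat_cols), 1))
print("best FP1 power:", best_p, "RSS:", round(fit_rss([fp_col(best_p)]), 1))
```

In this setup the flexible continuous adjustment recovers the curvature that the tertile dummies coarsen away, mirroring the article's finding that category adjustment involves the highest degree of misspecification.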


Subject(s)
Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/statistics & numerical data , Computer Simulation , Linear Models , Data Interpretation, Statistical , Logistic Models , Random Allocation
20.
J Clin Epidemiol ; 169: 111300, 2024 May.
Article in English | MEDLINE | ID: mdl-38402998

ABSTRACT

OBJECTIVES: To determine whether clinical trial register (CTR) searches can accurately identify a greater number of completed randomized clinical trials (RCTs) than electronic bibliographic database (EBD) searches for systematic reviews of interventions, and to quantify the number of eligible ongoing trials. STUDY DESIGN AND SETTING: We performed an evaluation study and based our search for RCTs on the eligibility criteria of a systematic review that focused on the underrepresentation of people with chronic kidney disease in cardiovascular RCTs. We conducted a combined search of ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform through the Cochrane Central Register of Controlled Trials to identify eligible RCTs registered up to June 1, 2023. We searched Cochrane Central Register of Controlled Trials, EMBASE, and MEDLINE for publications of eligible RCTs published up to June 5, 2023. Finally, we compared the search results to determine the extent to which the two sources identified the same RCTs. RESULTS: We included 92 completed RCTs. Of these, 81 had results available. Sixty-six completed RCTs with available results were identified by both sources (81% agreement [95% CI: 71-88]). We identified seven completed RCTs with results exclusively by CTR search (9% [95% CI: 4-17]) and eight exclusively by EBD search (10% [95% CI: 5-18]). Eleven RCTs were completed but lacked results (four identified by both sources (36% [95% CI: 15-65]), one exclusively by EBD search (9% [95% CI: 1-38]), and six exclusively by CTR search (55% [95% CI: 28-79])). Also, we identified 42 eligible ongoing RCTs: 16 by both sources (38% [95% CI: 25-53]) and 26 exclusively by CTR search (62% [95% CI: 47-75]). Lastly, we identified four RCTs of unknown status by both sources. CONCLUSION: CTR searches identify a greater number of completed RCTs than EBD searches. Both searches missed some included RCTs. 
Based on our case study, researchers (eg, information specialists, systematic reviewers) aiming to identify all available RCTs should continue to search both sources. Once the barriers to performing CTR searches alone are addressed, CTR searches may become a suitable standalone alternative.


Subject(s)
Databases, Bibliographic , Randomized Controlled Trials as Topic , Registries , Systematic Reviews as Topic , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/standards , Randomized Controlled Trials as Topic/methods , Humans , Systematic Reviews as Topic/methods , Databases, Bibliographic/statistics & numerical data , Registries/statistics & numerical data , Information Storage and Retrieval/methods , Information Storage and Retrieval/statistics & numerical data