Results 1 - 20 of 76
1.
Diabetologia ; 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39037602

ABSTRACT

AIMS/HYPOTHESIS: Whether hypoglycaemia increases the risk of other adverse outcomes in diabetes remains controversial, especially for hypoglycaemia episodes not requiring assistance from another person. An objective of the Hypoglycaemia REdefining SOLutions for better liVEs (Hypo-RESOLVE) project was to create and use a dataset of pooled clinical trials in people with type 1 or type 2 diabetes to examine the association of exposure to all hypoglycaemia episodes across the range of severity with incident event outcomes: death, CVD, neuropathy, kidney disease, retinal disorders and depression. We also examined the change in continuous outcomes that occurred following a hypoglycaemia episode: change in eGFR, HbA1c, blood glucose, blood glucose variability and weight. METHODS: Data from 84 trials with 39,373 participants were pooled. For event outcomes, time-updated Cox regression models adjusted for age, sex, diabetes duration and HbA1c were fitted to assess association between: (1) outcome and cumulative exposure to hypoglycaemia episodes; and (2) outcomes where an acute effect might be expected (i.e. death, acute CVD, retinal disorders) and any hypoglycaemia exposure within the last 10 days. Exposures to any hypoglycaemia episode and to episodes of given severity (levels 1, 2 and 3) were examined. Further adjustment was then made for a wider set of potential confounders. The within-person change in continuous outcomes was also summarised (median of 40.4 weeks for type 1 diabetes and 26 weeks for type 2 diabetes). Analyses were conducted separately by type of diabetes. RESULTS: The maximally adjusted association analysis for type 1 diabetes found that cumulative exposure to hypoglycaemia episodes of any level was associated with higher risks of neuropathy, kidney disease, retinal disorders and depression, with risk ratios ranging from 1.55 (p=0.002) to 2.81 (p=0.002). 
Associations of a similar direction were found when level 1 episodes were examined separately but were significant for depression only. For type 2 diabetes cumulative exposure to hypoglycaemia episodes of any level was associated with higher risks of death, acute CVD, kidney disease, retinal disorders and depression, with risk ratios ranging from 2.35 (p<0.0001) to 3.00 (p<0.0001). These associations remained significant when level 1 episodes were examined separately. There was evidence of an association between hypoglycaemia episodes of any kind in the previous 10 days and death, acute CVD and retinal disorders in both type 1 and type 2 diabetes, with rate ratios ranging from 1.32 (p=0.017) to 2.68 (p<0.0001). These associations varied in magnitude and significance when examined separately by hypoglycaemia level. Within the range of hypoglycaemia defined by levels 1, 2 and 3, we could not find any evidence of a threshold at which risk of these consequences suddenly became pronounced. CONCLUSIONS/INTERPRETATION: These data are consistent with hypoglycaemia being associated with an increased risk of adverse events across several body systems in diabetes. These associations are not confined to severe hypoglycaemia requiring assistance.

2.
Stat Med ; 43(15): 2928-2943, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38742595

ABSTRACT

In clinical trials, multiple comparisons arising from various treatments/doses, subgroups, or endpoints are common. Typically, trial teams focus on the comparison showing the largest observed treatment effect, often involving a specific treatment pair and endpoint within a subgroup. These findings frequently lead to follow-up pivotal studies, many of which do not confirm the initial positive results. Selection bias occurs when the most promising treatment, subgroup, or endpoint is chosen for further development, potentially skewing subsequent investigations. Such bias can be defined as the deviation of the observed treatment effects from the underlying truth. In this article, we propose a general and unified Bayesian framework to address selection bias in clinical trials with multiple comparisons. Our approach does not require a priori specification of a parametric distribution for the prior, offering a more flexible and generalized solution. The proposed method facilitates a more accurate interpretation of clinical trial results by adjusting for such selection bias. Through simulation studies, we compared several methods and demonstrated their superior performance over the normal shrinkage estimator. Based on its performance and flexibility, we recommend the Bayesian model averaging estimator that averages over Gaussian mixture models as the prior distribution. We applied the method to a multicenter, randomized, double-blind, placebo-controlled study investigating the cardiovascular effects of dulaglutide.
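The selection bias described above is easy to reproduce in a toy simulation: when several comparisons share the same true effect and only the largest observed estimate is carried forward, that estimate is biased upward. A minimal numpy sketch (all parameters are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
K, SE, REPS = 5, 1.0, 2000  # 5 comparisons, standard error 1, common true effect of 0

# Each simulated trial observes K effect estimates scattered around a true
# effect of zero, then reports only the largest one -- the "most promising"
# comparison that would be taken into a follow-up pivotal study.
best = [rng.normal(0.0, SE, K).max() for _ in range(REPS)]
selection_bias = float(np.mean(best))  # ~ E[max of K standard normals] ~ 1.16
```

Even with no real treatment effect anywhere, the selected estimate averages more than one standard error above the truth, which is why unadjusted follow-up studies so often fail to confirm.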


Subject(s)
Bayes Theorem , Computer Simulation , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/statistics & numerical data , Models, Statistical , Double-Blind Method , Selection Bias , Bias , Multicenter Studies as Topic , Clinical Trials as Topic/statistics & numerical data
3.
Diabetologia ; 2024 May 25.
Article in English | MEDLINE | ID: mdl-38795153

ABSTRACT

AIMS/HYPOTHESIS: The objective of the Hypoglycaemia REdefining SOLutions for better liVES (Hypo-RESOLVE) project is to use a dataset of pooled clinical trials across pharmaceutical and device companies in people with type 1 or type 2 diabetes to examine factors associated with incident hypoglycaemia events and to quantify the prediction of these events. METHODS: Data from 90 trials with 46,254 participants were pooled. Analyses were done for type 1 and type 2 diabetes separately. Poisson mixed models, adjusted for age, sex, diabetes duration and trial identifier, were fitted to assess the association of clinical variables with hypoglycaemia event counts. Tree-based gradient-boosting algorithms (XGBoost) were fitted using training data, and their predictive performance was evaluated on test data in terms of the area under the receiver operating characteristic curve (AUC). Baseline models including age, sex and diabetes duration were compared with models that further included a score of hypoglycaemia in the first 6 weeks from study entry, and full models that included further clinical variables. The relative predictive importance of each covariate was assessed using XGBoost's importance procedure. Prediction across the entire trial duration for each trial (mean of 34.8 weeks for type 1 diabetes and 25.3 weeks for type 2 diabetes) was assessed. RESULTS: For both type 1 and type 2 diabetes, variables associated with more frequent hypoglycaemia included female sex, white ethnicity, longer diabetes duration, treatment with human as opposed to analogue-only insulin, higher glucose variability, higher score for hypoglycaemia across the 6 week baseline period, lower BP, lower lipid levels and treatment with psychoactive drugs. Prediction of any hypoglycaemia event of any severity was greater than prediction of hypoglycaemia requiring assistance (level 3 hypoglycaemia), for which events were sparser.
For prediction of level 1 or worse hypoglycaemia during the whole follow-up period, the AUC was 0.835 (95% CI 0.826, 0.844) in type 1 diabetes and 0.840 (95% CI 0.831, 0.848) in type 2 diabetes. For level 3 hypoglycaemia, the AUC was lower at 0.689 (95% CI 0.667, 0.712) for type 1 diabetes and 0.705 (95% CI 0.662, 0.748) for type 2 diabetes. Compared with the baseline models, almost all the improvement in prediction could be captured by the individual's hypoglycaemia history, glucose variability and blood glucose over a 6 week baseline period. CONCLUSIONS/INTERPRETATION: Although hypoglycaemia rates show large variation according to sociodemographic and clinical characteristics and treatment history, looking at a 6 week period of hypoglycaemia events and glucose measurements predicts future hypoglycaemia risk.
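The AUC reported above can be computed without any modeling library via the Mann-Whitney statistic. The sketch below scores future events using only a six-week baseline hypoglycaemia count on synthetic data; the invented count-based score stands in for the paper's XGBoost models and illustrates the finding that recent hypoglycaemia history alone is strongly predictive:

```python
import numpy as np

def auc(scores, labels):
    """Mann-Whitney AUC: probability a random case scores above a random non-case."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()  # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()    # ties count one half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(3)
n = 500
baseline_hypos = rng.poisson(2.0, n)  # hypothetical 6-week run-in hypoglycaemia counts
# Invented data-generating model: event risk rises with hypoglycaemia history.
p_event = 1.0 / (1.0 + np.exp(-(baseline_hypos - 2.5)))
event = rng.random(n) < p_event
history_auc = auc(baseline_hypos, event)
```

With this generating model the history-only score discriminates well, mirroring the paper's conclusion that a short baseline window of hypoglycaemia events captures most of the predictive signal.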

5.
Biom J ; 66(1): e2200103, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37740165

ABSTRACT

Although clinical trials are often designed with randomization and well-controlled protocols, complications will inevitably arise in the presence of intercurrent events (ICEs) such as treatment discontinuation. These can lead to missing outcome data and possibly confounding causal inference when the missingness is a function of a latent stratification of patients defined by intermediate outcomes. The pharmaceutical industry has been focused on developing new methods that can yield pertinent causal inferences in trials with ICEs. However, it is difficult to compare the properties of different methods developed in this endeavor as real-life clinical trial data cannot be easily shared to provide benchmark data sets. Furthermore, different methods consider distinct assumptions for the underlying data-generating mechanisms, and simulation studies often are customized to specific situations or methods. We develop a novel, general simulation model and corresponding Shiny application in R for clinical trials with ICEs, aptly named the Clinical Trials with Intercurrent Events Simulator (CITIES). It is formulated under the Rubin Causal Model where the considered treatment effects account for ICEs in clinical trials with repeated measures. CITIES facilitates the effective generation of data that resemble real-life clinical trials with respect to their reported summary statistics, without requiring the use of the original trial data. We illustrate the utility of CITIES via two case studies involving real-life clinical trials that demonstrate how CITIES provides a comprehensive tool for practitioners in the pharmaceutical industry to compare methods for the analysis of clinical trials with ICEs on identical, benchmark settings that resemble real-life trials.


Subject(s)
Research Design , Humans , Cities , Computer Simulation
6.
Ther Innov Regul Sci ; 58(1): 127-135, 2024 01.
Article in English | MEDLINE | ID: mdl-37751063

ABSTRACT

The dose-response curve has been studied extensively for decades. However, most of these methods ignore intermediate measurements of the response variable and use only the measurement at the endpoint. In early phase trials, it is crucial to utilize all available data because of the smaller sample sizes. Simulation studies have shown that a longitudinal dose-response surface model provides more precise parameter estimates than the traditional dose-response analysis that uses only data from the primary time point. However, current longitudinal models with parametric assumptions require the treatment effect to increase monotonically over time, which may not reflect reality. We propose a parametric non-monotone exponential time (NEXT) model, an enhanced longitudinal dose-response model with greater flexibility, capable of accommodating non-monotonic treatment effects and making predictions for longer-term efficacy. In addition, the estimator for the time to maximum treatment effect and its asymptotic distribution are derived from NEXT. Extensive simulation studies using known data-generating models and real clinical data showed that the NEXT model outperformed existing monotone longitudinal models.


Subject(s)
Computer Simulation , Sample Size
7.
Ther Innov Regul Sci ; 57(5): 1008-1016, 2023 09.
Article in English | MEDLINE | ID: mdl-37266869

ABSTRACT

Binary-valued outcomes are common in clinical trials across therapeutic areas, and such binary endpoints are often derived from a continuous variable. For example, in diabetes clinical trials, the proportion of patients with HbA1c < 7% is often investigated as one of the key objectives, where HbA1c is a continuous-valued variable reflecting the average blood glucose over the previous three months. Most of the time, if not always, the mean of such a binary endpoint is estimated directly from the binary variable defined by the corresponding cutoff. Alternatively, by the nature of the derivation, that quantity can also be estimated by leveraging the density of the underlying continuous variable and computing the area under the density curve up to the threshold. This paper presents several methods based on density estimation. Extensive simulation studies were conducted based on real clinical trial data to compare these estimation approaches against direct estimation of the proportions. Simulation results showed that the density estimation approaches generally benefited from a smaller mean squared error in early phase studies, where the sample size is limited. The density estimation approach is expected to introduce bias; however, a favorable bias-variance trade-off may make these approaches attractive in early phase studies.
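The bias-variance trade-off described above can be seen in a minimal numpy illustration, assuming the continuous endpoint is truly normal so the parametric density estimate is well specified (all numbers are invented):

```python
import math
import numpy as np

rng = np.random.default_rng(42)
TRUE_MU, TRUE_SD, CUT, N, REPS = 7.2, 0.8, 7.0, 30, 2000  # hypothetical HbA1c setting

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

true_p = norm_cdf((CUT - TRUE_MU) / TRUE_SD)  # true P(HbA1c < 7%)

err_direct, err_density = [], []
for _ in range(REPS):
    x = rng.normal(TRUE_MU, TRUE_SD, N)
    p_direct = np.mean(x < CUT)                              # responder proportion
    p_density = norm_cdf((CUT - x.mean()) / x.std(ddof=1))   # area under fitted normal
    err_direct.append((p_direct - true_p) ** 2)
    err_density.append((p_density - true_p) ** 2)

mse_direct = float(np.mean(err_direct))
mse_density = float(np.mean(err_density))
```

At an early-phase sample size of 30 per arm, estimating the proportion through the fitted density has a visibly smaller mean squared error than the direct binomial proportion, consistent with the abstract's simulation findings.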


Subject(s)
Glycated Hemoglobin , Humans , Bias , Sample Size , Computer Simulation
8.
Ther Innov Regul Sci ; 57(3): 521-528, 2023 05.
Article in English | MEDLINE | ID: mdl-36542287

ABSTRACT

BACKGROUND: Reasons for treatment discontinuation are important not only to understand the benefit and risk profile of experimental treatments, but also to help choose appropriate strategies to handle intercurrent events in defining estimands. The current case report form (CRF) commonly in use mixes the underlying reasons for treatment discontinuation and who makes the decision for treatment discontinuation, often resulting in an inaccurate collection of reasons for treatment discontinuation. METHODS AND RESULTS: We systematically reviewed and analyzed treatment discontinuation data from nine phase 2 and phase 3 studies for insulin peglispro. A total of 857 participants with treatment discontinuation were included in the analysis. Our review suggested that, due to the vague multiple-choice options for treatment discontinuation present in the CRF, different reasons were sometimes recorded for the same underlying reason for treatment discontinuation. Based on our review and analysis, we suggest an intermediate solution and a more systematic way to improve the current CRF for treatment discontinuations. CONCLUSION: This research provides insight and directions on how to optimize the CRF for recording treatment discontinuation. Further work needs to be done to build the learning into Clinical Data Interchange Standards Consortium standards. CLINICAL TRIALS: Clinicaltrials.gov numbers: NCT01027871 (Phase 2 for type 2 diabetes), NCT01049412 (Phase 2 for type 1 diabetes), NCT01481779 (IMAGINE 1 Study), NCT01435616 (IMAGINE 2 Study), NCT01454284 (IMAGINE 3 Study), NCT01468987 (IMAGINE 4 Study), NCT01582451 (IMAGINE 5 Study), NCT01790438 (IMAGINE 6 Study), NCT01792284 (IMAGINE 7 Study).


Subject(s)
Diabetes Mellitus, Type 1 , Diabetes Mellitus, Type 2 , Humans , Diabetes Mellitus, Type 2/drug therapy , Clinical Trials, Phase II as Topic , Clinical Trials, Phase III as Topic , Diabetes Mellitus, Type 1/drug therapy , Insulin Lispro/therapeutic use
9.
Stat Med ; 41(19): 3837-3877, 2022 Aug 30.
Article in English | MEDLINE | ID: mdl-35851717

ABSTRACT

The ICH E9(R1) addendum (2019) proposed principal stratification (PS) as one of five strategies for dealing with intercurrent events. Therefore, understanding the strengths, limitations, and assumptions of PS is important for the broad community of clinical trialists. Many approaches have been developed under the general framework of PS in different areas of research, including experimental and observational studies. These diverse applications have utilized a variety of tools and assumptions, so a need exists to present them in a unifying manner. The goal of this tutorial is threefold. First, we provide a coherent and unifying description of PS. Second, we emphasize that estimation of effects within PS relies on strong assumptions, and we thoroughly examine the consequences of these assumptions to understand the situations in which they are reasonable. Finally, we provide an overview of a variety of key methods for PS analysis and use a real clinical trial example to illustrate them. Examples of code for implementation of some of these approaches are given in the Supplemental Materials.

10.
Ther Innov Regul Sci ; 56(5): 744-752, 2022 09.
Article in English | MEDLINE | ID: mdl-35608729

ABSTRACT

BACKGROUND: Decentralized clinical trials offer the promise of reduced patient burden, faster and more diverse recruitment, and have received regulatory support during the COVID-19 pandemic. However, lack of data accuracy or data validation poses a challenge for fully decentralized trials. A mixed data collection modality where onsite measurements are collected at key time points and decentralized measurements are taken at intermediate time points is attractive operationally. To date, the impact of decentralized measurements (which could presumably be less accurate) taken at intermediate time points on statistical inference on the primary or other key time points has not been evaluated. METHODS: In this article we evaluate the estimation and statistical inference for three scenarios: (1) all onsite measurements, (2) a mixture of onsite and decentralized measurements, and (3) all decentralized measurements, in the setting of a chronic weight management trial. We consider scenarios where decentralized measurements have additional within- and between-subject variabilities and/or bias. RESULTS: In the mixed modality setting, simulation studies showed that the estimation and inference for the key time points with onsite measurements have good properties and are not impacted by the additional variability and bias from intermediate decentralized measurements. However, estimates for intermediate decentralized time points for the mixed modality and estimates for the all decentralized modality measurements have increased variability and bias. CONCLUSION: Mixed modality trials can help achieve the benefits of decentralized clinical trials by reducing the number of onsite visits with little impact on statistical inferences for various estimands, compared to traditional (all onsite) clinical trials.


Subject(s)
Clinical Trials as Topic , Data Collection , Bias , COVID-19 , Computer Simulation , Humans , Pandemics
11.
Pharm Stat ; 21(5): 907-918, 2022 09.
Article in English | MEDLINE | ID: mdl-35277928

ABSTRACT

In many clinical trials, outcomes of interest are binary-valued. It is not uncommon that a binary-valued outcome is dichotomized from a continuous outcome at a threshold of clinical interest. To analyze such data, common approaches include (a) fitting a generalized linear mixed model (GLMM) to the dichotomized longitudinal binary outcome; and (b) the multiple imputation (MI)-based method: imputing missing values in the continuous outcome, dichotomizing it into a binary outcome, and then fitting a generalized linear model to the "complete" data. We conducted comprehensive simulation studies to compare the performance of the GLMM with that of the MI-based method for estimating the risk difference and the logarithm of the odds ratio between two treatment arms at the end of the study. In those simulation studies, we considered a range of multivariate distributions for the continuous outcome (including a multivariate normal distribution, a multivariate t-distribution, a multivariate log-normal distribution, and the empirical distribution from real clinical trial data) to evaluate the robustness of the estimators to various data-generating models. Simulation results demonstrate that both methods work well under those distributions, but the MI-based method is more efficient, with smaller mean squared errors, than the GLMM. We further applied both the GLMM and the MI-based method to 29 phase 3 diabetes clinical trials and found that the MI-based method generally led to smaller variance estimates than the GLMM.


Subject(s)
Data Interpretation, Statistical , Computer Simulation , Humans , Linear Models , Normal Distribution
12.
Pharm Stat ; 21(3): 641-653, 2022 05.
Article in English | MEDLINE | ID: mdl-34985825

ABSTRACT

Return-to-baseline is an important method to impute missing values or unobserved potential outcomes when certain hypothetical strategies are used to handle intercurrent events in clinical trials. Current return-to-baseline approaches seen in literature and in practice inflate the variability of the "complete" dataset after imputation and lead to biased mean estimators when the probability of missingness depends on the observed baseline and/or postbaseline intermediate outcomes. In this article, we first provide a set of criteria a return-to-baseline imputation method should satisfy. Under this framework, we propose a novel return-to-baseline imputation method. Simulations show the completed data after the new imputation approach have the proper distribution, and the estimators based on the new imputation method outperform the traditional method in terms of both bias and variance, when missingness depends on the observed values. The new method can be implemented easily with the existing multiple imputation procedures in commonly used statistical packages.


Subject(s)
Research Design , Bias , Clinical Trials as Topic , Data Interpretation, Statistical , Humans , Probability
13.
Pharm Stat ; 21(1): 4-16, 2022 01.
Article in English | MEDLINE | ID: mdl-34268857

ABSTRACT

Phase 2 and 3 development failure is one of the key drivers of high drug development cost. Robust prediction of a candidate drug's efficacy and safety profile could potentially improve the success rate of drug development; therefore, systematic evaluation of such predictions is important for learning and continuous improvement. In this article, we propose a set of unified criteria that allow predictions to be evaluated across different endpoints, indications, and development stages: standardized bias (SB), standardized mean squared error (SMSE), and credibility of prediction. We applied the SB and SMSE to the predicted treatment effects for 54 comparisons across 5 compounds in immunology and diabetes.
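Criteria of this kind could take a form like the following sketch; the exact definitions in the paper may differ, and these standardized-error versions are only a plausible reading of the names:

```python
import numpy as np

def standardized_bias(pred, obs, se):
    """Average prediction error in standard-error units (hypothetical form)."""
    z = (np.asarray(pred, float) - np.asarray(obs, float)) / np.asarray(se, float)
    return float(z.mean())

def smse(pred, obs, se):
    """Mean squared standardized error; near 1 for well-calibrated predictions."""
    z = (np.asarray(pred, float) - np.asarray(obs, float)) / np.asarray(se, float)
    return float((z ** 2).mean())

# Toy portfolio: predicted vs. observed treatment effects with their SEs.
pred = [0.5, 0.3, 0.8]
obs  = [0.4, 0.3, 0.6]
se   = [0.1, 0.1, 0.2]
sb = standardized_bias(pred, obs, se)
s  = smse(pred, obs, se)
```

Standardizing by the comparison-specific standard error is what makes the criteria comparable across endpoints, indications, and development stages with very different effect scales.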


Subject(s)
Drug Development , Bias
14.
Pharm Stat ; 21(3): 525-534, 2022 05.
Article in English | MEDLINE | ID: mdl-34927339

ABSTRACT

Randomized controlled trials are considered the gold standard to evaluate the treatment effect (estimand) for efficacy and safety. According to the recent International Council on Harmonization (ICH)-E9 addendum (R1), intercurrent events (ICEs) need to be considered when defining an estimand, and principal stratum is one of the five strategies to handle ICEs. Qu et al. (2020, Statistics in Biopharmaceutical Research 12:1-18) proposed estimators for the adherer average causal effect (AdACE) for estimating the treatment difference for those who adhere to one or both treatments based on the causal-inference framework, and demonstrated the consistency of those estimators; however, this method requires complex custom programming related to high-dimensional numeric integrations. In this article, we implemented the AdACE estimators using multiple imputation (MI) and constructed confidence intervals (CIs) through bootstrapping. A simulation study showed that the MI-based estimators provided consistent estimators with the nominal coverage probabilities of CIs for the treatment difference for the adherent populations of interest. As an illustrative example, the new method was applied to data from a real clinical trial comparing two types of basal insulin for patients with type 1 diabetes.
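Bootstrap confidence intervals of the kind constructed above can be sketched generically. The snippet below is a percentile bootstrap for a simple treatment difference on synthetic data, not the AdACE estimator itself, which requires the multiple-imputation machinery described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(11)
trt = rng.normal(1.0, 2.0, 200)  # hypothetical outcomes, treatment arm
ctl = rng.normal(0.0, 2.0, 200)  # hypothetical outcomes, control arm
d_hat = float(trt.mean() - ctl.mean())

# Percentile bootstrap: resample patients with replacement within each arm,
# recompute the treatment difference, and take empirical percentiles.
boot = []
for _ in range(2000):
    boot.append(rng.choice(trt, trt.size).mean() - rng.choice(ctl, ctl.size).mean())
lo, hi = np.percentile(boot, [2.5, 97.5])
```

In the paper's setting each bootstrap replicate would additionally rerun the imputation step, so that the interval reflects both sampling and imputation uncertainty.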


Subject(s)
Research Design , Causality , Computer Simulation , Data Interpretation, Statistical , Humans , Probability
15.
Ther Innov Regul Sci ; 55(5): 984-988, 2021 09.
Article in English | MEDLINE | ID: mdl-33983621

ABSTRACT

The current COVID-19 pandemic poses numerous challenges for ongoing clinical trials and provides a stress-testing environment for the existing principles and practice of estimands in clinical trials. The pandemic may increase the rate of intercurrent events (ICEs) and missing values, spurring a great deal of discussion on amending protocols and statistical analysis plans to address these issues. In this article, we revisit recent research on estimands and handling of missing values, especially the ICH E9 (R1) Addendum on Estimands and Sensitivity Analysis in Clinical Trials. Based on an in-depth discussion of the strategies for handling ICEs using a causal inference framework, we suggest some improvements in applying the estimand and estimation framework in ICH E9 (R1). Specifically, we discuss a mix of strategies allowing us to handle ICEs differentially based on reasons for ICEs. We also suggest ICEs should be handled primarily by hypothetical strategies and provide examples of different hypothetical strategies for different types of ICEs as well as a road map for estimation and sensitivity analyses. We conclude that the proposed framework helps streamline translating clinical objectives into targets of statistical inference and automatically resolves many issues with defining estimands and choosing estimation procedures arising from events such as the pandemic.


Subject(s)
COVID-19 , Pandemics , Data Interpretation, Statistical , Humans , Research Design , SARS-CoV-2
16.
Pharm Stat ; 20(1): 55-67, 2021 01.
Article in English | MEDLINE | ID: mdl-33442928

ABSTRACT

Intercurrent events (ICEs) and missing values are inevitable in clinical trials of any size and duration, making it difficult to assess the treatment effect for all patients in randomized clinical trials. Defining the appropriate estimand that is relevant to the clinical research question is the first step in analyzing data. The tripartite estimands, which evaluate the treatment differences in the proportion of patients with ICEs due to adverse events, the proportion of patients with ICEs due to lack of efficacy, and the primary efficacy outcome for those who can adhere to study treatment under the causal inference framework, are of interest to many stakeholders in understanding the totality of treatment effects. In this manuscript, we discuss the details of how to estimate tripartite estimands based on a causal inference framework and how to interpret tripartite estimates through a phase 3 clinical study evaluating a basal insulin treatment for patients with type 1 diabetes.


Subject(s)
Research Design , Causality , Data Interpretation, Statistical , Humans
17.
J Biopharm Stat ; 31(1): 5-13, 2021 01 02.
Article in English | MEDLINE | ID: mdl-32419590

ABSTRACT

Hypoglycemia is a major safety concern for diabetic patients. Hypoglycemic events can be modeled based on time to recurrent events or count data. In this article, we evaluated a gamma frailty model with variance estimated by the inverse of observed Fisher information matrix, a gamma frailty model with the sandwich variance estimator, and a piecewise negative binomial regression model. Simulations showed that the sandwich variance estimator performed better when the frailty model is mis-specified, and the piecewise negative binomial regression sometimes fails to converge. All three methods were applied to a dataset from a clinical trial evaluating insulin treatments.
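The link between the two model families compared above is the gamma-Poisson mixture: a gamma-distributed patient-level frailty multiplying a Poisson event rate yields negative binomial marginal counts, with variance mu + alpha * mu^2. A small numpy sketch with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n, mu, alpha = 20000, 2.0, 0.5  # hypothetical patients, event rate, dispersion

# Gamma frailty with mean 1 and variance alpha scales each patient's rate;
# the resulting marginal counts are negative binomial.
frailty = rng.gamma(shape=1.0 / alpha, scale=alpha, size=n)
counts = rng.poisson(mu * frailty)

# Method-of-moments recovery of the rate and dispersion from the counts.
mu_hat = float(counts.mean())
alpha_hat = float((counts.var(ddof=1) - mu_hat) / mu_hat**2)
```

The extra-Poisson variation captured by alpha is exactly the between-patient heterogeneity in hypoglycemia risk that motivates frailty and negative binomial models over a plain Poisson fit.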


Subject(s)
Hypoglycemia , Humans , Hypoglycemia/epidemiology , Models, Statistical , Recurrence
18.
Biom J ; 63(1): 105-121, 2021 01.
Article in English | MEDLINE | ID: mdl-33200481

ABSTRACT

One of the central aims in randomized clinical trials is to find well-validated surrogate endpoints to reduce the sample size and/or duration of trials. Clinical researchers and practitioners have proposed various surrogacy measures for assessing candidate surrogate endpoints. However, most existing surrogacy measures have the following shortcomings: (i) they often fall outside the range [0,1], (ii) they are imprecisely estimated, and (iii) they ignore the interaction associations between a treatment and candidate surrogate endpoints in the evaluation of the surrogacy level. To overcome these difficulties, we propose a new surrogacy measure, the proportion of treatment effect mediated by candidate surrogate endpoints (PMS), based on the decomposition of the treatment effect into direct, indirect, and interaction associations mediated by candidate surrogate endpoints. In addition, we validate the advantages of PMS through Monte Carlo simulations and the application to empirical data from ORIENT (the Olmesartan Reducing Incidence of Endstage Renal Disease in Diabetic Nephropathy Trial).


Subject(s)
Biomarkers , Humans , Incidence , Randomized Controlled Trials as Topic , Treatment Outcome
19.
Pharm Stat ; 20(2): 314-323, 2021 03.
Article in English | MEDLINE | ID: mdl-33098267

ABSTRACT

Randomized controlled trials (RCTs) are the gold standard for evaluation of the efficacy and safety of investigational interventions. If every patient in an RCT were to adhere to the randomized treatment, one could simply analyze the complete data to infer the treatment effect. However, intercurrent events (ICEs) including the use of concomitant medication for unsatisfactory efficacy, treatment discontinuation due to adverse events, or lack of efficacy may lead to interventions that deviate from the original treatment assignment. Therefore, defining the appropriate estimand (the appropriate parameter to be estimated) based on the primary objective of the study is critical prior to determining the statistical analysis method and analyzing the data. The International Council for Harmonisation (ICH) E9 (R1), adopted on November 20, 2019, provided five strategies to define the estimand: treatment policy, hypothetical, composite variable, while on treatment, and principal stratum. In this article, we propose an estimand using a mix of strategies in handling ICEs. This estimand is an average of the "null" treatment difference for those with ICEs potentially related to safety and the treatment difference for the other patients if they would complete the assigned treatments. Two examples from clinical trials evaluating antidiabetes treatments are provided to illustrate the estimation of this proposed estimand and to compare it with the estimates for estimands using hypothetical and treatment policy strategies in handling ICEs.


Subject(s)
Clinical Trials as Topic , Research Design , Data Interpretation, Statistical , Humans , Randomized Controlled Trials as Topic
20.
Stat Med ; 39(30): 4593-4604, 2020 12 30.
Article in English | MEDLINE | ID: mdl-32940369

ABSTRACT

It has long been noticed that the efficacy observed in small early phase studies is generally better than that observed in later larger studies. Historically, the inflation of the efficacy results from early proof-of-concept studies is either ignored, or adjusted empirically using a frequentist or Bayesian approach. In this article, we systematically explained the underlying reason for the inflation of efficacy results in small early phase studies from the perspectives of measurement error models and selection bias. A systematic method was built to adjust the early phase study results from both frequentist and Bayesian perspectives. A hierarchical model was proposed to estimate the distribution of the efficacy for a portfolio of compounds, which can serve as the prior distribution for the Bayesian approach. We showed through theory that the systematic adjustment provides an unbiased estimator for the true mean efficacy for a portfolio of compounds. The adjustment was applied to paired data for the efficacy in early small and later larger studies for a set of compounds in diabetes and immunology. After the adjustment, the bias in the early phase small studies seems to be diminished.
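The portfolio-level adjustment described above can be sketched as empirical Bayes shrinkage, a simplified stand-in for the paper's hierarchical model (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, tau, se = 200, 0.2, 0.3, 0.5  # hypothetical portfolio parameters

theta = rng.normal(m, tau, n)        # true effects across a portfolio of compounds
y = theta + rng.normal(0.0, se, n)   # noisy early phase estimates of those effects

# Empirical Bayes: estimate the portfolio distribution from the observed
# estimates, then shrink each estimate toward the portfolio mean.
m_hat = float(y.mean())
tau2_hat = max(float(y.var(ddof=1)) - se**2, 0.0)
w = tau2_hat / (tau2_hat + se**2)    # shrinkage weight in [0, 1]
adjusted = m_hat + w * (y - m_hat)

mse_raw = float(np.mean((y - theta) ** 2))
mse_adj = float(np.mean((adjusted - theta) ** 2))
```

Shrinking toward the portfolio mean trades a little bias for a large variance reduction, which is why adjusted early-phase estimates track later, larger studies more closely than the raw observed effects.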


Subject(s)
Models, Statistical , Research Design , Bayes Theorem , Bias , Humans , Selection Bias