Results 1 - 20 of 62
1.
Clin Trials ; 20(4): 380-393, 2023 08.
Article in English | MEDLINE | ID: mdl-37203150

ABSTRACT

Over the past 10-15 years, there has been much interest in the evaluation of heterogeneous treatment effects (HTE), and multiple statistical methods combining ideas from hypothesis testing, causal inference, and machine learning have emerged under the heading of personalized/precision medicine. We discuss new ideas and approaches for evaluating HTE in randomized clinical trials and observational studies using the features introduced earlier by Lipkovich, Dmitrienko, and D'Agostino that distinguish principled methods from simplistic approaches to data-driven subgroup identification and estimation of individual treatment effects, and we use a case study to illustrate these approaches. We identified and provided a high-level overview of several classes of modern statistical approaches for personalized/precision medicine, elucidated the underlying principles and challenges, and compared findings for a case study across different methods. Different approaches to evaluating HTE may produce (and in this case actually produced) highly disparate results when applied to a specific data set. Evaluating HTE with machine learning methods presents special challenges because most machine learning algorithms are optimized for prediction rather than for estimating causal effects. An additional challenge is that the output of machine learning methods is typically a "black box" that must be transformed into interpretable personalized solutions in order to gain acceptance and usability.
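As a rough illustration of how machine-learning output can be turned into individual treatment effect estimates, the sketch below uses a simple two-model ("T-learner") approach: fit separate outcome models in the treated and control arms and take the difference of their predictions. This is a generic example, not the specific methods compared in the paper; the simulated data, variable names, and choice of gradient boosting are assumptions.

```python
# Illustrative T-learner sketch for individual treatment effects (ITEs).
# Assumptions: simulated data; gradient boosting as the base learner.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.normal(size=(n, p))          # baseline covariates (biomarkers)
trt = rng.integers(0, 2, size=n)     # 1 = treatment, 0 = control
# True effect depends on the second covariate (heterogeneous effect).
y = X[:, 0] + trt * (0.5 + X[:, 1]) + rng.normal(scale=1.0, size=n)

# Fit separate outcome models in each arm ("T-learner").
m1 = GradientBoostingRegressor().fit(X[trt == 1], y[trt == 1])
m0 = GradientBoostingRegressor().fit(X[trt == 0], y[trt == 0])

# Estimated individual treatment effect = difference of predicted outcomes.
ite_hat = m1.predict(X) - m0.predict(X)

# A crude "interpretable" summary: patients predicted to benefit most.
benefit_subgroup = X[:, 1] > np.median(X[:, 1])
print("mean ITE overall:        ", round(float(ite_hat.mean()), 2))
print("mean ITE, X2-high group: ", round(float(ite_hat[benefit_subgroup].mean()), 2))
```

Because such predictions are themselves a "black box", a second step (for example, thresholding a single biomarker as above, or fitting a shallow tree to the estimated effects) is typically needed to obtain an interpretable personalized rule.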


Subject(s)
Precision Medicine , Research Design , Humans , Causality , Machine Learning , Algorithms
3.
Pharm Stat ; 21(5): 1090-1108, 2022 09.
Article in English | MEDLINE | ID: mdl-35322520

ABSTRACT

In this paper, we consider randomized controlled clinical trials that compare the efficacy of two treatments using a time-to-event outcome. We assume that a relatively small number of candidate biomarkers is available at the beginning of the trial, which may help define an efficacy subgroup showing a differential treatment effect. The efficacy subgroup is defined by one or two biomarkers and cut-offs that are unknown to the investigator and must be learned from the data. We propose a two-stage adaptive design with a pre-planned interim analysis and a final analysis. At the interim, several subgroup-finding algorithms are evaluated to search for a subgroup with enhanced survival for treatment versus placebo. Conditional powers computed for the subgroup and the overall population are used to decide at the interim whether to terminate the study for futility, continue the study as planned, or recalculate the sample size for the subgroup or the overall population. At the final analysis, combination tests together with closed testing procedures are used to determine efficacy in the subgroup or the overall population. We conducted simulation studies to compare the proposed procedures with several subgroup-identification methods in terms of a novel utility function and several other measures. This research demonstrates the benefit of incorporating data-driven subgroup selection into adaptive clinical trial designs.
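For readers unfamiliar with the interim decision rule, the sketch below shows a textbook conditional power calculation under the Brownian-motion approximation, assuming the trend observed at the interim continues. It is a generic single-population calculation, not the subgroup-and-overall decision criteria of the proposed design; the information fraction, interim z-value, and one-sided alpha are hypothetical inputs.

```python
# Generic conditional power under the current-trend assumption
# (Brownian-motion approximation); illustrative values only.
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t1, alpha=0.025):
    """Probability of rejecting at the final analysis, given the interim
    z-statistic z1 observed at information fraction t1, assuming the
    drift estimated at the interim continues for the rest of the trial."""
    z_alpha = norm.ppf(1 - alpha)      # final-analysis critical value
    drift = z1 / np.sqrt(t1)           # estimated drift parameter
    b1 = z1 * np.sqrt(t1)              # Brownian-motion value at t1
    num = b1 + drift * (1 - t1) - z_alpha
    return norm.cdf(num / np.sqrt(1 - t1))

# Example: interim z = 1.5 at 50% information, one-sided alpha = 0.025.
print(round(conditional_power(z1=1.5, t1=0.5), 3))
```

In a design like the one described, such values computed for the subgroup and for the overall population would be compared against pre-specified futility and promising-zone boundaries to choose among stopping, continuing as planned, or recalculating the sample size.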


Subject(s)
Medical Futility , Research Design , Biomarkers/analysis , Clinical Trials as Topic , Humans , Sample Size
4.
Ther Innov Regul Sci ; 56(1): 65-75, 2022 01.
Article in English | MEDLINE | ID: mdl-34327673

ABSTRACT

Data-driven subgroup analysis plays an important role in clinical trials. This paper focuses on practical considerations in post-hoc subgroup investigations in the context of confirmatory clinical trials. The analysis is aimed at assessing the heterogeneity of treatment effects across the trial population and identifying patient subgroups with enhanced treatment benefit. The subgroups are defined using baseline patient characteristics, including demographic and clinical factors. Much progress has been made in the development of reliable statistical methods for subgroup investigation, including methods based on global models and recursive partitioning. The paper provides a review of principled approaches to data-driven subgroup identification and illustrates subgroup analysis strategies using a family of recursive partitioning methods known as the SIDES (subgroup identification based on differential effect search) methods. These methods are applied to a Phase III trial in patients with metastatic colorectal cancer. The paper discusses key considerations in subgroup exploration, including the role of covariate adjustment, subgroup analysis at early decision points and interpretation of subgroup search results in trials with a positive overall effect.


Subject(s)
Research Design , Data Interpretation, Statistical , Humans
5.
JACC Clin Electrophysiol ; 7(1): 16-25, 2021 01.
Article in English | MEDLINE | ID: mdl-33478708

ABSTRACT

OBJECTIVES: This study aimed to characterize corrected QT (QTc) prolongation in a cohort of hospitalized patients with coronavirus disease-2019 (COVID-19) who were treated with hydroxychloroquine and azithromycin (HCQ/AZM). BACKGROUND: HCQ/AZM is being widely used to treat COVID-19 despite the known risk of QT interval prolongation and the unknown risk of arrhythmogenesis in this population. METHODS: A retrospective cohort of hospitalized COVID-19 patients treated with HCQ/AZM was reviewed. The QTc interval was calculated before drug administration and for the first 5 days following initiation. The primary endpoints were the magnitude of QTc prolongation and the factors associated with it. Secondary endpoints were the incidence of sustained ventricular tachycardia or ventricular fibrillation and all-cause mortality. RESULTS: Among 415 patients who received concomitant HCQ/AZM, the mean QTc increased from 443 ± 25 ms to a maximum of 473 ± 40 ms; 87 (21%) patients had a QTc ≥500 ms. Factors associated with QTc prolongation ≥500 ms were age (p < 0.001), body mass index <30 kg/m2 (p = 0.005), heart failure (p < 0.001), elevated creatinine (p = 0.005), and peak troponin (p < 0.001). The change in QTc was not associated with death over the short period of the study in a population in which mortality was already high (hazard ratio: 0.998; p = 0.607). No primary high-grade ventricular arrhythmias were observed. CONCLUSIONS: An increase in QTc was seen in hospitalized patients with COVID-19 treated with HCQ/AZM. Several clinical factors were associated with greater QTc prolongation. Changes in QTc were not associated with an increased risk of death.
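The abstract does not state which heart-rate correction formula was used to compute QTc; as general background only, the two most common corrections are sketched below. These formulas are standard cardiology knowledge, not details reported in the study.

```python
# Common heart-rate corrections for the QT interval (background only;
# the study abstract does not specify which formula was applied).
def qtc_bazett(qt_ms, rr_s):
    """Bazett: QTc = QT / sqrt(RR), with QT in ms and RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia: QTc = QT / RR^(1/3)."""
    return qt_ms / rr_s ** (1 / 3)

# Example: QT = 400 ms at a heart rate of 75 bpm (RR = 60/75 = 0.8 s).
rr = 60 / 75
print(round(qtc_bazett(400, rr)), round(qtc_fridericia(400, rr)))  # ~447 vs ~431 ms
```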


Subject(s)
Anti-Bacterial Agents/adverse effects , Azithromycin/adverse effects , COVID-19 Drug Treatment , Enzyme Inhibitors/adverse effects , Hydroxychloroquine/adverse effects , Long QT Syndrome/chemically induced , Age Factors , Aged , Aged, 80 and over , Body Mass Index , COVID-19/epidemiology , Comorbidity , Creatinine/blood , Drug Therapy, Combination , Electrocardiography , Female , Heart Failure/epidemiology , Hospitalization , Humans , Long QT Syndrome/epidemiology , Male , Middle Aged , Mortality , Proportional Hazards Models , Risk Factors , SARS-CoV-2 , Troponin I/blood
6.
Ther Innov Regul Sci ; 54(3): 507-518, 2020 05.
Article in English | MEDLINE | ID: mdl-33301136

ABSTRACT

BACKGROUND: The analysis of subgroups in clinical trials is essential to assess differences in treatment effects across distinct patient clusters, that is, to detect patients with greater treatment benefit or patients in whom the treatment seems to be ineffective. METHODS: The software application subscreen (an R package) has been developed to analyze clinical trial populations in minute detail. The aim was to efficiently calculate point estimates (eg, hazard ratios) for multiple subgroups in order to identify groups that potentially differ from the overall trial result. The approach intentionally avoids inferential statistics such as P values or confidence intervals and instead aims to encourage discussion of the exploratory results, enriched with external evidence (eg, from other studies), which can be accompanied by further statistical methods in subsequent analyses. The subscreen application was applied to 2 clinical study data sets and used in a simulation study to demonstrate its usefulness. RESULTS: The visualization of numerous combined subgroups illustrates the homogeneity or heterogeneity of potentially all subgroup estimates relative to the overall result. With this, the application supports more targeted planning of future trials. CONCLUSION: The described approach supports the current trend and requirements for the investigation of subgroup effects as discussed in the EMA draft guidance on subgroup analyses in confirmatory clinical trials (EMA 2014). The lack of a convenient tool to answer spontaneous questions from different perspectives can hinder an efficient discussion, especially in joint interdisciplinary study teams. The new application provides an easily executed but powerful tool that fills this gap.
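The subscreen package itself is written in R; purely to illustrate the underlying idea (point estimates for many subgroups, without p-values or confidence intervals), the sketch below loops over levels of baseline factors and records the treatment hazard ratio in each subgroup. The simulated data, factor names, and use of the lifelines library are assumptions, not part of the package.

```python
# Illustrative Python analogue of subgroup screening: treatment hazard
# ratios computed per subgroup, with no inferential statistics.
# (The actual subscreen application is an R package.)
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "sex": rng.choice(["F", "M"], n),
    "region": rng.choice(["EU", "US", "ASIA"], n),
})
# Simulated survival times with a modest overall treatment benefit.
df["time"] = rng.exponential(scale=np.where(df["treatment"] == 1, 12, 10), size=n)
df["event"] = rng.integers(0, 2, n)

results = []
for factor in ["sex", "region"]:
    for level in df[factor].unique():
        sub = df.loc[df[factor] == level, ["treatment", "time", "event"]]
        cph = CoxPHFitter().fit(sub, duration_col="time", event_col="event")
        hr = float(np.exp(cph.params_["treatment"]))  # subgroup hazard ratio
        results.append({"subgroup": f"{factor}={level}", "n": len(sub), "HR": round(hr, 2)})

# Compare the subgroup hazard ratios against the overall estimate.
print(pd.DataFrame(results))
```

In practice such point estimates would be plotted against subgroup size so that small, unstable subgroups are not over-interpreted, which is the purpose of the visualization described in the paper.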

7.
Stat Methods Med Res ; 29(10): 2945-2957, 2020 10.
Article in English | MEDLINE | ID: mdl-32223528

ABSTRACT

An important step in the development of targeted therapies is the identification and confirmation of sub-populations in which the treatment has a positive effect compared to a control. These sub-populations are often based on continuous biomarkers measured at baseline. For example, patients can be classified into biomarker-low and biomarker-high subgroups, defined via a threshold on the continuous biomarker. However, if insufficient information on the biomarker is available, the a priori choice of the threshold can be challenging. It has therefore been proposed to consider several thresholds and to apply appropriate multiple testing procedures, controlling the family-wise Type I error rate, to test for a treatment effect in the corresponding subgroups. In this manuscript we propose a framework to select optimal thresholds and corresponding optimized multiple testing procedures that maximize the expected power to identify at least one subgroup with a positive treatment effect. The optimization is performed over a prior on a family of models describing the relation of the biomarker with the expected outcome under treatment and under control. We find that, for the scenarios considered, 3 to 4 thresholds give the optimal power. If there is a prior belief that the treatment has a positive effect only in a small subgroup, additional optimization of the spacing of the thresholds may yield a large benefit. The procedure is illustrated with a clinical trial example in depression.
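As a simplified illustration of testing a treatment effect in biomarker-high subgroups defined by several candidate thresholds while controlling the family-wise error rate, the sketch below applies a Holm adjustment across the cut-offs. The paper optimizes the thresholds and the testing procedure jointly; the fixed, equally spaced thresholds, the simulated normal outcomes, and the use of a Holm correction here are all assumptions.

```python
# Simplified sketch: test the treatment effect in biomarker-high subgroups
# defined by several candidate thresholds, with a Holm adjustment across
# thresholds to control the family-wise error rate. (The paper optimizes
# the thresholds and the multiple testing procedure; this does not.)
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n = 600
biomarker = rng.uniform(0, 1, n)
trt = rng.integers(0, 2, n)
# Treatment works only above biomarker value 0.6 (the unknown true cut-off).
y = trt * 0.5 * (biomarker > 0.6) + rng.normal(size=n)

thresholds = [0.25, 0.50, 0.75]          # candidate cut-offs chosen a priori
pvals = []
for c in thresholds:
    high = biomarker > c
    t_stat, p = ttest_ind(y[high & (trt == 1)], y[high & (trt == 0)])
    pvals.append(p / 2 if t_stat > 0 else 1 - p / 2)   # one-sided p-value

reject, p_adj, _, _ = multipletests(pvals, alpha=0.025, method="holm")
for c, p, r in zip(thresholds, p_adj, reject):
    print(f"cut-off {c}: adjusted one-sided p = {p:.4f}, reject = {r}")
```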


Subject(s)
Research Design , Biomarkers , Humans , Treatment Outcome
9.
Orphanet J Rare Dis ; 13(1): 186, 2018 10 25.
Article in English | MEDLINE | ID: mdl-30359266

ABSTRACT

Where there are only a limited number of patients, such as in a rare disease, clinical trials in these small populations present several challenges, including statistical issues. This led to an EU FP7 call for proposals in 2013. One of the three projects funded was the Innovative Methodology for Small Populations Research (InSPiRe) project. This paper summarizes the main results of the project, which was completed in 2017. The InSPiRe project has led to the development of novel statistical methodology for clinical trials in small populations in four areas. We have explored new decision-making methods for small-population clinical trials using a Bayesian decision-theoretic framework to compare costs with potential benefits; developed approaches for targeted treatment trials, enabling simultaneous identification of subgroups and confirmation of the treatment effect for these patients; worked on early-phase clinical trial design and on extrapolation from adult to pediatric studies, developing methods that enable the use of pharmacokinetic and pharmacodynamic data; and developed improved robust meta-analysis methods for a small number of trials to support the planning, analysis and interpretation of a trial as well as extrapolation between patient groups. In addition to scientific publications, we have contributed to regulatory guidance and produced free software to facilitate implementation of the novel methods.


Subject(s)
Rare Diseases , Research Design/statistics & numerical data , Humans
11.
Ther Innov Regul Sci ; 52(5): 560-571, 2018 09.
Article in English | MEDLINE | ID: mdl-29714565

ABSTRACT

BACKGROUND: The quality of data from clinical trials has received a great deal of attention in recent years. Of central importance is the need to protect the well-being of study participants and maintain the integrity of final analysis results. However, traditional approaches to assess data quality have come under increased scrutiny as providing little benefit for the substantial cost. Numerous regulatory guidance documents and industry position papers have described risk-based approaches to identify quality and safety issues. In particular, the position paper of TransCelerate BioPharma recommends defining risk thresholds to assess safety and quality risks based on past clinical experience. This exercise can be extremely time-consuming, and the resulting thresholds may only be relevant to a particular therapeutic area, patient or clinical site population. In addition, predefined thresholds cannot account for safety or quality issues where the underlying rate of observing a particular problem may change over the course of a clinical trial, and often do not consider varying patient exposure. METHODS: In this manuscript, we appropriate rules commonly utilized for funnel plots to define a traffic-light system for risk indicators based on statistical criteria that consider the duration of patient follow-up. Further, we describe how these methods can be adapted to assess changing risk over time. Finally, we illustrate numerous graphical approaches to summarize and communicate risk, and discuss hybrid clinical-statistical approaches to allow for the assessment of risk at sites with low patient enrollment. RESULTS: We illustrate the aforementioned methodologies for a clinical trial in patients with schizophrenia. CONCLUSIONS: Funnel plots are a flexible graphical technique that can form the basis for a risk-based strategy to assess data integrity, while considering site sample size, patient exposure, and changing risk across time.
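As background on the basic funnel-plot rule that the methodology builds on, the sketch below computes normal-approximation control limits for a site-level event proportion as a function of the number of patients at the site. The reference event rate and the warning/action limit levels are hypothetical, and the exposure adjustment and changing-risk extensions described in the paper are not shown.

```python
# Basic funnel-plot control limits for a site-level event proportion,
# as a function of site sample size (normal approximation). Exposure
# adjustment and time-varying risk, as discussed in the paper, are omitted.
import numpy as np
from scipy.stats import norm

p0 = 0.10                      # assumed overall (reference) event rate
site_n = np.array([10, 25, 50, 100, 200])

for level, z in [("warning (95%)", norm.ppf(0.975)),
                 ("action (99.8%)", norm.ppf(0.999))]:
    half_width = z * np.sqrt(p0 * (1 - p0) / site_n)
    lower = np.clip(p0 - half_width, 0, 1)
    upper = np.clip(p0 + half_width, 0, 1)
    print(level)
    for n, lo, hi in zip(site_n, lower, upper):
        print(f"  n={n:4d}: [{lo:.3f}, {hi:.3f}]")

# Sites whose observed rate falls outside the limits for their sample size
# would be flagged amber or red in a traffic-light display.
```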


Subject(s)
Clinical Trials as Topic , Patient Safety , Data Accuracy , Humans , Patient Dropouts
13.
J Biopharm Stat ; 28(1): 129-145, 2018.
Article in English | MEDLINE | ID: mdl-29283310

ABSTRACT

Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
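The combination function approach mentioned here is a standard two-stage construct; a minimal sketch using the inverse-normal combination of stage-wise p-values is given below. The pre-specified weights and example p-values are assumptions, and the mixture-based gatekeeping layer developed in the paper is not shown.

```python
# Minimal inverse-normal combination test for a two-stage adaptive design.
# Stage-wise p-values p1, p2 are combined with pre-specified weights
# w1, w2 satisfying w1**2 + w2**2 = 1; the gatekeeping layer is not shown.
from math import sqrt
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=sqrt(0.5), w2=sqrt(0.5)):
    z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return 1 - norm.cdf(z)      # combined p-value

# Example: p1 = 0.04 at the interim, p2 = 0.03 after the adaptation.
p_comb = inverse_normal_combination(0.04, 0.03)
print(round(p_comb, 4), "reject at one-sided 0.025" if p_comb <= 0.025 else "fail to reject")
```

Because the weights are fixed in advance, data-driven changes such as a sample size increase or dropping a treatment arm after stage 1 do not inflate the Type I error rate, which is what makes this construct attractive in adaptive multistage designs.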


Subject(s)
Adaptive Clinical Trials as Topic/statistics & numerical data , Data Interpretation, Statistical , Gatekeeping , Models, Statistical , Research Design/statistics & numerical data , Decision Making , Humans
14.
J Biopharm Stat ; 28(1): 169-188, 2018.
Article in English | MEDLINE | ID: mdl-29125802

ABSTRACT

Given the importance of addressing multiplicity issues in confirmatory clinical trials, several recent publications have focused on the general goal of identifying the most appropriate methods for multiplicity adjustment in each individual setting. This goal can be accomplished using the Clinical Scenario Evaluation approach, which encourages trial sponsors to perform comprehensive assessments of applicable analysis strategies, such as multiplicity adjustments, under all plausible sets of statistical assumptions using relevant evaluation criteria. This two-part paper applies a novel class of criteria, known as criteria based on multiplicity penalties, to the problem of evaluating the performance of several candidate multiplicity adjustments. The ultimate goal of this evaluation is to identify efficient and robust adjustments for each individual trial and to optimally select the parameters of these adjustments. Part II focuses on advanced settings with several sources of multiplicity, for example, clinical trials with several endpoints evaluated at two or more doses of an experimental treatment. A case study is given to illustrate a penalty-based approach to evaluating candidate multiple testing procedures in advanced multiplicity problems.


Subject(s)
Clinical Trials, Phase III as Topic/statistics & numerical data , Data Interpretation, Statistical , Endpoint Determination/methods , Research Design/statistics & numerical data , Antipsychotic Agents/therapeutic use , Dose-Response Relationship, Drug , Humans , Lurasidone Hydrochloride/therapeutic use , Models, Statistical
15.
J Biopharm Stat ; 28(1): 146-168, 2018.
Article in English | MEDLINE | ID: mdl-29172961

ABSTRACT

Given the importance of addressing multiplicity issues in confirmatory clinical trials, several recent publications have focused on the general goal of identifying the most appropriate methods for multiplicity adjustment in each individual setting. This goal can be accomplished using the Clinical Scenario Evaluation approach, which encourages trial sponsors to perform comprehensive assessments of applicable analysis strategies, such as multiplicity adjustments, under all plausible sets of statistical assumptions using relevant evaluation criteria. This two-part paper applies a novel class of criteria, known as criteria based on multiplicity penalties, to the problem of evaluating the performance of several candidate multiplicity adjustments. The ultimate goal of this evaluation is to identify efficient and robust adjustments for each individual trial and to optimally select the parameters of these adjustments. Part I deals with traditional problems with a single source of multiplicity. Two case studies based on recently conducted Phase III trials are used to illustrate penalty-based approaches to evaluating candidate multiple testing methods and constructing optimization algorithms.
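As a toy illustration of the kind of simulation that underlies a Clinical Scenario Evaluation exercise, the sketch below compares Bonferroni and Holm adjustments for two dose-versus-control comparisons under an assumed effect scenario. The effect sizes, correlation, and evaluation metrics are assumptions; the multiplicity-penalty criteria proposed in the paper are not reproduced here.

```python
# Toy Clinical-Scenario-Evaluation-style simulation: compare Bonferroni and
# Holm for two dose-versus-control comparisons. Effect sizes and correlation
# are assumed values; the paper's penalty-based criteria are not reproduced.
import numpy as np
from scipy.stats import norm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
n_sim, alpha = 5000, 0.025
effects = np.array([2.2, 2.8])              # assumed drifts of the two z-statistics
corr = np.array([[1.0, 0.5], [0.5, 1.0]])   # correlation from the shared control arm

z = rng.multivariate_normal(mean=effects, cov=corr, size=n_sim)
p = 1 - norm.cdf(z)                         # one-sided p-values, one row per simulated trial

metrics = {}
for method in ["bonferroni", "holm"]:
    n_any = n_both = 0
    for row in p:
        reject, *_ = multipletests(row, alpha=alpha, method=method)
        n_any += bool(reject.any())
        n_both += bool(reject.all())
    metrics[method] = (n_any / n_sim, n_both / n_sim)

for method, (p_any, p_both) in metrics.items():
    print(f"{method:10s}: P(reject >=1) = {p_any:.3f}, P(reject both) = {p_both:.3f}")
```

The two procedures coincide on the probability of at least one rejection but Holm rejects both hypotheses more often, which is exactly the kind of trade-off that formal evaluation criteria are intended to quantify.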


Subject(s)
Clinical Trials, Phase III as Topic/statistics & numerical data , Data Interpretation, Statistical , Drug Discovery/statistics & numerical data , Research Design/statistics & numerical data , Antipsychotic Agents/therapeutic use , Computer Simulation , Dose-Response Relationship, Drug , Fibrinolytic Agents/therapeutic use , Humans , Models, Statistical
16.
J Biopharm Stat ; 28(1): 63-81, 2018.
Article in English | MEDLINE | ID: mdl-29173045

ABSTRACT

The general topic of subgroup identification has attracted much attention in the clinical trial literature due to its important role in the development of tailored therapies and personalized medicine. Subgroup search methods are commonly used in late-phase clinical trials to identify subsets of the trial population with certain desirable characteristics. Post-hoc or exploratory subgroup analysis has been criticized for being extremely unreliable. Principled approaches to exploratory subgroup analysis based on recent advances in machine learning and data mining have been developed to address this criticism. These approaches emphasize fundamental statistical principles, including the importance of performing multiplicity adjustments to account for the selection bias inherent in subgroup search. This article provides a detailed review of multiplicity issues arising in exploratory subgroup analysis. Multiplicity corrections in the context of principled subgroup search will be illustrated using the family of SIDES (subgroup identification based on differential effect search) methods. A case study based on a Phase III oncology trial will be presented to discuss the details of subgroup search algorithms with resampling-based multiplicity adjustment procedures.
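To make the idea of a resampling-based multiplicity adjustment concrete, the sketch below computes a permutation-adjusted p-value for the best subgroup found among a set of candidate baseline splits: treatment labels are permuted, the subgroup search is repeated, and the adjusted p-value is the fraction of permutations whose best test statistic exceeds the observed one. This is a generic permutation scheme, not the SIDES adjustment itself; the candidate splits and simulated data are assumptions.

```python
# Generic permutation-based multiplicity adjustment for a simple subgroup
# search (not the SIDES algorithm itself): the search is re-run on data
# with permuted treatment labels, and the adjusted p-value is the fraction
# of permutations whose best subgroup statistic beats the observed one.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
n = 400
X = rng.normal(size=(n, 3))             # three candidate baseline covariates
trt = rng.integers(0, 2, n)
y = 0.3 * trt + rng.normal(size=n)      # homogeneous effect: no true subgroup

def best_subgroup_stat(y, trt, X):
    """Largest treatment t-statistic over subgroups of the form X[:, j] > median."""
    best = -np.inf
    for j in range(X.shape[1]):
        sub = X[:, j] > np.median(X[:, j])
        stat, _ = ttest_ind(y[sub & (trt == 1)], y[sub & (trt == 0)])
        best = max(best, stat)
    return best

observed = best_subgroup_stat(y, trt, X)
perm_stats = np.array([best_subgroup_stat(y, rng.permutation(trt), X)
                       for _ in range(1000)])
adj_p = np.mean(perm_stats >= observed)   # multiplicity-adjusted p-value
print(f"best observed statistic = {observed:.2f}, adjusted p = {adj_p:.3f}")
```

Without this adjustment, the naive p-value of the best-looking subgroup would be severely biased toward significance, which is the selection-bias problem the principled methods are designed to correct.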


Subject(s)
Clinical Trials, Phase III as Topic/statistics & numerical data , Endpoint Determination/methods , Patient Selection , Precision Medicine/statistics & numerical data , Randomized Controlled Trials as Topic/statistics & numerical data , Algorithms , Bias , Biomarkers/analysis , Data Interpretation, Statistical , Guidelines as Topic , Humans
17.
J Biopharm Stat ; 28(1): 113-128, 2018.
Article in English | MEDLINE | ID: mdl-29239689

ABSTRACT

It is increasingly common to encounter complex multiplicity problems with several multiplicity components in confirmatory Phase III clinical trials. These components are often based on several endpoints (primary and secondary endpoints) and several dose-control comparisons. When constructing a multiplicity adjustment in these settings, it is important to control the Type I error rate over all multiplicity components. An important class of multiple testing procedures, known as gatekeeping procedures, was derived using the mixture method that enables clinical trial sponsors to set up efficient multiplicity adjustments that account for clinically relevant logical relationships among the hypotheses of interest. An enhanced version of this mixture method is introduced in this paper to construct more powerful gatekeeping procedures for a specific type of logical relationships that rely on transitive serial restrictions. Restrictions of this kind are very common in Phase III clinical trials and the proposed method is applicable to a broad class of multiplicity problems. Several examples are provided to illustrate the new method and results of simulation trials are presented to compare the performance of gatekeeping procedures derived using this method and other available methods.
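For readers unfamiliar with serially restricted hypotheses, a minimal sketch of a basic serial gatekeeping rule is shown below: each hypothesis in a pre-specified sequence is tested at the full alpha level only if all preceding hypotheses were rejected. The enhanced mixture-based construction described in the paper is considerably more general and is not reproduced; the p-values used are hypothetical.

```python
# Minimal serial gatekeeping sketch: hypotheses are tested in a pre-specified
# order at the full alpha level, and testing stops at the first failure.
# (The mixture-based gatekeeping procedures in the paper handle far more
# general logical restrictions; this shows only the basic serial rule.)
def serial_gatekeeping(pvalues, alpha=0.025):
    decisions = []
    for p in pvalues:
        if p <= alpha:
            decisions.append("reject")
        else:
            decisions.append("fail to reject")
            # All hypotheses after the first failure are automatically retained.
            decisions.extend(["fail to reject"] * (len(pvalues) - len(decisions)))
            break
    return decisions

# Hypothetical p-values for primary, key secondary, and tertiary endpoints.
print(serial_gatekeeping([0.010, 0.030, 0.004]))
# -> ['reject', 'fail to reject', 'fail to reject']
```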


Subject(s)
Clinical Trials, Phase III as Topic/statistics & numerical data , Data Interpretation, Statistical , Endpoint Determination/methods , Humans , Models, Statistical
19.
Stat Med ; 36(28): 4446-4454, 2017 Dec 10.
Article in English | MEDLINE | ID: mdl-28762525

ABSTRACT

This paper deals with the general topic of subgroup analysis in late-stage clinical trials with emphasis on multiplicity considerations. The discussion begins with multiplicity issues arising in the context of exploratory subgroup analysis, including principled approaches to subgroup search that are applied as part of subgroup exploration exercises as well as in adaptive biomarker-driven designs. Key considerations in confirmatory subgroup analysis based on one or more pre-specified patient populations are reviewed, including a survey of multiplicity adjustment methods recommended in multi-population phase III clinical trials. Guidelines for interpretation of significant findings in several patient populations are introduced to facilitate the decision-making process and achieve consistent labeling across development programs. Copyright © 2017 John Wiley & Sons, Ltd.


Subject(s)
Clinical Trials as Topic/methods , Research Design , Biomarkers , Decision Theory , Endpoint Determination , Guidelines as Topic , Humans , Sample Size , Statistics, Nonparametric
20.
Stat Med ; 36(1): 136-196, 2017 01 15.
Article in English | MEDLINE | ID: mdl-27488683

ABSTRACT

It is well known that both the direction and magnitude of the treatment effect in clinical trials are often affected by baseline patient characteristics (generally referred to as biomarkers). Characterization of treatment effect heterogeneity plays a central role in the field of personalized medicine and facilitates the development of tailored therapies. This tutorial focuses on a general class of problems arising in data-driven subgroup analysis, namely, identification of biomarkers with strong predictive properties and patient subgroups with desirable characteristics such as improved benefit and/or safety. Limitations of ad-hoc approaches to biomarker exploration and subgroup identification in clinical trials are discussed, and the ad-hoc approaches are contrasted with principled approaches to exploratory subgroup analysis based on recent advances in machine learning and data mining. A general framework for evaluating predictive biomarkers and identification of associated subgroups is introduced. The tutorial provides a review of a broad class of statistical methods used in subgroup discovery, including global outcome modeling methods, global treatment effect modeling methods, optimal treatment regimes, and local modeling methods. Commonly used subgroup identification methods are illustrated using two case studies based on clinical trials with binary and survival endpoints. Copyright © 2016 John Wiley & Sons, Ltd.


Subject(s)
Biomarkers/analysis , Biostatistics , Clinical Trials as Topic/statistics & numerical data , Research Design , Data Mining , Humans , Precision Medicine