1.
Pharm Stat ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38708672

ABSTRACT

What can be considered an appropriate statistical method for the primary analysis of a randomized clinical trial (RCT) with a time-to-event endpoint when we anticipate non-proportional hazards owing to a delayed effect? This question has been the subject of much recent debate. The standard approach is a log-rank test and/or a Cox proportional hazards model. Alternative methods have been explored in the statistical literature, such as weighted log-rank tests and tests based on the Restricted Mean Survival Time (RMST). While weighted log-rank tests can achieve high power compared to the standard log-rank test, some choices of weights may lead to type I error inflation under particular conditions. In addition, they are not linked to a mathematically unambiguous summary measure. Test statistics based on the RMST, on the other hand, allow one to investigate the average difference between two survival curves up to a pre-specified time point τ, a mathematically unambiguous summary measure. However, by emphasizing differences prior to τ, such test statistics may not fully capture the benefit of a new treatment in terms of long-term survival. In this article, we introduce a graphical approach for direct comparison of weighted log-rank tests and tests based on the RMST. This new perspective allows a more informed choice of the analysis method, going beyond power and type I error comparison.
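
A minimal sketch of the RMST comparison described above, assuming the usual Kaplan-Meier plug-in estimate: each arm's survival curve is estimated and integrated up to the pre-specified τ, and the difference of the two areas is reported. The simulated data, the choice τ = 24 and the helper names (`km_curve`, `rmst`) are illustrative, not from the article.

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier estimate: unique event times and survival just after each."""
    times = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in times:
        d = np.sum((time == t) & (event == 1))   # events at t
        n = np.sum(time >= t)                    # number at risk at t
        s *= 1.0 - d / n
        surv.append(s)
    return times, np.array(surv)

def rmst(time, event, tau):
    """Restricted mean survival time: area under the KM curve from 0 to tau."""
    t, s = km_curve(time, event)
    knots = np.concatenate(([0.0], t[t < tau], [tau]))
    heights = np.concatenate(([1.0], s[t < tau]))   # survival is 1 before the first event
    return float(np.sum(np.diff(knots) * heights))

# illustrative data (months) and a pre-specified tau
rng = np.random.default_rng(1)
t_ctrl, t_trt = rng.exponential(12, 100), rng.exponential(18, 100)
c_ctrl, c_trt = rng.uniform(0, 36, 100), rng.uniform(0, 36, 100)
time_c, ev_c = np.minimum(t_ctrl, c_ctrl), (t_ctrl <= c_ctrl).astype(int)
time_t, ev_t = np.minimum(t_trt, c_trt), (t_trt <= c_trt).astype(int)
tau = 24.0
print("RMST difference up to tau:", rmst(time_t, ev_t, tau) - rmst(time_c, ev_c, tau))
```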

2.
Biom J ; 65(4): e2200126, 2023 04.
Article in English | MEDLINE | ID: mdl-36732918

ABSTRACT

Delayed separation of survival curves is a common occurrence in confirmatory studies in immuno-oncology. Many novel statistical methods that aim to efficiently capture potential long-term survival improvements have been proposed in recent years. However, the vast majority do not consider stratification, which is a major limitation considering that most large confirmatory studies currently employ a stratified primary analysis. In this article, we combine recently proposed weighted log-rank tests that have been designed to work well under a delayed separation of survival curves, with stratification by a baseline variable. The aim is to increase the efficiency of the test when the stratifying variable is highly prognostic for survival. As there are many potential ways to combine the two techniques, we compare several possibilities in an extensive simulation study. We also apply the techniques retrospectively to two recent randomized clinical trials.
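
One straightforward way to combine a weighted log-rank test with stratification, sketched below, is to sum the weighted observed-minus-expected contributions and their variances across strata and form a single Z statistic. The article compares several possible combinations; this is only one illustrative variant, and the `weights_fn` argument (a function of time) is an assumed interface rather than the authors' specification.

```python
import numpy as np
from scipy.stats import norm

def weighted_oe(time, event, group, weights_fn):
    """Weighted observed-minus-expected events in group 1, and its variance."""
    u = v = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = np.sum((time == t) & (event == 1))
        d1 = np.sum((time == t) & (event == 1) & (group == 1))
        w = weights_fn(t)
        u += w * (d1 - d * n1 / n)
        if n > 1:
            v += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return u, v

def stratified_weighted_logrank(strata, weights_fn):
    """Sum per-stratum contributions and form one Z statistic.

    `strata` is a list of (time, event, group) arrays, one entry per stratum."""
    U = V = 0.0
    for time, event, group in strata:
        u, v = weighted_oe(np.asarray(time), np.asarray(event),
                           np.asarray(group), weights_fn)
        U, V = U + u, V + v
    z = U / np.sqrt(V)
    return z, 2 * norm.sf(abs(z))   # two-sided p-value
```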


Subject(s)
Neoplasms , Humans , Retrospective Studies , Computer Simulation , Medical Oncology , Survival Analysis , Proportional Hazards Models
3.
JAMA Oncol ; 9(4): 571-572, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36757691
4.
BMC Med Res Methodol ; 22(1): 228, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35971069

ABSTRACT

BACKGROUND: Platform trials can evaluate the efficacy of several experimental treatments compared to a control. The number of experimental treatments is not fixed, as arms may be added or removed as the trial progresses. Platform trials are more efficient than independent parallel group trials because they share control groups. However, for a treatment entering the trial at a later time point, the control group is divided into concurrent controls, consisting of patients randomised to control while that treatment arm is in the platform, and non-concurrent controls, consisting of patients randomised before the arm entered. Using non-concurrent controls in addition to concurrent controls can improve the trial's efficiency by increasing power and reducing the required sample size, but can introduce bias due to time trends. METHODS: We focus on a platform trial with two treatment arms and a common control arm. Assuming that the second treatment arm is added at a later time, we assess the robustness of recently proposed model-based approaches to adjust for time trends when utilizing non-concurrent controls. In particular, we consider approaches where time trends are modeled either as linear in time or as a step function, with steps at time points where treatments enter or leave the platform trial. For trials with continuous or binary outcomes, we investigate the type 1 error rate and power of testing the efficacy of the newly added arm, as well as the bias and root mean squared error of treatment effect estimates, under a range of scenarios. In addition to scenarios where time trends are equal across arms, we investigate settings with different time trends or time trends that are not additive on the scale of the model. RESULTS: A step function model, fitted on data from all treatment arms, gives increased power while controlling the type 1 error, as long as the time trends are equal for the different arms and additive on the model scale. This holds even if the shape of the time trend deviates from a step function, provided patients are allocated to arms by block randomisation. However, if time trends differ between arms or are not additive to the treatment effects on the scale of the model, the type 1 error rate may be inflated. CONCLUSIONS: The efficiency gained by using step function models to incorporate non-concurrent controls can outweigh the potential risk of bias, especially in settings with small sample sizes. Such biases may arise if the model assumptions of equality and additivity of the time trends are not satisfied. However, the specifics of the trial, the scientific plausibility of different time trends, and the robustness of the results should be carefully considered.
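
The step-function adjustment described in the METHODS can be sketched as an ordinary regression of the outcome on treatment indicators plus a factor for the recruitment period, fitted on data from all arms including the non-concurrent controls. The simulated data, effect sizes and the use of statsmodels below are illustrative assumptions; only the general model structure follows the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Illustrative platform trial: arm1 and control recruit in periods 1 and 2;
# arm2 enters in period 2, so period-1 controls are non-concurrent for arm2.
n_half = 150
arm = np.concatenate([rng.choice(["control", "arm1"], size=n_half),
                      rng.choice(["control", "arm1", "arm2"], size=n_half)])
period = np.repeat([1, 2], n_half)
y = 0.5 * (period == 2) + 0.4 * (arm == "arm2") + rng.normal(size=2 * n_half)

df = pd.DataFrame({"y": y, "arm": arm, "period": period})

# Step-function adjustment: treatment indicators plus a period factor,
# fitted on all arms, including the non-concurrent period-1 controls.
fit = smf.ols("y ~ C(arm, Treatment(reference='control')) + C(period)", data=df).fit()
print(fit.params)
```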


Subject(s)
Sample Size , Bias , Humans
5.
Clin Trials ; 19(2): 201-210, 2022 04.
Article in English | MEDLINE | ID: mdl-35257619

ABSTRACT

BACKGROUND: A common feature of many recent trials evaluating the effects of immunotherapy on survival is that non-proportional hazards can be anticipated at the design stage. This raises the possibility of using a statistical method tailored to testing the purported long-term benefit, rather than applying the more standard log-rank test and/or Cox model. Many such proposals have been made in recent years, but there remains a lack of practical guidance on implementation, particularly in the context of group-sequential designs. In this article, we aim to fill this gap. METHODS: We illustrate how the POPLAR trial, which compared immunotherapy versus chemotherapy in non-small-cell lung cancer, might have been re-designed to be more robust to the presence of a delayed effect, using the modestly weighted log-rank test in a group-sequential setting. CONCLUSION: We provide step-by-step instructions on how to analyse a hypothetical realization of the trial, based on this new design. Basic theory on weighted log-rank tests and group-sequential methods is covered, and an accompanying R package (including a vignette) is provided.
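
As a rough illustration of the modestly weighted log-rank test mentioned above, one commonly cited formulation sets the weight at event time t_j to 1 / max{Ŝ(t_j−), Ŝ(t*)}, where Ŝ is the pooled Kaplan-Meier estimate and t* is a pre-specified delay parameter. The sketch below computes these weights only; it is not the accompanying R package, and the exact definition and test construction should be taken from the article. The weights would then enter a weighted log-rank statistic of the usual observed-minus-expected form.

```python
import numpy as np

def pooled_km(time, event):
    """Pooled (both arms together) Kaplan-Meier: event times, S(t-) and S(t)."""
    ts = np.unique(time[event == 1])
    s_minus, s_at, s = [], [], 1.0
    for t in ts:
        s_minus.append(s)
        d = np.sum((time == t) & (event == 1))
        n = np.sum(time >= t)
        s *= 1.0 - d / n
        s_at.append(s)
    return ts, np.array(s_minus), np.array(s_at)

def modest_weights(time, event, t_star):
    """w(t_j) = 1 / max(S(t_j-), S(t_star)): weights grow as the pooled survival
    curve drops, but are capped once the pre-specified time t_star is passed."""
    ts, s_minus, s_at = pooled_km(time, event)
    below = ts <= t_star
    s_star = s_at[below][-1] if below.any() else 1.0
    return ts, 1.0 / np.maximum(s_minus, s_star)
```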


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Carcinoma, Non-Small-Cell Lung/drug therapy , Humans , Immunotherapy , Lung Neoplasms/drug therapy , Proportional Hazards Models , Survival Analysis
6.
Pharm Stat ; 20(3): 512-527, 2021 05.
Article in English | MEDLINE | ID: mdl-33350587

ABSTRACT

A fundamental concept in two-arm non-parametric survival analysis is the comparison of observed versus expected numbers of events on one of the treatment arms (the choice of which arm is arbitrary), where the expectation is taken assuming that the true survival curves in the two arms are identical. This concept is at the heart of the counting-process theory that provides a rigorous basis for methods such as the log-rank test. It is natural, therefore, to maintain this perspective when extending the log-rank test to deal with non-proportional hazards, for example, by considering a weighted sum of the "observed - expected" terms, where larger weights are given to time periods where the hazard ratio is expected to favor the experimental treatment. In doing so, however, one may stumble across some rather subtle issues, related to difficulties in the interpretation of hazard ratios, that may lead to strange conclusions. An alternative approach is to view non-parametric survival comparisons as permutation tests. With this perspective, one can easily improve on the efficiency of the log-rank test, while thoroughly controlling the false positive rate. In particular, for the field of immuno-oncology, where researchers often anticipate a delayed treatment effect, sample sizes could be substantially reduced without loss of power.
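
The permutation-test perspective can be sketched generically: under randomisation, treatment labels are exchangeable under the null hypothesis, so any two-sample survival statistic can be re-computed over permuted labels to obtain a p-value. The `statistic` argument below is an assumed interface (it could be a weighted observed-minus-expected score, for example); the number of permutations and the seed are illustrative.

```python
import numpy as np

def permutation_pvalue(statistic, time, event, group, n_perm=2000, seed=0):
    """Permutation p-value for any two-sample survival statistic.

    `statistic(time, event, group)` must return a scalar.  Re-randomising the
    group labels is justified here by the randomised treatment allocation."""
    rng = np.random.default_rng(seed)
    obs = statistic(time, event, group)
    perm = np.array([statistic(time, event, rng.permutation(group))
                     for _ in range(n_perm)])
    # include the observed statistic in the reference set (two-sided p-value)
    return (1 + np.sum(np.abs(perm) >= abs(obs))) / (1 + n_perm)
```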


Subject(s)
Medical Oncology , Neoplasms , Humans , Neoplasms/therapy , Proportional Hazards Models , Sample Size , Survival Analysis
7.
Stat Methods Med Res ; 29(10): 2945-2957, 2020 10.
Article in English | MEDLINE | ID: mdl-32223528

ABSTRACT

An important step in the development of targeted therapies is the identification and confirmation of sub-populations in which the treatment has a positive effect compared to a control. These sub-populations are often based on continuous biomarkers measured at baseline. For example, patients can be classified into biomarker-low and biomarker-high subgroups, defined via a threshold on the continuous biomarker. However, if insufficient information on the biomarker is available, the a priori choice of the threshold can be challenging, and it has been proposed to consider several thresholds and to apply appropriate multiple testing procedures to test for a treatment effect in the corresponding subgroups while controlling the family-wise type 1 error rate. In this manuscript, we propose a framework to select optimal thresholds and corresponding optimized multiple testing procedures that maximize the expected power to identify at least one subgroup with a positive treatment effect. Optimization is performed over a prior on a family of models, modelling the relation of the biomarker with the expected outcome under treatment and under control. We find that, for the scenarios considered, three to four thresholds give the optimal power. If the prior belief is that the treatment has a positive effect only in a small subgroup, additional optimization of the spacing of the thresholds may result in a large benefit. The procedure is illustrated with a clinical trial example in depression.
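
A deliberately simplified sketch of testing several threshold-defined subgroups: each candidate threshold defines a biomarker-high subgroup, the treatment effect is tested in that subgroup, and a multiplicity adjustment is applied across thresholds. Bonferroni is used below purely as a placeholder; the manuscript optimises both the thresholds and the multiple testing procedure, which this sketch does not attempt. The function names and interfaces are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_ind

def threshold_subgroup_tests(biomarker, outcome, treated, thresholds):
    """Test the treatment effect in each biomarker-high subgroup defined by a
    threshold, then apply a simple Bonferroni adjustment across thresholds.

    `treated` is a boolean array; returns (threshold, subgroup size, adjusted p)."""
    results = []
    for c in thresholds:
        sub = biomarker >= c
        p = ttest_ind(outcome[sub & treated], outcome[sub & ~treated]).pvalue
        results.append((c, int(sub.sum()), min(1.0, p * len(thresholds))))
    return results

# illustrative use with simulated data and three candidate thresholds
rng = np.random.default_rng(2)
bm, trt = rng.uniform(0, 1, 400), rng.integers(0, 2, 400).astype(bool)
y = 0.5 * trt * (bm > 0.7) + rng.normal(size=400)   # effect only in biomarker-high patients
print(threshold_subgroup_tests(bm, y, trt, thresholds=[0.3, 0.5, 0.7]))
```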


Subject(s)
Research Design , Biomarkers , Humans , Treatment Outcome
8.
Stat Med ; 38(20): 3782-3790, 2019 09 10.
Article in English | MEDLINE | ID: mdl-31131462

ABSTRACT

We propose a new class of weighted log-rank tests (WLRTs) that control the risk of concluding that a new drug is more efficacious than the standard of care when, in fact, it is uniformly inferior. Perhaps surprisingly, this risk is not controlled by WLRTs in general. Tests from this new class can be constructed to have high power under a delayed-onset treatment effect scenario, while being almost as efficient as the standard log-rank test under proportional hazards.


Subject(s)
Randomized Controlled Trials as Topic/methods , Survival Analysis , Biometry/methods , Computer Simulation , Humans
9.
Cancer Chemother Pharmacol ; 83(4): 787-795, 2019 04.
Article in English | MEDLINE | ID: mdl-30758651

ABSTRACT

PURPOSE: Vistusertib is an orally bioavailable dual target of rapamycin complex (TORC) 1/2 kinase inhibitor currently under clinical investigation in various solid tumour and haematological malignancy settings. The pharmacokinetic, metabolic and excretion profiles of carbon-14 (14C)-labelled vistusertib were characterised in this open-label phase I patient study. METHODS: Four patients with advanced solid malignancies received a single oral solution dose of 14C-labelled vistusertib. Blood, urine, faeces, and saliva samples were collected at various time points during the 8-day in-patient period of the study. Safety and preliminary efficacy were also assessed. RESULTS: 14C-labelled vistusertib was rapidly absorbed following administration (time to maximum concentration (Tmax) < 1.2 h in all subjects). Overall, > 90% of the radioactivity was recovered, the majority as metabolites in faeces (on average 80%, vs. 12% recovered in urine). The majority of the circulating radioactivity (~ 78%) was unchanged vistusertib. Various morpholine-ring oxidation metabolites and an N-methylamide circulated at low concentrations [each < 10% of the area under the concentration-time curve from zero to infinity (AUC0-∞)]. No new or unexpected safety findings were observed; the most common adverse events were nausea and stomatitis. CONCLUSIONS: The pharmacokinetic (PK) profile of vistusertib is similar to that in previous studies using the same dosing regimen in solid malignancy patients. The majority of vistusertib elimination occurred via hepatic metabolic routes.


Subject(s)
Antineoplastic Agents/administration & dosage , Benzamides/administration & dosage , Morpholines/administration & dosage , Neoplasms/drug therapy , Protein Kinase Inhibitors/administration & dosage , Pyrimidines/administration & dosage , Administration, Oral , Aged , Antineoplastic Agents/pharmacokinetics , Area Under Curve , Benzamides/pharmacokinetics , Carbon Radioisotopes , Female , Humans , Male , Mechanistic Target of Rapamycin Complex 1/antagonists & inhibitors , Mechanistic Target of Rapamycin Complex 2/antagonists & inhibitors , Middle Aged , Morpholines/pharmacokinetics , Neoplasms/pathology , Protein Kinase Inhibitors/pharmacokinetics , Pyrimidines/pharmacokinetics
10.
PLoS One ; 11(2): e0146465, 2016.
Article in English | MEDLINE | ID: mdl-26863139

ABSTRACT

Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies.
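
One standard device in this literature for preserving type I error control after a data-dependent design modification is to combine independent stage-wise test statistics with weights fixed at the design stage, for example via the weighted inverse-normal combination sketched below. This is background machinery under stated assumptions, not the specific alternative test proposed in the article; the numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(z_stages, weights):
    """Combine independent stage-wise z statistics with pre-specified weights.

    The weights are fixed at the design stage (their squares sum to 1), so the
    combined statistic remains standard normal under the null hypothesis even
    if the second-stage design was modified at the interim analysis."""
    w = np.asarray(weights, dtype=float)
    assert np.isclose(np.sum(w**2), 1.0)
    z = float(np.sum(w * np.asarray(z_stages)))
    return z, norm.sf(z)   # one-sided p-value

# illustrative: pre-planned 50/50 information split, interim z = 1.1, final-stage z = 2.0
print(inverse_normal_combination([1.1, 2.0], [np.sqrt(0.5), np.sqrt(0.5)]))
```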


Subject(s)
Clinical Trials as Topic/methods , Follow-Up Studies , Humans , Sample Size , Survival Analysis
11.
Stat Med ; 35(12): 1972-84, 2016 05 30.
Article in English | MEDLINE | ID: mdl-26694878

ABSTRACT

Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data.
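
For contrast with the unplanned adjustments the article warns against, a fully algorithmic, pre-planned blinded sample size reassessment rule might look like the sketch below: the nuisance variance is estimated from the pooled blinded interim data and plugged into the standard two-arm formula with the protocol-specified effect size, alpha and power. All numbers and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def blinded_ssr(blinded_interim, delta, alpha=0.025, power=0.9):
    """Pre-planned blinded sample size reassessment for a two-arm normal endpoint.

    The variance is estimated from the pooled (blinded) interim data; the
    assumed effect size delta, alpha and power are fixed in the protocol.
    Returns the recomputed per-arm sample size."""
    sigma2 = np.var(blinded_interim, ddof=1)     # lumped (one-sample) variance estimate
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    return int(np.ceil(2 * sigma2 * (za + zb) ** 2 / delta ** 2))

# illustrative: interim data from both arms pooled without unblinding
rng = np.random.default_rng(3)
print(blinded_ssr(rng.normal(0.1, 1.2, size=80), delta=0.4))
```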


Subject(s)
Randomized Controlled Trials as Topic/methods , Research Design/statistics & numerical data , Sample Size , Single-Blind Method , Clinical Trials, Phase III as Topic/methods , Drug Labeling/standards , Drug Labeling/statistics & numerical data , Endpoint Determination , Fingolimod Hydrochloride/adverse effects , Fingolimod Hydrochloride/therapeutic use , Humans , Models, Statistical , Multiple Sclerosis, Relapsing-Remitting/drug therapy , Random Allocation , Randomized Controlled Trials as Topic/statistics & numerical data
12.
Stat Methods Med Res ; 25(2): 716-27, 2016 04.
Article in English | MEDLINE | ID: mdl-23242385

ABSTRACT

Multi-arm multi-stage designs can improve the efficiency of the drug-development process by evaluating multiple experimental arms against a common control within one trial. This reduces the number of patients required compared to a series of trials testing each experimental arm separately against control. By allowing for multiple stages, experimental treatments can be eliminated early from the study if they are unlikely to be significantly better than control. Using the TAILoR trial as a motivating example, we explore a broad range of statistical issues related to multi-arm multi-stage trials, including a comparison of different ways to power a multi-arm multi-stage trial; choosing the allocation ratio of the control group relative to the other experimental arms; the consequences of adding additional experimental arms during a multi-arm multi-stage trial, and how one might control the type I error rate when this is necessary; and modifying the stopping boundaries of a multi-arm multi-stage design to account for unknown variance in the treatment outcome. Multi-arm multi-stage trials represent a large financial investment, so considering their design carefully is important to ensure that they are efficient and have a good chance of succeeding.
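
As one concrete example of the allocation-ratio question raised above, a frequently cited heuristic allocates roughly √K control patients per experimental-arm patient when K experimental arms share a control. The sketch below computes per-arm sizes under a simple normal approximation for a single pairwise comparison, without any multiplicity adjustment or staging; it is an illustration of the heuristic, not necessarily the rule recommended in the article.

```python
import numpy as np
from scipy.stats import norm

def pairwise_n(delta, sigma, alpha=0.025, power=0.9, ratio=1.0):
    """Per-experimental-arm sample size for one comparison against control when
    `ratio` control patients are allocated per experimental-arm patient."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    n_exp = (1 + 1 / ratio) * (sigma * (za + zb) / delta) ** 2
    return int(np.ceil(n_exp))

K = 4                              # number of experimental arms
ratio = np.sqrt(K)                 # sqrt(K):1 control allocation heuristic
n_exp = pairwise_n(delta=0.5, sigma=1.0, ratio=ratio)
n_ctrl = int(np.ceil(ratio * n_exp))
print(f"per experimental arm: {n_exp}, control: {n_ctrl}, total: {K * n_exp + n_ctrl}")
```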


Subject(s)
Randomized Controlled Trials as Topic/methods , Research Design , Humans
13.
Trials ; 16: 522, 2015 Nov 16.
Article in English | MEDLINE | ID: mdl-26573827

ABSTRACT

BACKGROUND: Visceral leishmaniasis (VL) is a parasitic disease transmitted by sandflies and is fatal if left untreated. Phase II trials of new treatment regimens for VL are primarily carried out to evaluate safety and efficacy, while pharmacokinetic data are also important to inform future combination treatment regimens. The efficacy of VL treatments is evaluated at two time points: initial cure, when treatment is completed, and definitive cure, commonly 6 months after the end of treatment, to allow for slow responses to treatment and the detection of relapses. This paper investigates a generalization of the triangular design to impose a minimum sample size for pharmacokinetic or other analyses, together with methods to estimate efficacy at extended follow-up that account for the sequential design and for changes in cure status during extended follow-up. METHODS: We provide R functions that generalize the triangular design to impose a minimum sample size before allowing stopping for efficacy. For estimation of efficacy at a second, extended follow-up time, the performance of a shrinkage estimator (SHE), a probability tree estimator (PTE) and the maximum likelihood estimator (MLE) was assessed by simulation. RESULTS: The SHE and the PTE are viable approaches for estimating efficacy at extended follow-up, although the SHE performed better than the PTE: its bias and root mean square error were lower and its coverage probabilities higher. CONCLUSIONS: The generalization of the triangular design is simple to implement for adaptations that meet requirements for pharmacokinetic analyses. Using the simple MLE approach to estimate efficacy at extended follow-up will lead to biased results, generally over-estimating treatment success. The SHE is recommended in trials of two or more treatments. The PTE is an acceptable alternative for one-arm trials or where use of the SHE is not possible due to computational complexity. TRIAL REGISTRATION: NCT01067443, February 2010.
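
To see why the naive MLE over-estimates treatment success after stopping for efficacy, consider the deliberately simplified two-stage binary design simulated below (not the triangular design itself, and with made-up numbers): trials that stop early tend to do so on the back of a lucky first stage, and that luck is never diluted by further data. The SHE and PTE discussed in the article are designed to correct this kind of bias.

```python
import numpy as np

rng = np.random.default_rng(11)
p_true, n1, n2, stop_at, n_sim = 0.70, 20, 20, 16, 50_000

estimates = []
for _ in range(n_sim):
    x1 = rng.binomial(n1, p_true)
    if x1 >= stop_at:                      # stop early for efficacy
        estimates.append(x1 / n1)
    else:                                  # otherwise continue to the second stage
        x2 = rng.binomial(n2, p_true)
        estimates.append((x1 + x2) / (n1 + n2))

print(f"true cure rate {p_true}, mean of naive MLE {np.mean(estimates):.3f}")
```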


Subject(s)
Leishmaniasis, Visceral/drug therapy , Models, Statistical , Research Design/statistics & numerical data , Trypanocidal Agents/pharmacokinetics , Computer Simulation , Data Interpretation, Statistical , Humans , Kenya , Leishmaniasis, Visceral/diagnosis , Leishmaniasis, Visceral/parasitology , Likelihood Functions , Probability , Recurrence , Remission Induction , Sample Size , Sudan , Treatment Outcome , Trypanocidal Agents/administration & dosage
14.
Br J Haematol ; 167(4): 547-53, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25142093

ABSTRACT

Interindividual variations in the dose requirements of oral vitamin K antagonists (VKA) are attributed to several factors, including genetic variant alleles of vitamin K epoxide reductase complex subunit 1 (VKORC1) and cytochrome P450 2C9 (CYP2C9), but also interactions with co-medications. In this context, proton pump inhibitor (PPI)-related alterations of VKA maintenance dose requirements have been reported. The present investigation aimed to test for an interaction between oral VKA therapy and PPIs in relation to the CYP2C9 genotype. Median weekly stable VKA dose requirements over 1 year were recorded in 69 patients. Patients were genotyped for CYP2C9*2, CYP2C9*3, VKORC1 c.-1639G>A and VKORC1 c.174-136C>T and assessed for an association of PPI use with total VKA maintenance dose requirements. PPI users with CYP2C9 genetic variants required significantly lower weekly VKA maintenance doses than those with the wild-type genotype (t-test: P = 0·02). In contrast, in subjects without PPI use, the CYP2C9 genotype had no significant influence on oral VKA dose requirements. Further, the combined CYP2C9/VKORC1 genotype was a significant predictor of VKA dose requirements [linear regression: estimate -1·47, standard error 0·58 (P = 0·01)]. In conclusion, in carriers of CYP2C9 gene variants, the interference with VKA metabolism is modified by PPI co-medication and the VKORC1 genotype. Prior knowledge of the genetic profile, and awareness of the potential for severe over-anticoagulation under PPI co-medication, could contribute to safer and more personalized VKA pharmacotherapy.


Subject(s)
Anticoagulants/administration & dosage , Cytochrome P-450 CYP2C9/genetics , Genotype , Proton Pump Inhibitors/administration & dosage , Vitamin K Epoxide Reductases/genetics , Vitamin K/antagonists & inhibitors , Administration, Oral , Aged , Cytochrome P-450 CYP2C9/metabolism , Female , Follow-Up Studies , Humans , Male , Middle Aged , Pilot Projects , Thrombosis/drug therapy , Thrombosis/genetics , Thrombosis/metabolism , Vitamin K Epoxide Reductases/metabolism
15.
Pharm Stat ; 10(4): 341-6, 2011.
Article in English | MEDLINE | ID: mdl-22328326

ABSTRACT

In a clinical trial, response-adaptive randomization (RAR) uses accumulating data to weight the randomization of the remaining patients in favour of the better-performing treatment. The aim is to reduce the number of failures within the trial. However, many well-known RAR designs, in particular the randomized play-the-winner rule (RPWR), have a highly myopic structure that has sometimes led to unfortunate randomization sequences when used in practice. This paper introduces random permuted blocks into two RAR designs, the RPWR and sequential maximum likelihood estimation, for trials with a binary endpoint. Allocation ratios within each block are restricted to 1:1, 2:1 or 3:1, preventing unfortunate randomization sequences. Exact calculations are performed to determine error rates and the expected number of failures across a range of trial scenarios. The results show that, compared with equal allocation, the blocked RAR designs give reductions in the expected number of failures similar to those of their unmodified counterparts. The reductions are typically modest under the alternative hypothesis but become more impressive if the treatment effect exceeds the clinically relevant difference.
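
A sketch of the blocked response-adaptive idea: after each block, the observed success rates determine the next block's allocation ratio, restricted to 1:1, 2:1 or 3:1, and patients within the block are then assigned by random permutation. The mapping from the estimated difference to the ratio, and the smoothing of the rates, are illustrative assumptions rather than the published rule.

```python
import numpy as np

def next_block_ratio(successes, failures):
    """Pick the next block's allocation ratio (treatment:control) from {1:1, 2:1, 3:1},
    favouring the arm with the higher observed success rate.  The cut-offs used to
    map the estimated difference to a ratio are illustrative, not the published rule."""
    rates = [(s + 0.5) / (s + f + 1) for s, f in zip(successes, failures)]
    diff = rates[0] - rates[1]            # treatment minus control
    if abs(diff) < 0.10:
        return (1, 1)
    ratio = (3, 1) if abs(diff) >= 0.25 else (2, 1)
    return ratio if diff > 0 else ratio[::-1]

def block_allocation(ratio, block_size=8, seed=0):
    """Random permuted block honouring the chosen ratio (1 = treatment, 0 = control)."""
    rng = np.random.default_rng(seed)
    per_unit = block_size // sum(ratio)
    block = np.array([0] * (ratio[1] * per_unit) + [1] * (ratio[0] * per_unit))
    return rng.permutation(block)

print(next_block_ratio(successes=[14, 9], failures=[6, 11]))   # e.g. (2, 1)
print(block_allocation((2, 1), block_size=9))
```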


Subject(s)
Endpoint Determination , Models, Statistical , Randomized Controlled Trials as Topic/methods , Humans , Likelihood Functions , Research Design , Treatment Failure