Results 1 - 20 of 27
1.
Pharm Stat ; 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38992926

ABSTRACT

Clinical trials with continuous primary endpoints typically measure outcomes at baseline, at a fixed timepoint (denoted Tmin), and at intermediate timepoints. The analysis is commonly performed using the mixed model repeated measures method. It is sometimes expected that the effect size will be larger with follow-up longer than Tmin, but extending the follow-up for all patients delays trial completion. We propose an alternative trial design and analysis method that potentially increases statistical power without extending the trial duration or increasing the sample size. We propose following the last enrolled patient until Tmin, with earlier enrollees having variable follow-up durations up to a maximum of Tmax. The sample size at Tmax will be smaller than at Tmin, and due to staggered enrollment, data missing at Tmax will be missing completely at random. For analysis, we propose an alpha-adjusted procedure based on the smaller of the p values at Tmin and Tmax, termed minP. This approach provides the highest power when the powers at Tmin and Tmax are similar. If the powers at Tmin and Tmax differ substantially, the power of minP is modestly reduced compared with the larger of the two. Rare disease trials, given the limited size of the patient population, may benefit most from this design.
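The minP decision rule described above can be sketched in a few lines. The abstract does not give the paper's exact alpha adjustment (which would exploit the correlation between the two analyses), so the conservative Bonferroni split below is a stand-in:

```python
def minp_reject(p_tmin, p_tmax, alpha=0.05):
    """Illustrative alpha-adjusted minP decision rule (sketch).

    Rejects the null if the smaller of the two p-values (analysis at
    Tmin and analysis at Tmax) clears an adjusted threshold. The paper
    derives an adjustment that accounts for the correlation between the
    two tests; the Bonferroni split alpha/2 here is a conservative
    stand-in for that adjustment.
    """
    return min(p_tmin, p_tmax) < alpha / 2
```

Because the Tmax analysis uses a subset of the Tmin patients, the two p-values are positively correlated, so the true adjusted threshold lies somewhere between alpha/2 and alpha.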

2.
Mol Genet Metab ; 139(3): 107612, 2023 07.
Article in English | MEDLINE | ID: mdl-37245378

ABSTRACT

Clinical trial development in rare diseases poses significant study design and methodology challenges, such as disease heterogeneity and appropriate patient selection, identification and selection of key endpoints, decisions on study duration, choice of control groups, selection of appropriate statistical analyses, and patient recruitment. Therapeutic development in organic acidemias (OAs) shares many challenges with other inborn errors of metabolism, such as incomplete understanding of natural history, heterogeneous disease presentations, the requirement for sensitive outcome measures, and difficulty recruiting a sufficient sample of participants. Here, we review strategies for the successful development of a clinical trial to evaluate treatment response in propionic and methylmalonic acidemias. Specifically, we discuss crucial decisions that may significantly impact success of the study, including patient selection, identification and selection of endpoints, determination of the study duration, consideration of control groups including natural history controls, and selection of appropriate statistical analyses. The significant challenges associated with designing a clinical trial in rare disease can sometimes be successfully met through strategic engagement with experts in the rare disease, seeking regulatory and biostatistical guidance, and early involvement of patients and families.


Subject(s)
Amino Acid Metabolism, Inborn Errors , Propionic Acidemia , Humans , Propionic Acidemia/genetics , Propionic Acidemia/therapy , Rare Diseases/therapy , Amino Acid Metabolism, Inborn Errors/genetics , Amino Acid Metabolism, Inborn Errors/therapy , Research Design , Methylmalonic Acid
3.
Front Neurol ; 14: 1098454, 2023.
Article in English | MEDLINE | ID: mdl-36970548

ABSTRACT

Substantial challenges in study design and methodology exist during clinical trial development to examine treatment response in patients with a rare disease, especially those with predominant central nervous system involvement and heterogeneity in clinical manifestations and natural history. Here we discuss crucial decisions which may significantly impact success of the study, including patient selection and recruitment, identification and selection of endpoints, determination of the study duration, consideration of control groups including natural history controls, and selection of appropriate statistical analyses. We review strategies for the successful development of a clinical trial to evaluate treatment of a rare disease with a focus on inborn errors of metabolism (IEMs) that present with movement disorders. The strategies presented using pantothenate kinase-associated neurodegeneration (PKAN) as the rare disease example can be applied to other rare diseases, particularly IEMs with movement disorders (e.g., other neurodegeneration with brain iron accumulation disorders, lysosomal storage disorders). The significant challenges associated with designing a clinical trial in rare disease can sometimes be successfully met through strategic engagement with experts in the rare disease, seeking regulatory and biostatistical guidance, and early involvement of patients and families. In addition to these strategies, we discuss the urgent need for a paradigm shift within the regulatory processes to help accelerate medical product development and bring new innovations and advances to patients with rare neurodegenerative diseases who need them earlier in disease progression and prior to clinical manifestations.

4.
Stat Med ; 41(14): 2691-2692, 2022 06 30.
Article in English | MEDLINE | ID: mdl-35322880
5.
Stat Med ; 41(6): 950-963, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35084052

ABSTRACT

The win ratio composite endpoint, which organizes the components of the composite hierarchically, is becoming popular in late-stage clinical trials. The method compares patients in a pairwise manner, starting with the endpoint highest in priority (e.g., cardiovascular death). If the comparison is a tie, the endpoint next highest in priority (e.g., hospitalization for heart failure) is compared, and so on. The sample size is usually calculated through complex simulations because the literature lacks a simple sample size formula. This article provides a formula that depends on the probability that a randomly selected patient from one group does better than a randomly selected patient from the other group, and on the probability of a tie. We compare the published 95% confidence intervals, which require patient-level data, with those calculated from the formula, which requires only summary-level data, for 17 composite or single win ratio endpoints. The two sets of results are similar, and simulations show the sample size formula performs well. The formula provides important insights: it shows when adding an endpoint to the hierarchy can increase power even if the added endpoint has low power by itself, and it provides relevant information for modifying an ongoing blinded trial if necessary. The formula allows a non-specialist to quickly determine the size of a trial with a win ratio endpoint, whose use is expected to increase over time.
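The hierarchical pairwise comparison behind the win ratio can be sketched directly from the description above. The patient record layout and the two endpoint functions below are hypothetical illustrations, not the paper's data:

```python
def compare_pair(a, b, endpoints):
    # Walk the endpoints in priority order; the first non-tie decides.
    # Each endpoint returns +1 if patient a wins, -1 if b wins, 0 for a tie.
    for endpoint in endpoints:
        result = endpoint(a, b)
        if result != 0:
            return result
    return 0  # tied on every endpoint

def win_ratio(treated, control, endpoints):
    # Win ratio = total wins / total losses over all treated-control pairs.
    wins = losses = 0
    for a in treated:
        for b in control:
            r = compare_pair(a, b, endpoints)
            if r > 0:
                wins += 1
            elif r < 0:
                losses += 1
    return wins / losses

# Hypothetical endpoints: longer survival wins; on a survival tie,
# fewer heart-failure hospitalizations wins (censoring is ignored here).
def survival(a, b):
    return (a["surv"] > b["surv"]) - (a["surv"] < b["surv"])

def hospitalizations(a, b):
    return (a["hosp"] < b["hosp"]) - (a["hosp"] > b["hosp"])
```

In a real trial the survival comparison must handle censoring (a pair is a tie when neither patient can be confirmed to have survived longer), which is what makes simulation-based sample size calculation the historical default.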


Subject(s)
Heart Failure , Research Design , Heart Failure/drug therapy , Hospitalization , Humans , Sample Size
6.
Ther Innov Regul Sci ; 54(3): 717-722, 2020 05.
Article in English | MEDLINE | ID: mdl-33301156

ABSTRACT

Although checklists and guidelines for the reporting and interpretation of clinical trial results are of immense value, there is still room for biased presentation in journal publications. Two important sources of bias remain: (1) The absence of a principle guiding the display of point estimates in abstracts. For example, bias arises, even for a primary endpoint, when the reported point estimate is preferentially selected and does not correspond to the prespecified method of analysis. The benefit of treatment on an endpoint is often communicated through point estimates, and as abstracts contain the main takeaways, establishing ground rules for what to include and what not to include is crucial. (2) The commingling, in the body of the publication, of results from α-controlled endpoints, non-α-controlled endpoints, and post hoc analyses. The total number of non-α-controlled and post hoc analyses is unknown, and blending a favored selection of these with α-controlled results provides opportunities to overstate or understate findings as desired. Publicly available results provide the grist for the changes proposed here to improve reporting standards. Additional changes are recommended as well, including a threshold of significance more stringent than 0.05 for non-α-controlled analyses. For safety, the proposal is to display the data via the mean cumulative function graph for prespecified adverse events of interest. The bottom line is that more objective reporting can be achieved if journals establish standards for reporting point estimates in abstracts and require a hierarchical display of results in the main body.


Subject(s)
Clinical Trials as Topic , Research Design , Information Dissemination
7.
Ther Innov Regul Sci ; : 2168479019879099, 2019 Oct 28.
Article in English | MEDLINE | ID: mdl-31658817


8.
Trials ; 18(1): 278, 2017 06 15.
Article in English | MEDLINE | ID: mdl-28619049

ABSTRACT

BACKGROUND: Current regulatory guidance and practice of non-inferiority trials are asymmetric in favor of the test treatment (Test) over the reference treatment (Control). These trials are designed to compare the relative efficacy of Test to Control by reference to a clinically important margin, M. MAIN TEXT: Non-inferiority trials allow for the conclusion of: (a) non-inferiority of Test to Control if Test is slightly worse than Control but by no more than M; and (b) superiority if Test is slightly better than Control even if it is by less than M. From Control's perspective, (b) should lead to a conclusion of non-inferiority of Control to Test. The logical interpretation ought to be that, while Test is statistically better, it is not clinically superior to Control (since Control should be able to claim non-inferiority to Test). This article makes a distinction between statistical and clinical significance, providing for symmetry in the interpretation of results. Statistical superiority and clinical superiority are achieved, respectively, when the null and the non-inferiority margins are exceeded. We discuss a similar modification to placebo-controlled trials. CONCLUSION: Rules for interpretation should not favor one treatment over another. Claims of statistical or clinical superiority should depend on whether or not the null margin or the clinically relevant margin is exceeded.
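The symmetric interpretation rule can be made concrete from the lower confidence limit of the Test − Control difference (positive favors Test) and the margin M. The thresholds follow the abstract's definitions; this is a sketch of the decision logic, not the article's full procedure:

```python
def interpret(ci_lower, margin):
    """Classify a trial result from the lower confidence limit of the
    Test - Control treatment difference (positive favors Test) and a
    non-inferiority margin M > 0.

    Per the symmetric rule: exceeding the null margin (0) gives
    statistical superiority; exceeding the clinically relevant margin
    (M) gives clinical superiority. A Test that is statistically but
    not clinically superior leaves Control non-inferior to Test.
    """
    if ci_lower > margin:
        return "clinically superior"
    if ci_lower > 0:
        return "statistically superior"
    if ci_lower > -margin:
        return "non-inferior"
    return "inconclusive"
```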


Subject(s)
Drug Approval , Drug Therapy/methods , Equivalence Trials as Topic , Research Design , Anti-Inflammatory Agents, Non-Steroidal/adverse effects , Cardiovascular Diseases/chemically induced , Celecoxib/adverse effects , Cyclooxygenase 2 Inhibitors/adverse effects , Data Interpretation, Statistical , Drug Approval/statistics & numerical data , Drug Therapy/statistics & numerical data , Humans , Ibuprofen/adverse effects , Models, Statistical , Naproxen/adverse effects , Research Design/statistics & numerical data , Risk Assessment , Risk Factors , Treatment Outcome
9.
Pharm Stat ; 16(2): 167-173, 2017 03.
Article in English | MEDLINE | ID: mdl-28133895

ABSTRACT

For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic at each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and power is lost if the assumptions that ensure optimality at each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of statistic, a robust method was developed for nonsequential trials; the concept is analogous to diversifying financial investments to minimize risk. The method is based on combining p values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of two p value combining methods for group sequential trials. The emphasis is on time-to-event trials, although results from less complex trials are also included. The gain or loss in power with the combination method, relative to a single statistic, is asymmetric in the method's favor: depending on the power of each individual test, the combination method can give more power than any single test, or power that is close to that of the most powerful test. The versatility of the method is that it can combine p values from different test statistics at different analysis times. The robustness of the results suggests that inference from group sequential trials can be strengthened with the use of combined tests.


Subject(s)
Clinical Trials as Topic/methods , Data Interpretation, Statistical , Endpoint Determination/methods , Research Design , Clinical Trials as Topic/ethics , Early Termination of Clinical Trials/ethics , Endpoint Determination/ethics , Humans , Models, Statistical , Risk , Time Factors
10.
J Biopharm Stat ; 27(1): 101-110, 2017.
Article in English | MEDLINE | ID: mdl-26891426

ABSTRACT

In clinical trials, some patient subgroups are likely to demonstrate larger effect sizes than other subgroups. For example, the effect size, or informally the benefit with treatment, is often greater in patients with a moderate condition of a disease than in those with a mild condition. A limitation of the usual method of analysis is that it does not incorporate this ordering of effect size by patient subgroup. We propose a test statistic which supplements the conventional test by including this information and simultaneously tests the null hypothesis in pre-specified subgroups and in the overall sample. It results in more power than the conventional test when the differences in effect sizes across subgroups are at least moderately large; otherwise it loses power. The method involves combining p-values from models fit to pre-specified subgroups and the overall sample in a manner that assigns greater weight to subgroups in which a larger effect size is expected. Results are presented for randomized trials with two and three subgroups.
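One standard way to combine p-values while assigning greater weight to subgroups with larger expected effects is a weighted z (Stouffer-type) combination. The abstract does not state the paper's exact combining function, so treat this as an illustration of the weighting idea only:

```python
from statistics import NormalDist

def weighted_z_combination(pvalues, weights):
    # Convert each one-sided p-value to a z-score, combine with
    # pre-specified weights, and convert back to a one-sided p-value.
    # Larger weights go to the subgroups (or the overall sample) where
    # a larger effect size is expected; weights must be fixed before
    # unblinding to preserve the type I error rate.
    nd = NormalDist()
    z = [nd.inv_cdf(1 - p) for p in pvalues]
    combined = sum(w * zi for w, zi in zip(weights, z))
    combined /= sum(w * w for w in weights) ** 0.5
    return 1 - nd.cdf(combined)
```

With equal weights this reduces to the ordinary Stouffer combination; skewing the weights toward a subgroup buys power when that subgroup's effect really is larger, at the cost of power when it is not, mirroring the trade-off described above.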


Subject(s)
Data Interpretation, Statistical , Randomized Controlled Trials as Topic , Research Design , Humans , Sample Size
11.
Stat Methods Med Res ; 26(1): 64-74, 2017 Feb.
Article in English | MEDLINE | ID: mdl-24919832

ABSTRACT

The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect but we do not know which one is the most powerful. Rather than relying on a single p-value, combining p-values from prespecified multiple test statistics can be used for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes treatment by covariate interaction.
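Fisher's combination test, one of the combining functions mentioned above, is simple to compute: the statistic −2 Σ log pᵢ is chi-square with 2k degrees of freedom under the global null, and for even degrees of freedom the chi-square tail probability has an exact closed form, used below:

```python
import math

def fisher_combination(pvalues):
    # Fisher's statistic: -2 * sum(log p_i), distributed chi-square with
    # 2k degrees of freedom under the global null of no treatment effect
    # (assuming the k p-values are independent).
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    k = len(pvalues)
    # Chi-square survival function for even df = 2k (exact closed form):
    #   P(X > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    half = stat / 2.0
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= half / j
        total += term
    return math.exp(-half) * total
```

Note the independence caveat in the comment: p-values from multiple statistics computed on the same trial data are correlated, which is why the randomization-based (permutation) reference distribution discussed above is the safer route for formal inference.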


Subject(s)
Clinical Trials as Topic/methods , Data Interpretation, Statistical , Research Design , Humans , Treatment Outcome
12.
Trials ; 17(1): 332, 2016 07 20.
Article in English | MEDLINE | ID: mdl-27439520

ABSTRACT

The period toward the end of patients' participation in late-stage blinded clinical trials is highly resource-intensive for the sponsor. Consider first a Phase 3 trial. If the trial is a success, the sponsor has to implement the next steps, which might be filing for approval of the drug with the US Food and Drug Administration (FDA). To shorten the interval between trial completion and submission of the package to the FDA, sponsors front-load as much work as possible, at risk. The approach is efficient if the trial succeeds but inefficient if it fails: for a failed trial, the sponsor is unlikely to proceed with the plan that assumed success. Phase 2 trials are also at risk of being inefficient. Many activities, such as planning for drug interaction studies, thorough QT studies, or site selection for Phase 3 trials, are set in motion before completion of the Phase 2 trial, and the work going on in parallel is wasted if the trial fails. The proposal to improve efficiency is to let an independent entity provide the sponsor, at an earlier time, the critical information needed to reevaluate activities that are ongoing in parallel and external to the trial.


Subject(s)
Clinical Trials, Phase II as Topic , Clinical Trials, Phase III as Topic , Humans , United States , United States Food and Drug Administration
14.
Stat Methods Med Res ; 25(4): 1381-92, 2016 08.
Article in English | MEDLINE | ID: mdl-23592715

ABSTRACT

Although there is considerable interest in adverse events observed in clinical trials, projecting adverse event incidence rates in an extended period can be of interest when the trial duration is limited compared to clinical practice. A naïve method for making projections might involve modeling the observed rates into the future for each adverse event. However, such an approach overlooks the information that can be borrowed across all the adverse event data. We propose a method that weights each projection using a shrinkage factor; the adverse event-specific shrinkage is a probability, based on empirical Bayes methodology, estimated from all the adverse event data, reflecting evidence in support of the null or non-null hypotheses. Also proposed is a technique to estimate the proportion of true nulls, called the common area under the density curves, which is a critical step in arriving at the shrinkage factor. The performance of the method is evaluated by projecting from interim data and then comparing the projected results with observed results. The method is illustrated on two data sets.
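In spirit, the shrinkage-weighted projection blends each adverse-event-specific projection with a pooled (null) projection, with weight equal to the estimated probability that the event has no true excess. The function below is a schematic of that weighting only; the paper's empirical Bayes estimator of the shrinkage factor (via the common area under the density curves) is not reproduced here:

```python
def shrunken_projection(ae_projection, null_projection, p_null):
    # p_null: estimated probability, borrowed from the full collection of
    # adverse events via empirical Bayes, that this event supports the
    # null hypothesis of no true excess. Strong evidence for the null
    # pulls the projection toward the pooled null projection; weak
    # evidence leaves it near the event's own observed trend.
    return p_null * null_projection + (1.0 - p_null) * ae_projection
```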


Subject(s)
Antineoplastic Agents/adverse effects , Bayes Theorem , Chickenpox Vaccine/adverse effects , Forecasting/methods , Measles-Mumps-Rubella Vaccine/adverse effects , Clinical Trials, Phase III as Topic , Humans , Incidence , Neoplasms/drug therapy , Vaccines, Combined/adverse effects
16.
Clin Trials ; 10(5): 744-53, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24130201

ABSTRACT

BACKGROUND: With blinded data, several authors have concluded that there is a negligible chance of inferring a non-null treatment effect. The recent Food and Drug Administration (FDA) draft guidance document on adaptive trials, by encouraging blinded sample size reestimation, implies the same. PURPOSE: We derive methods to investigate whether the probability of inferring a treatment effect is much larger than previously thought, and whether that is of concern. METHODS: A statistic is developed that contributes to improving signal detection. Additionally, trials that are overpowered, for reasons external to powering the primary objective, further strengthen the chance of finding a signal. RESULTS: An example of data from a clinical trial shows how revealing a blinded analysis can be. The ability to infer a non-null effect while a blinded trial is ongoing is a serious matter. LIMITATIONS: The methods apply to superiority trials and are of limited use for non-inferiority or equivalence trials. CONCLUSION: It is important, therefore, that guidance documents include clear language to limit or prevent inference from blinded data to maintain trial integrity. Simple steps are proposed to make inference difficult.


Subject(s)
Data Interpretation, Statistical , Double-Blind Method , Randomized Controlled Trials as Topic/methods , Humans , Sample Size , United States , United States Food and Drug Administration
17.
Pharm Stat ; 12(5): 282-90, 2013.
Article in English | MEDLINE | ID: mdl-23922313

ABSTRACT

Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions: rejecting the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value than on a single statistic, particularly when that single statistic is the logrank test, given the cost and complexity of many survival trials.
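The permutation-referenced minimum p-value test can be sketched as follows. The two candidate statistics at the bottom are arbitrary examples, and tie handling in the permutation ranks is simplified:

```python
import numpy as np

def min_p_test(x, y, statistics, n_perm=999, seed=0):
    """Permutation-based minimum p-value test (sketch).

    For each candidate statistic, a two-sided permutation p-value is
    computed; the null is then rejected (or not) by comparing the
    observed minimum p-value with its own permutation distribution,
    which controls the type I error without distributional assumptions.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n = len(x)
    obs = np.abs([s(x, y) for s in statistics])
    perm = np.empty((n_perm, len(statistics)))
    for i in range(n_perm):
        shuffled = rng.permutation(pooled)
        perm[i] = [abs(s(shuffled[:n], shuffled[n:])) for s in statistics]
    # Per-statistic permutation p-values for the observed data ...
    p_obs = (perm >= obs).mean(axis=0)
    # ... and for every permutation (ties ignored for simplicity),
    # giving the permutation distribution of the minimum p-value.
    ranks = np.argsort(np.argsort(perm, axis=0), axis=0)
    p_perm = 1.0 - ranks / n_perm
    return (p_perm.min(axis=1) <= p_obs.min()).mean()

mean_diff = lambda a, b: a.mean() - b.mean()
median_diff = lambda a, b: np.median(a) - np.median(b)
```

Because the minimum is calibrated against its own permutation distribution, adding a low-powered candidate statistic costs little, which is the robustness property the abstract describes.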


Subject(s)
Models, Statistical , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , Humans , Linear Models
18.
Stat Med ; 30(24): 2881-9, 2011 Oct 30.
Article in English | MEDLINE | ID: mdl-21905064

ABSTRACT

Stratification is common in clinical trials because it can reduce the variance of the estimated treatment effect. The traditional demonstration of variance reduction relies on the assumption that stratum sizes are fixed quantities. In practice, however, to speed up enrollment and to obtain a study population with a distribution similar to that of the overall population, the stratum sizes are allowed to vary. Under the condition that the total sample size is fixed and the stratum sizes have a multinomial distribution, the criterion for achieving a reduction in variance changes. The relationship between the stratified and unstratified variances is established and shown to be approximately the same for prestratified and post-stratified trials. It is demonstrated why stratification may actually increase the variance compared with no stratification, even when the mean square error is reduced on account of stratification. Data from a real clinical trial are used for illustration. The benefit attributed to stratification needs to be re-examined in light of these findings, particularly given its widespread use.


Subject(s)
Clinical Trials as Topic/statistics & numerical data , 5-alpha Reductase Inhibitors/administration & dosage , Analysis of Variance , Biostatistics , Finasteride/administration & dosage , Humans , Male , Models, Statistical , Multicenter Studies as Topic/statistics & numerical data , Prostatic Hyperplasia/drug therapy , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size
19.
Pharm Stat ; 9(2): 113-24, 2010.
Article in English | MEDLINE | ID: mdl-19507135

ABSTRACT

In the planning of randomized survival trials, the follow-up time of trial participants introduces a level of complexity not encountered in non-survival trials. Of the two commonly used survival designs, one fixes the follow-up time whereas the other allows it to vary. When the follow-up time is fixed, the number of events varies; conversely, when the number of events is fixed, the follow-up time varies. These two designs influence test statistics in ways that have not been fully explored, resulting in misunderstanding of the design-test statistic relationship. We use examples from the literature to strengthen the understanding of this relationship. Group sequential trials are briefly discussed. When the number of events is fixed, we demonstrate why a two-sample risk difference test statistic reduces to a one-sample test statistic that is nearly equal to the risk ratio test statistic. Some aspects of fixed-event designs that need further consideration are also discussed.
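The one-sample reduction mentioned above can be made concrete under simplifying assumptions (1:1 randomization, comparable follow-up in both arms): conditional on a fixed total of D events, each event is equally likely under the null to come from either arm, so the event count in one arm is binomial(D, 1/2) and a one-sample z-test applies. A sketch:

```python
import math

def fixed_events_z(events_treatment, total_events):
    # Under H0 with 1:1 randomization and D total events fixed by design,
    # events_treatment ~ Binomial(D, 1/2), so the one-sample z-statistic
    # (normal approximation) is (O - D/2) / sqrt(D/4). Negative values
    # indicate fewer events in the treatment arm.
    d = total_events
    return (events_treatment - d / 2.0) / math.sqrt(d / 4.0)
```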


Subject(s)
Models, Statistical , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , Survival Analysis , Humans , Odds Ratio , Randomized Controlled Trials as Topic/methods
20.
Stat Med ; 28(1): 24-38, 2009 Jan 15.
Article in English | MEDLINE | ID: mdl-18837073

ABSTRACT

We extend a method we previously described (Statist. Med. 2005) for estimating the within-group variance of a continuous endpoint without breaking the blind in a randomized clinical trial. Specifically, we: (a) explain how the method may be used for a wider set of designs than we had previously indicated; (b) obtain a within-group, covariate-adjusted, blinded variance estimator; (c) illustrate use of the method for sample size re-estimation; and (d) describe a procedure to determine whether the blinded variance estimator works well not just on average but for the data set at hand. The proposed method is simple to use and makes no assumptions beyond those made for an unblinded analysis. Simulations show that for realistic sample sizes there is virtually no inflation in the type I error rate. When weighing the burden imposed by interim unblinded re-estimation against the loss in precision with blinded re-estimation, it may be advantageous for some trials to use the blinded method.


Subject(s)
Models, Statistical , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size , Analysis of Variance , Double-Blind Method , Humans , Single-Blind Method