Results 1 - 20 of 80
1.
Stat Med ; 2024 Jun 16.
Article in English | MEDLINE | ID: mdl-38881219

ABSTRACT

An assurance calculation is a Bayesian alternative to a power calculation. It may be performed to aid the planning of a clinical trial, specifically to set the sample size or to support decisions about whether or not to perform a study. Immuno-oncology is a rapidly evolving area in the development of anticancer drugs. A common phenomenon in trials of such drugs is a delayed treatment effect, that is, a delay in the separation of the survival curves. To calculate assurance for a trial in which a delayed treatment effect is likely to be present, uncertainty about key parameters needs to be considered. If it is not, the number of patients recruited may not be enough to ensure adequate statistical power to detect a clinically relevant treatment effect, and the risk of an unsuccessful trial is increased. We present a new elicitation technique for when a delayed treatment effect is likely and show how to compute assurance using the elicited prior distributions. We provide an example to illustrate how this can be used in practice and develop open-source software to implement our methods. Our methodology has the potential to improve the success rate and efficiency of Phase III trials in immuno-oncology and of trials of other treatments where a delayed treatment effect is expected to occur.
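
As an illustration of the general idea (not the authors' specific method or software), the sketch below estimates assurance by simulation: draw a post-delay hazard ratio and a delay duration from elicited priors, simulate piecewise-exponential survival data for each draw, and record how often a log-rank test is significant. All priors, event rates and trial settings are hypothetical placeholders, and the `lifelines` log-rank test is simply one convenient implementation.

```python
# Illustrative assurance calculation for a trial with a delayed treatment effect.
# Priors, rates and trial settings are hypothetical placeholders, not values from the paper.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

def sim_piecewise_exp(n, lam, hr, delay, rng):
    """Event times with baseline hazard `lam`; hazard becomes lam*hr after `delay`."""
    u = -np.log(rng.uniform(size=n))                  # unit-exponential draws
    return np.where(u <= lam * delay, u / lam, delay + (u - lam * delay) / (lam * hr))

def assurance(n_per_arm=300, follow_up=36.0, n_prior=1000):
    lam_c = np.log(2) / 12.0                          # control median 12 months (assumed)
    significant = 0
    for _ in range(n_prior):
        hr = rng.lognormal(mean=np.log(0.7), sigma=0.15)   # elicited prior on post-delay HR
        delay = rng.gamma(shape=4.0, scale=1.0)            # elicited prior on delay (months)
        t_c = sim_piecewise_exp(n_per_arm, lam_c, 1.0, delay, rng)
        t_t = sim_piecewise_exp(n_per_arm, lam_c, hr, delay, rng)
        obs_c, obs_t = t_c < follow_up, t_t < follow_up    # administrative censoring
        res = logrank_test(np.minimum(t_c, follow_up), np.minimum(t_t, follow_up),
                           event_observed_A=obs_c, event_observed_B=obs_t)
        significant += res.p_value < 0.05
    return significant / n_prior

print(f"Estimated assurance: {assurance():.2f}")
```

In practice the two priors would be replaced by the elicited distributions and the number of simulations increased until the Monte Carlo error is acceptably small.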

2.
Lancet ; 402 Suppl 1: S22, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37997062

ABSTRACT

BACKGROUND: Asthma exacerbations peak in school-aged children after the return to school in September. Previous studies have shown a decline in collections of asthma prescriptions during August. The PLEASANT trial demonstrated that sending a reminder letter to parents increased prescription uptake, reduced unscheduled care, and was cost saving to the health service. We aimed to assess whether informing general practitioner (GP) practices about the PLEASANT trial and its results could lead to its implementation in routine practice. METHODS: The trial to assess implementation of new research in a primary care setting (TRAINS) was a pragmatic cluster-randomised (1:1) trial conducted in England involving GP practices contributing to the Clinical Practice Research Datalink (CPRD). The intervention was a letter informing the GP practice of the PLEASANT trial results, with recommendations for implementation. GP practices in the control group continued with usual care without receiving any letters about the PLEASANT trial. The intervention was distributed via CPRD by both mail and email in June 2021. The trial received both University of Sheffield Ethics approval and Independent Scientific Advisory Committee (ISAC) approval. The primary outcome was the proportion of children with asthma (aged 4-15 years) who had a prescription for a preventer between Aug 1 and Sept 30, 2021. This trial is registered with ClinicalTrials.gov, NCT05226091. FINDINGS: A total of 1326 GP practices, including 90 583 children with asthma, were included in the study. These practices were randomly allocated to the intervention group (664 practices, 44 708 children) or the control group (662 practices, 45 875 children). In assessing the impact of the intervention on the proportion of children collecting a preventer prescription, 15 716 (35·3%) of 44 708 children in the intervention group and 16 001 (35·1%) of 45 559 children in the control group picked up a prescription. No statistically significant difference was observed (odds ratio [OR] 1·01, 95% CI 0·97-1·05), indicating no evidence of an intervention effect. INTERPRETATION: The study findings suggest that the passive intervention of providing a letter to GPs did not achieve the intended outcomes. To bridge the gap between evidence and practice, alternative, more proactive strategies could be explored to address the identified issues. FUNDING: Jazan University.


Subject(s)
Asthma , General Practice , General Practitioners , Child , Humans , Asthma/drug therapy , Cost-Benefit Analysis , Prescriptions
3.
Health Technol Assess ; 27(20): 1-58, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37982521

ABSTRACT

Background: Randomised controlled trials are designed to assess the superiority, equivalence or non-inferiority of a new health technology, but which trial design should be used is not always obvious in practice. In particular, when using equivalence or non-inferiority designs, multiple outcomes of interest may be important for the success of a trial, despite the fact that usually only a single primary outcome is used to design the trial. Benefit-risk methods are used in the regulatory clinical trial setting to assess multiple outcomes and consider the trade-off of the benefits against the risks, but are not regularly implemented in publicly funded trials. Objectives: The aim of the project is to aid the design of clinical trials with multiple outcomes of interest by defining when each trial design is appropriate to use and identifying when to use benefit-risk methods to assess outcome trade-offs (qualitatively or quantitatively) in a publicly funded trial setting. Methods: A range of methods was used to elicit expert opinion to answer the project objectives, including a web-based survey of relevant researchers, a rapid review of current literature and a 2-day consensus workshop of experts (in 2019). Results: We created a list of 19 factors to aid researchers in selecting the most appropriate trial design, containing the following overarching sections: population, intervention, comparator, outcomes, feasibility and perspectives. Six key reasons that indicate a benefit-risk method should be considered within a trial were identified: (1) when the success of the trial depends on more than one outcome; (2) when important outcomes within the trial are in competing directions (i.e. a health technology is better for one outcome, but worse for another); (3) to allow patient preferences to be included and directly influence trial results; (4) to provide transparency on subjective recommendations from a trial; (5) to provide consistency in the approach to presenting results from a trial; and (6) to synthesise multiple outcomes into a single metric. Further information was provided to support the use of benefit-risk methods in appropriate circumstances, including the following: methods identified from the review were collated into different groupings and described to aid the selection of a method; potential implementation of methods throughout the trial process were provided and discussed (with examples); and general considerations were described for those using benefit-risk methods. Finally, a checklist of five pieces of information that should be present when reporting benefit-risk methods was defined, with two additional items specifically for reporting the results. Conclusions: These recommendations will assist research teams in selecting which trial design to use and deciding whether or not a benefit-risk method could be included to ensure research questions are answered appropriately. Additional information is provided to support consistent use and clear reporting of benefit-risk methods in the future. The recommendations can also be used by funding committees to confirm that appropriate considerations of the trial design have been made. Limitations: This research was limited in scope and should be considered in conjunction with other trial design methodologies to assess appropriateness. In addition, further research is needed to provide concrete information about which benefit-risk methods are best to use in publicly funded trials, along with recommendations that are specific to each method. 
Study registration: The rapid review is registered as PROSPERO CRD42019144882. Funding: Funded by the Medical Research Council UK and the National Institute for Health and Care Research as part of the Medical Research Council-National Institute for Health and Care Research Methodology Research programme.


Randomised controlled trials are considered the best way to gather evidence about potential NHS treatments. They can be designed from different perspectives depending on whether the aim is to show that a new treatment is better than, equal to or no worse than the current best available treatment. The selection of this design relates to the single most important outcome; however, often multiple outcomes can be affected by a treatment. For example, a new treatment may improve disease management but increase side effects. Patients want a treatment to work but not at the price of poor quality of life; therefore, a trade-off must be made, and the recommended treatment depends on this trade-off. Benefit-risk methods can assess the trade-off between multiple outcomes and can include patient preference. These methods could improve the way that decisions are made about treatments in the NHS, but there is currently limited research about their use in publicly funded trials. The aim of this report is to improve the design of clinical trials by helping researchers to select the most appropriate trial design and to decide when to include a benefit-risk method. The recommendations were created using the opinions of experts within the field, gathered through a survey, a review of the literature and a workshop. The project created a list of 19 factors that can assist researchers in selecting the most appropriate trial design. Furthermore, six key areas were identified in which researchers may consider including a benefit-risk method within a trial. Finally, if a benefit-risk assessment is being used, a checklist of items has been created that identifies the information important to include in reports. This report is, however, limited in its applicability, and further research should extend this work, as well as provide more detail on the individual methods that are available.


Subject(s)
Patient Preference , Research Design , Humans , Randomized Controlled Trials as Topic
5.
J Clin Epidemiol ; 158: 149-165, 2023 06.
Article in English | MEDLINE | ID: mdl-37100738

ABSTRACT

Randomized controlled trials remain the reference standard for healthcare research on the effects of interventions, and the need to report both benefits and harms is essential. The Consolidated Standards of Reporting Trials (the main CONSORT) statement includes one item on reporting harms (i.e., all important harms or unintended effects in each group). In 2004, the CONSORT group developed the CONSORT Harms extension; however, it has not been consistently applied and needs to be updated. Here, we describe CONSORT Harms 2022, which replaces the CONSORT Harms 2004 checklist, and show how CONSORT Harms 2022 items could be incorporated into the main CONSORT checklist. Thirteen items from the main CONSORT were modified to improve harms reporting. Three new items were added. In this article, we describe CONSORT Harms 2022 and how it was integrated into the main CONSORT checklist, and elaborate on each item relevant to complete reporting of harms in randomized controlled trials. Until future work from the CONSORT group produces an updated checklist, authors, journal reviewers, and editors of randomized controlled trials should use the integrated checklist presented in this paper.


Subject(s)
Checklist , Publishing , Humans , Randomized Controlled Trials as Topic , Reference Standards , Research Report , Research Design
6.
Trials ; 24(1): 215, 2023 Mar 22.
Article in English | MEDLINE | ID: mdl-36949524

ABSTRACT

BACKGROUND: Adaptive clinical trials may use conditional power (CP) to make decisions at interim analyses, requiring assumptions about the treatment effect for the remaining patients. It is critical that these assumptions, as well as the timing of these decisions, are understood by those using CP in decision-making. METHODS: Data for 21 outcomes from 14 published clinical trials were made available for re-analysis. CP curves for accruing outcome information were calculated and compared against pre-specified objective criteria, for original and transformed versions of the trial data, under four future treatment effect assumptions: (i) observed current trend, (ii) hypothesised effect, (iii) 80% optimistic confidence limit, (iv) 90% optimistic confidence limit. RESULTS: The hypothesised effect assumption met the objective criteria when the true effect was close to that planned, but not when it was smaller than planned. The opposite was seen using the current trend assumption. Optimistic confidence limit assumptions appeared to offer a compromise between the two, performing well against the objective criteria when the observed final effect was as planned or smaller. CONCLUSION: The current trend assumption could be the preferable assumption when there is a wish to stop early for futility. Interim analyses could be undertaken as early as when 30% of patients have data available. Optimistic confidence limit assumptions should be considered when using CP to make trial decisions, although later interim timings should be considered where logistically feasible.
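
For readers unfamiliar with the mechanics, the sketch below computes conditional power under each of the four assumptions using the standard Brownian-motion formulation of a one-sided z-test. Reading the "optimistic confidence limit" as the upper one-sided confidence limit of the interim effect is our interpretation, and the numbers in the example call are illustrative rather than taken from the paper.

```python
# Conditional power (CP) at an interim analysis under four future-effect assumptions.
# Standard Brownian-motion formulation; example values are illustrative only.
from scipy.stats import norm

def conditional_power(z_interim, info_frac, drift, alpha=0.025):
    """CP of a one-sided level-alpha test, given the interim z-statistic, the
    information fraction t, and an assumed drift E[Z(1)] for the rest of the trial."""
    t = info_frac
    b_t = z_interim * t**0.5                              # Brownian-motion value at t
    num = b_t + drift * (1 - t) - norm.ppf(1 - alpha)
    return norm.cdf(num / (1 - t) ** 0.5)

def cp_under_assumptions(z_interim, info_frac, planned_power=0.9, alpha=0.025):
    t = info_frac
    drift_planned = norm.ppf(1 - alpha) + norm.ppf(planned_power)   # hypothesised effect
    drift_trend = z_interim / t**0.5                                # current trend
    drift_opt80 = drift_trend + norm.ppf(0.80) / t**0.5             # 80% optimistic limit
    drift_opt90 = drift_trend + norm.ppf(0.90) / t**0.5             # 90% optimistic limit
    return {name: conditional_power(z_interim, t, d, alpha)
            for name, d in [("current trend", drift_trend),
                            ("hypothesised", drift_planned),
                            ("80% optimistic", drift_opt80),
                            ("90% optimistic", drift_opt90)]}

print(cp_under_assumptions(z_interim=1.0, info_frac=0.5))
```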


Subject(s)
Medical Futility , Research Design , Humans , Retrospective Studies , Sample Size
7.
Trials ; 24(1): 71, 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36721215

ABSTRACT

BACKGROUND: Existing guidelines recommend that statisticians remain blinded to treatment allocation prior to the final analysis and that any interim analyses be conducted by a separate team from the one undertaking the final analysis. However, there remains substantial variation in practice between UK Clinical Trials Units (CTUs) when it comes to blinding statisticians. Therefore, the aim of this study was to develop guidance to advise CTUs on a risk-proportionate approach to blinding statisticians within clinical trials. METHODS: This study employed a mixed methods approach involving three stages: (I) a quantitative study using a cohort of 200 studies from a major UK funder, published between 2016 and 2020, to assess the impact of blinding statisticians on the proportion of trials reporting a statistically significant finding for the primary outcome(s); (II) a qualitative study using focus groups to determine the perspectives of key stakeholders on the practice of blinding trial statisticians; and (III) combining the results of stages I and II, along with a stakeholder meeting, to develop guidance for UK CTUs. RESULTS: After screening abstracts, 179 trials were included for review. The results of the primary analysis showed no evidence that involvement of an unblinded trial statistician was associated with the likelihood of statistically significant findings being reported, odds ratio (OR) 1.02 (95% confidence interval (CI) 0.49 to 2.13). Six focus groups were conducted, with 37 participants. The triangulation between stages I and II resulted in 40 provisional statements. These were rated independently by the stakeholder group prior to the meeting. Ten statements reached agreement, with no agreement on the remaining 30. At the meeting, various factors were identified that could influence the decision to blind the statistician, including timing, study design, type of intervention and practicalities. Guidance including 21 recommendations/considerations was developed, alongside a Risk Assessment Tool to provide CTUs with a framework for assessing the risks associated with blinding or not blinding statisticians and for identifying appropriate mitigation strategies. CONCLUSIONS: This is the first study to develop a guidance document to enhance the understanding of blinding statisticians and to provide a framework for the decision-making process. The key finding was that the decision to blind statisticians should be based on the benefits and risks associated with a particular trial.


Subject(s)
Research Design , Humans , Focus Groups , Odds Ratio , Probability , Qualitative Research , Clinical Trials as Topic
8.
Trials ; 23(1): 947, 2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36397087

ABSTRACT

BACKGROUND: There is a marked increase in unscheduled care visits in school-aged children with asthma after returning to school in September. This is potentially associated with children not taking their asthma preventer medication during the school summer holidays. A cluster randomised controlled trial (PLEASANT) was undertaken with 1279 school-age children in 141 general practices (71 on intervention and 70 on control) in England and Wales. It found that a simple letter sent from the family doctor during the school holidays to parents of children with asthma, informing them of the importance of taking asthma preventer medication during the summer, increased prescriptions in August by a relative 30% and reduced medical contacts in the period September to December. It is also estimated that there was a cost saving of £36.07 per patient over the year. We aim to conduct a randomised trial to assess whether informing GP practices of an evidence-based intervention improves the implementation of that intervention. METHODS/DESIGN: The TRAINS study (TRial to Assess Implementation of New research in a primary care Setting) is a pragmatic cluster randomised implementation trial using routine data. A total of 1389 general practitioner (GP) practices in England will be included in the trial; 694 GP practices will be randomised to the intervention group and 695 to the control group of usual care. The Clinical Practice Research Datalink (CPRD) will send the intervention and obtain all data for the study, including prescription and primary care contact data. The intervention will be sent in June 2021 by post and email to the asthma lead and/or practice manager. The intervention is a letter to GPs informing them of the PLEASANT study findings, with recommendations. It will come with an information leaflet about PLEASANT and a suggested reminder letter and SMS text template. DISCUSSION: The trial will assess whether informing GP practices of the PLEASANT trial results increases prescription uptake before the start of the school year. The hope is that the intervention will increase the implementation of the PLEASANT intervention and thereby increase prescription uptake during the summer holiday prior to the start of school. TRIAL REGISTRATION: ClinicalTrials.gov ID: NCT05226091.


Subject(s)
Asthma , General Practice , General Practitioners , Child , Humans , Asthma/diagnosis , Asthma/drug therapy , Prescriptions , Primary Health Care/methods , Randomized Controlled Trials as Topic
9.
Health Technol Assess ; 26(39): 1-100, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36259684

ABSTRACT

BACKGROUND: The mainstay of treatment for diabetic peripheral neuropathic pain is pharmacotherapy, but the current National Institute for Health and Care Excellence guideline is not based on robust evidence, as the treatments and their combinations have not been directly compared. OBJECTIVES: To determine the most clinically beneficial, cost-effective and tolerated treatment pathway for diabetic peripheral neuropathic pain. DESIGN: A randomised crossover trial with health economic analysis. SETTING: Twenty-one secondary care centres in the UK. PARTICIPANTS: Adults with diabetic peripheral neuropathic pain with a 7-day average self-rated pain score of ≥ 4 points (Numeric Rating Scale 0-10). INTERVENTIONS: Participants were randomised to three commonly used treatment pathways: (1) amitriptyline supplemented with pregabalin, (2) duloxetine supplemented with pregabalin and (3) pregabalin supplemented with amitriptyline. Participants and research teams were blinded to treatment allocation, using over-encapsulated capsules and matching placebos. Site pharmacists were unblinded. OUTCOMES: The primary outcome was the difference in 7-day average 24-hour Numeric Rating Scale score between pathways, measured during the final week of each pathway. Secondary end points included 7-day average daily Numeric Rating Scale pain score at week 6 between monotherapies, quality of life (Short Form questionnaire-36 items), Hospital Anxiety and Depression Scale score, the proportion of patients achieving 30% and 50% pain reduction, Brief Pain Inventory - Modified Short Form items scores, Insomnia Severity Index score, Neuropathic Pain Symptom Inventory score, tolerability (scale 0-10), Patient Global Impression of Change score at week 16 and patients' preferred treatment pathway at week 50. Adverse events and serious adverse events were recorded. A within-trial cost-utility analysis was carried out to compare treatment pathways using incremental costs per quality-adjusted life-years from an NHS and social care perspective. RESULTS: A total of 140 participants were randomised from 13 UK centres, 130 of whom were included in the analyses. Pain score at week 16 was similar between the arms, with a mean difference of -0.1 points (98.3% confidence interval -0.5 to 0.3 points) for duloxetine supplemented with pregabalin compared with amitriptyline supplemented with pregabalin, a mean difference of -0.1 points (98.3% confidence interval -0.5 to 0.3 points) for pregabalin supplemented with amitriptyline compared with amitriptyline supplemented with pregabalin and a mean difference of 0.0 points (98.3% confidence interval -0.4 to 0.4 points) for pregabalin supplemented with amitriptyline compared with duloxetine supplemented with pregabalin. Results for tolerability, discontinuation and quality of life were similar. The adverse events were predictable for each drug. Combination therapy (weeks 6-16) was associated with a further reduction in Numeric Rating Scale pain score (mean 1.0 points, 98.3% confidence interval 0.6 to 1.3 points) compared with those who remained on monotherapy (mean 0.2 points, 98.3% confidence interval -0.1 to 0.5 points). The pregabalin supplemented with amitriptyline pathway had the fewest monotherapy discontinuations due to treatment-emergent adverse events and was the most commonly preferred pathway (preferred by participants: amitriptyline supplemented with pregabalin, 24%; duloxetine supplemented with pregabalin, 33%; pregabalin supplemented with amitriptyline, 43%; p = 0.26).
No single pathway was superior in cost-effectiveness. The incremental gains in quality-adjusted life-years were small for each pathway comparison [amitriptyline supplemented with pregabalin compared with duloxetine supplemented with pregabalin -0.002 (95% confidence interval -0.011 to 0.007) quality-adjusted life-years, amitriptyline supplemented with pregabalin compared with pregabalin supplemented with amitriptyline -0.006 (95% confidence interval -0.002 to 0.014) quality-adjusted life-years and duloxetine supplemented with pregabalin compared with pregabalin supplemented with amitriptyline 0.007 (95% confidence interval 0.0002 to 0.015) quality-adjusted life-years] and incremental costs over 16 weeks were similar [amitriptyline supplemented with pregabalin compared with duloxetine supplemented with pregabalin -£113 (95% confidence interval -£381 to £90), amitriptyline supplemented with pregabalin compared with pregabalin supplemented with amitriptyline £155 (95% confidence interval -£37 to £625) and duloxetine supplemented with pregabalin compared with pregabalin supplemented with amitriptyline £141 (95% confidence interval -£13 to £398)]. LIMITATIONS: Although there was no placebo arm, there is strong evidence for the use of each study medication from randomised placebo-controlled trials. The addition of a placebo arm would have increased the duration of this already long and demanding trial and it was not felt to be ethically justifiable. FUTURE WORK: Future research should explore (1) variations in diabetic peripheral neuropathic pain management at the practice level, (2) how OPTION-DM (Optimal Pathway for TreatIng neurOpathic paiN in Diabetes Mellitus) trial findings can be best implemented, (3) why some patients respond to a particular drug and others do not and (4) what options there are for further treatments for those patients on combination treatment with inadequate pain relief. CONCLUSIONS: The three treatment pathways appear to give comparable patient outcomes at similar costs, suggesting that the optimal treatment may depend on patients' preference in terms of side effects. TRIAL REGISTRATION: The trial is registered as ISRCTN17545443 and EudraCT 2016-003146-89. FUNDING: This project was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme, and will be published in full in Health Technology Assessment; Vol. 26, No. 39. See the NIHR Journals Library website for further project information.


The number of people with diabetes is growing rapidly in the UK and is predicted to rise to over 5 million by 2025. Diabetes causes nerve damage that can lead to severe painful symptoms in the feet, legs and hands. One-quarter of all people with diabetes experience these symptoms, known as 'painful diabetic neuropathy'. Current individual medications provide only partial benefit, and in only around half of patients. The individual drugs, and their combinations, have not been compared directly against each other to see which is best. We conducted a study to see which treatment pathway would be best for patients with painful diabetic neuropathy. The study included three treatment pathways using combinations of amitriptyline, duloxetine and pregabalin. Patients received all three treatment pathways (i.e. amitriptyline treatment for 6 weeks and pregabalin added if needed for a further 10 weeks, duloxetine treatment for 6 weeks and pregabalin added if needed for a further 10 weeks and pregabalin treatment for 6 weeks and amitriptyline added if needed for a further 10 weeks); however, the order of the treatment pathways was decided at random. We compared the level of pain that participants experienced in each treatment pathway to see which worked best. On average, people said that their pain was similar after each of the three treatments and their combinations. However, two treatments in combination helped some patients with additional pain relief if they only partially responded to one. People also reported improved quality of life and sleep with the treatments, but these were similar for all the treatments. In the health economic analysis, the value for money and quality of life were similar for each pathway, and this resulted in uncertainty in the cost-effectiveness conclusions, with no one pathway being more cost-effective than the others. The treatments had different side effects, however; pregabalin appeared to make more people feel dizzy, duloxetine made more people nauseous and amitriptyline resulted in more people having a dry mouth. The pregabalin supplemented by amitriptyline pathway had the smallest number of treatment discontinuations due to side effects and may be the safest for patients.


Subject(s)
Diabetes Mellitus , Neuralgia , Adult , Humans , Pregabalin/therapeutic use , Duloxetine Hydrochloride/therapeutic use , Amitriptyline/adverse effects , Quality of Life , Neuralgia/drug therapy , Neuralgia/chemically induced , Cost-Benefit Analysis
10.
BMC Med Res Methodol ; 22(1): 204, 2022 07 25.
Article in English | MEDLINE | ID: mdl-35879673

ABSTRACT

BACKGROUND: When designing a noninferiority (NI) study, one of the most important steps is to set the NI limit. The NI limit is an acceptable loss of efficacy for a new investigative treatment compared to an active control treatment, often standard care. The limit should be a value so small that the loss of efficacy is clinically negligible. One approach sets the NI limit so that an effect over placebo can be shown through an indirect comparison with placebo-controlled trials in which the active control treatment was compared to placebo. In this context, the setting of the NI limit depends on three assumptions: assay sensitivity, bias minimisation, and the constancy assumption. The last of these assumes that the effect of the active control over placebo is constant over time. This paper aims to assess the constancy assumption in placebo-controlled trials. METHODS: 236 Cochrane reviews of placebo-controlled trials published in 2015-2016 were collected and used to assess how the placebo response, the active treatment response and the standardised mean difference (SMD) relate to time (year of publication). RESULTS: The analysis showed that both the size of the study and the treatment effect were associated with year of publication. The three main variables that affect the estimate of any future trial are the estimate from the meta-analysis of trials prior to that trial, the year difference in the meta-analysis, and the year of trial conduct. The regression analysis showed that an increase of one unit in the point estimate of the historical meta-analysis would lead to an increase in the predicted estimate of a future trial on the SMD scale of 0.88. This result suggests that the final trial result is, on average, 12% smaller than the estimate from the meta-analysis of trials up to that point. CONCLUSION: The results of this study indicate that the assumption of constancy of the treatment difference between the active control and placebo is questionable. It is therefore important to consider the effect of time when estimating the treatment response if indirect comparisons are being used as the basis of an NI limit.
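
A toy illustration of how a shrinkage coefficient such as the reported 0.88 might be used when setting an NI margin from historical data is sketched below. The "preserve half the effect" margin rule is a common convention assumed here rather than a recommendation from the paper, and the function name and input values are hypothetical.

```python
# Toy illustration: discount a historical meta-analysis estimate before deriving an
# NI margin, using a shrinkage coefficient (0.88 as reported in the abstract above).
# The 50%-effect-preservation rule is a common convention, assumed for illustration.
def discounted_ni_margin(meta_smd, shrinkage=0.88, fraction_preserved=0.5):
    predicted_future_effect = shrinkage * meta_smd     # expected effect in a new trial
    return (1 - fraction_preserved) * predicted_future_effect

print(discounted_ni_margin(meta_smd=0.40))             # margin of ~0.18 on the SMD scale
```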


Subject(s)
Bias , Humans
11.
Pharm Stat ; 21(5): 1109-1110, 2022 09.
Article in English | MEDLINE | ID: mdl-35535737

ABSTRACT

In 2016 we published three articles in Pharmaceutical Statistics that gave a practical guide to sample size calculations. In each of the articles there were instructions on how to obtain the App SampSize. This short communication updates these instructions and highlights the updates and added functionality to the App.


Subject(s)
Mobile Applications , Humans , Pharmaceutical Preparations , Sample Size
12.
Pharm Stat ; 21(2): 460-475, 2022 03.
Article in English | MEDLINE | ID: mdl-34860471

ABSTRACT

When designing a clinical trial, one key aspect of the design is the sample size calculation. The sample size calculation tends to rely on a target or expected difference. The expected difference can be based on the observed data from previous studies, which results in bias. It has been reported that large treatment effects observed in trials are often not replicated in subsequent trials. If these values are used to design subsequent studies, the sample sizes may be biased, which results in an unethical study. Regression to the mean (RTM) is one explanation for this. If only health technologies which meet a particular continuation criterion (such as p < 0.05 in the first study) are progressed to a second confirmatory trial, it is highly likely that the observed effect in the second trial will be lower than that observed in the first trial. It is shown how, when moving from one trial to the next, a truncated normal distribution is inherently imposed on the first study. This results in a lower observed effect size in the second trial. A simple adjustment method is proposed based on the mathematical properties of the truncated normal distribution. This adjustment method was confirmed using simulations in R and compared with other previous adjustments. The method can be applied to the observed effect in a trial that is being used in the design of a second confirmatory trial, resulting in a more stable estimate of the 'true' treatment effect. The adjustment accounts for the bias in the primary and secondary endpoints in the first trial, with the bias being affected by the power of that study. Tables of results have been provided to aid implementation, along with a worked example. In summary, a bias is introduced when the point estimate from one trial is used to assist the design of a second trial. It is recommended that any observed point estimates be used with caution and that the adjustment method developed in this article be implemented to significantly reduce this bias.
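
The core idea can be sketched generically: if a first trial is only taken forward when its estimate clears a significance threshold, the reported effect behaves like the mean of a truncated normal distribution and overstates the true effect, so one can numerically invert that truncated-normal mean to de-bias the estimate. The sketch below is a reconstruction of this general idea, not the authors' exact formula or tables, and the numbers are illustrative.

```python
# Generic regression-to-the-mean adjustment sketch: invert the truncated-normal mean
# to recover a de-biased effect estimate after selection on statistical significance.
from scipy.stats import norm
from scipy.optimize import brentq

def truncated_mean(mu, se, cutoff):
    """E[X | X > cutoff] for X ~ N(mu, se^2)."""
    a = (cutoff - mu) / se
    return mu + se * norm.pdf(a) / norm.sf(a)

def adjusted_effect(observed, se, alpha_one_sided=0.025):
    """Find mu such that the truncated-normal mean matches the observed estimate."""
    cutoff = norm.ppf(1 - alpha_one_sided) * se        # significance threshold on the estimate
    return brentq(lambda mu: truncated_mean(mu, se, cutoff) - observed,
                  -10 * se, observed + 10 * se)

print(adjusted_effect(observed=0.5, se=0.2))           # de-biased estimate, below 0.5
```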


Subject(s)
Research Design , Bias , Causality , Humans , Normal Distribution , Sample Size
13.
Stat Methods Med Res ; 30(11): 2459-2470, 2021 11.
Article in English | MEDLINE | ID: mdl-34477455

ABSTRACT

Sample size calculations for cluster-randomised trials require the inclusion of an inflation factor that takes into account the intra-cluster correlation coefficient. Often, estimates of the intra-cluster correlation coefficient are taken from pilot trials, which are known to estimate it with considerable uncertainty. Given that the value of the intra-cluster correlation coefficient has a considerable influence on the calculated sample size for a main trial, the uncertainty in the estimate can have a large impact on the ultimate sample size and, consequently, the power of a main trial. As such, it is important to account for the uncertainty in the estimate of the intra-cluster correlation coefficient. While a commonly adopted approach is to utilise the upper confidence limit in the sample size calculation, this is a largely inefficient method which can result in overpowered main trials. In this paper, we present a method of estimating the sample size for a main cluster-randomised trial with a continuous outcome, using numerical methods to account for the uncertainty in the intra-cluster correlation coefficient estimate. Despite the limitations of this initial study, the findings and recommendations in this paper can help to improve sample size estimation for cluster randomised controlled trials by accounting for uncertainty in the estimate of the intra-cluster correlation coefficient. We recommend this approach be applied to all trials where there is uncertainty in the intra-cluster correlation coefficient estimate, in conjunction with additional sources of information to guide the estimation of the intra-cluster correlation coefficient.
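
One generic way to capture the spirit of this approach (not the authors' exact algorithm) is to average power over a distribution for the intra-cluster correlation coefficient and increase the number of clusters until the expected power reaches the target. In the sketch below the Beta distribution, cluster size and effect size are illustrative assumptions.

```python
# Allowing for ICC uncertainty in a cluster-RCT sample size: average the power over a
# distribution of plausible ICC values instead of plugging in a single estimate.
import numpy as np
from scipy.stats import norm, beta

def expected_power(k_per_arm, m, delta, sd, icc_draws, alpha=0.05):
    """Mean power over ICC draws for a two-arm cluster RCT with a continuous outcome."""
    deff = 1 + (m - 1) * icc_draws                     # design effect per draw
    n_eff = k_per_arm * m / deff                       # effective sample size per arm
    ncp = delta / (sd * np.sqrt(2 / n_eff))
    return norm.cdf(ncp - norm.ppf(1 - alpha / 2)).mean()

def clusters_for_expected_power(m, delta, sd, icc_draws, target=0.9):
    k = 2
    while expected_power(k, m, delta, sd, icc_draws) < target:
        k += 1
    return k                                           # clusters per arm

rng = np.random.default_rng(0)
icc_draws = beta.rvs(2, 38, size=5000, random_state=rng)   # uncertain ICC, mean ~0.05
print(clusters_for_expected_power(m=20, delta=0.3, sd=1.0, icc_draws=icc_draws))
```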


Subject(s)
Research Design , Cluster Analysis , Sample Size , Uncertainty
14.
Trials ; 21(1): 1000, 2020 Dec 04.
Article in English | MEDLINE | ID: mdl-33276810

ABSTRACT

INTRODUCTION: Sample size calculations require assumptions regarding treatment response and variability. Incorrect assumptions can result in under- or overpowered trials, posing ethical concerns. Sample size re-estimation (SSR) methods investigate the validity of these assumptions and increase the sample size if necessary. The "promising zone" (Mehta and Pocock, Stat Med 30:3267-3284, 2011) concept is appealing to researchers for its design simplicity. However, it is still relatively new in application and has been a source of controversy. OBJECTIVES: This research aims to synthesise current approaches to, and practical implementation of, the promising zone design. METHODS: This systematic review comprehensively identifies the reporting of methodological research and of clinical trials using the promising zone design. Databases were searched according to a pre-specified search strategy, and pearl-growing techniques were implemented. RESULTS: The combined search methods identified 270 unique records; 171 were included in the review, of which 30 were trials. The median timing of the interim analysis was at 60% of the original target sample size (IQR 41-73%). Of the 15 completed trials, 7 increased their sample size. Only 21 studies reported the maximum sample size that would be considered, for which the median increase was 50% (IQR 35-100%). CONCLUSIONS: The promising zone design is being implemented in a range of trials worldwide, albeit in low numbers. Identifying trials using the promising zone was difficult due to the lack of reporting of SSR methodology. Even when SSR methodology was reported, some reports had key interim analysis details missing, and only eight papers provided promising zone ranges.
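
For orientation, the sketch below shows a Mehta-Pocock style promising-zone rule in its simplest form: compute conditional power under the current trend at the interim and, if it falls in a "promising" range, increase the sample size (up to a cap) to restore conditional power to the target. The zone boundaries, cap and test set-up are illustrative choices, not values taken from the review.

```python
# Simplified promising-zone sample size re-estimation rule (one-sided z-test framing).
# Zone boundaries, cap and inputs are illustrative, not values from the review.
from scipy.stats import norm

def promising_zone_n(z_interim, n_interim, n_planned, n_max,
                     target_cp=0.9, zone=(0.36, 0.8), alpha=0.025):
    theta_hat = z_interim / n_interim**0.5             # current-trend standardised effect
    def cp(n_final):
        t = n_interim / n_final                        # information fraction
        b = z_interim * t**0.5
        drift = theta_hat * n_final**0.5
        return norm.cdf((b + drift * (1 - t) - norm.ppf(1 - alpha)) / (1 - t)**0.5)
    cp_planned = cp(n_planned)
    if zone[0] <= cp_planned < zone[1]:                # promising: re-size up to the cap
        n = n_planned
        while n < n_max and cp(n) < target_cp:
            n += 1
        return n
    return n_planned                                   # unfavourable or favourable: keep plan

print(promising_zone_n(z_interim=1.4, n_interim=100, n_planned=200, n_max=400))
```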


Subject(s)
Research Design , Humans , Sample Size
15.
Trials ; 21(1): 528, 2020 Jun 17.
Article in English | MEDLINE | ID: mdl-32546273

ABSTRACT

Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of conclusions and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised. This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process. The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text. The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits. In order to encourage its wide dissemination this article is freely accessible on the BMJ and Trials journal websites. "To maximise the benefit to society, you need to not just do research but do it well" (Douglas G Altman).


Subject(s)
Checklist/standards , Consensus , Publishing/standards , Randomized Controlled Trials as Topic/standards , Research Design/standards , Delphi Technique , Guidelines as Topic , Humans , Periodicals as Topic , Quality Control , Reproducibility of Results
16.
BMJ ; 369: m115, 2020 06 17.
Article in English | MEDLINE | ID: mdl-32554564

ABSTRACT

Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of conclusions and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised. This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process. The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text. The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.


Subject(s)
Checklist , Consensus , Publishing/standards , Randomized Controlled Trials as Topic/standards , Research Design/standards , Checklist/standards , Delphi Technique , Guidelines as Topic , Humans , Periodicals as Topic , Quality Control , Reproducibility of Results
17.
Health Technol Assess ; 23(60): 1-88, 2019 10.
Article in English | MEDLINE | ID: mdl-31661431

ABSTRACT

BACKGROUND: The randomised controlled trial is widely considered to be the gold standard study for comparing the effectiveness of health interventions. Central to its design is a calculation of the number of participants needed (the sample size) for the trial. The sample size is typically calculated by specifying the magnitude of the difference in the primary outcome between the intervention effects for the population of interest. This difference is called the 'target difference' and should be appropriate for the principal estimand of interest and determined by the primary aim of the study. The target difference between treatments should be considered realistic and/or important by one or more key stakeholder groups. OBJECTIVE: The objective of the report is to provide researchers and funder representatives with practical help on the choice of the target difference used in the sample size calculation for a randomised controlled trial. METHODS: The Difference ELicitation in TriAls2 (DELTA2) recommendations and advice were developed through a five-stage process, which included two literature reviews of existing funder guidance and recent methodological literature; a Delphi process to engage with a wider group of stakeholders; a 2-day workshop; and finalising the core document. RESULTS: Advice is provided for definitive trials (Phase III/IV studies). Methods for choosing the target difference are reviewed. To aid those new to the topic, and to encourage better practice, 10 recommendations are made regarding choosing the target difference and undertaking a sample size calculation. Recommended reporting items for trial proposals, protocols and results papers under the conventional approach are also provided. Case studies reflecting different trial designs and covering different conditions are provided. Alternative trial designs and methods for choosing the sample size are also briefly considered. CONCLUSIONS: Choosing an appropriate sample size is crucial if a study is to inform clinical practice. The number of patients recruited into the trial needs to be sufficient to answer the objectives; however, the number should not be higher than necessary, to avoid unnecessary burden on patients and the waste of precious resources. The choice of the target difference is a key part of this process under the conventional approach to sample size calculations. This document provides advice and recommendations to improve practice and reporting regarding this aspect of trial design. Future work could extend this to address other, less common approaches to sample size calculations, particularly in terms of appropriate reporting items. FUNDING: Funded by the Medical Research Council (MRC) UK and the National Institute for Health Research as part of the MRC-National Institute for Health Research Methodology Research programme.
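
As a concrete reference point for the conventional approach the report describes, the sketch below computes the per-arm sample size for a two-arm parallel trial with a continuous outcome from a chosen target difference, standard deviation, significance level and power; the numbers used are purely illustrative.

```python
# Conventional two-sample sample size calculation driven by a chosen target difference.
# Target difference and SD below are illustrative values only.
from math import ceil
from scipy.stats import norm

def n_per_arm(target_diff, sd, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sided comparison of means (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / target_diff) ** 2)

print(n_per_arm(target_diff=5, sd=12))   # about 122 participants per arm
```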


This Difference ELicitation in TriAls2 (DELTA2) advice and recommendations document aims to help researchers choose the 'target difference' in a type of research study called a randomised controlled trial. The number of people needed to be involved in a study (the sample size) is usually based on a calculation aimed at ensuring that the difference in benefit between treatments is likely to be detected. The calculation also accounts for the risk of a false-positive finding. No more patients than necessary should be involved. Choosing a 'target difference' is an important step in calculating the sample size. The target difference is defined as the amount of difference in the participants' response to the treatments that we wish to detect. It is probably the most important piece of information used in the sample size calculation. How we decide what the target difference should be depends on various factors. One key decision to make is how we should measure the benefits that treatments offer. For example, if we are evaluating a treatment for high blood pressure, the obvious thing to focus on would be blood pressure. We could then proceed to consider what an important difference in blood pressure between treatments would be, based on experts' views or evidence from previous research studies. This document seeks to provide assistance to researchers on how to choose the target difference when designing a trial. It also provides advice to help them clearly present what was done and why, when writing up the study proposal or reporting the study's findings. The document is also intended to be read by those who decide whether or not a proposed study should be funded. Clarifying a study's aim and getting a sensible sample size is important. It can affect not only those involved in the study, but also future patients who will receive treatment.


Subject(s)
Randomized Controlled Trials as Topic , Sample Size , Biomedical Research , Clinical Trials, Phase III as Topic , Clinical Trials, Phase IV as Topic , Delphi Technique , Education , Humans
19.
Trials ; 20(1): 493, 2019 Aug 09.
Article in English | MEDLINE | ID: mdl-31399148

ABSTRACT

BACKGROUND: With millions of pounds spent annually on medical research in the UK, it is important that studies spend funds wisely. Internal pilots offer the chance to stop a trial early if it becomes apparent that the study will not be able to recruit enough patients to show whether an intervention is clinically effective. This study aims to assess the use of internal pilots in individually randomised controlled trials funded by the Health Technology Assessment (HTA) programme and to summarise the progression criteria chosen in these trials. METHODS: Studies were identified from reports of the HTA committees' funding decisions from 2012 to 2016. In total, 242 trials were identified, of which 134 were eligible to be included in the audit. Protocols for the eligible studies were located on the NIHR Journals website; if protocols were not available online, study managers were contacted to provide information. RESULTS: Over two-thirds (72.4%) of studies stated in their protocol that they would include an internal pilot phase, and 37.8% of studies without an internal pilot had conducted an external pilot study to assess the feasibility of the full study. A typical study with an internal pilot has a target sample size of 510 over 24 months and aims to recruit one-fifth of its total target sample size within the first one-third of its recruitment time. There has been an increase in studies adopting a three-tiered structure for their progression rules in recent years, with 61.5% (16/26) of studies using the system in 2016 compared with just 11.8% (2/17) in 2015. There was also a rise in the number of studies giving a target recruitment rate in their progression criteria: 42.3% (11/26) in 2016 compared with 35.3% (6/17) in 2015. CONCLUSIONS: Progression criteria for an internal pilot are usually well specified, but targets vary widely. For the actual criteria, red/amber/green systems have increased in popularity in recent years. Trials should justify the targets they have set, especially where targets are low.


Subject(s)
Randomized Controlled Trials as Topic , Humans , Medical Audit , Pilot Projects , Technology Assessment, Biomedical
20.
Pharm Stat ; 18(1): 115-122, 2019 01.
Article in English | MEDLINE | ID: mdl-30411472

ABSTRACT

For any estimate of response, confidence intervals are important, as they help quantify a plausible range of values for the population response. However, there may be instances in clinical research when the population size is finite, but we wish to take a sample from that population and make inference from the sample. Instances with a fixed, finite population include a clinical audit of patient records, or a clinical trial in which a researcher checks a sample of records for transcription errors against patient notes. In this paper, we describe how confidence intervals can be calculated for a finite population. These confidence intervals are narrower than confidence intervals calculated under the usual assumption of sampling from an infinite population. In the extreme case where a 100% sample of the population is taken, there is no sampling error and the calculated value is the population response. The methods in the paper are described using a case study from clinical data management.
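
The sketch below illustrates the underlying idea with the standard finite population correction applied to a confidence interval for a proportion (for example, auditing n records out of a population of N); it is offered as a generic illustration rather than the authors' specific method, and the inputs are made up.

```python
# Confidence interval for a proportion when sampling from a finite population,
# using the standard finite population correction (FPC). Inputs are illustrative.
from math import sqrt
from scipy.stats import norm

def finite_pop_ci(errors, n, N, conf=0.95):
    """Wald interval for an error proportion from a sample of n records out of N."""
    p = errors / n
    fpc = sqrt((N - n) / (N - 1))                      # equals 0 when the whole population is sampled
    se = sqrt(p * (1 - p) / n) * fpc
    z = norm.ppf(0.5 + conf / 2)
    return max(0.0, p - z * se), min(1.0, p + z * se)

print(finite_pop_ci(errors=6, n=120, N=400))           # narrower than the infinite-population CI
```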


Subject(s)
Biostatistics/methods , Data Mining/statistics & numerical data , Databases, Factual/statistics & numerical data , Sample Size , Confidence Intervals , Data Accuracy , Data Interpretation, Statistical , Data Mining/standards , Databases, Factual/standards , Humans , Models, Statistical , Quality Control