Results 1 - 20 of 1,971
1.
BMC Med Res Methodol ; 24(1): 226, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39358754

ABSTRACT

BACKGROUND: Whether or not to progress from a pilot study to a definitive trial is often guided by pre-specified quantitative progression criteria with three possible outcomes. Although the choice of these progression criteria helps to determine the statistical properties of the pilot trial, there is a lack of research examining how they, or the pilot sample size, should be determined. METHODS: We review three-outcome trial designs originally proposed in the phase II oncology setting and extend these to the case of external pilots, proposing a unified framework based on univariate hypothesis tests and the control of frequentist error rates. We apply this framework to an example and compare it against a simple two-outcome alternative. RESULTS: We find that three-outcome designs can be used in the pilot setting, although they are not generally more efficient than simpler two-outcome alternatives. We show that three-outcome designs can allow other sources of information or other stakeholders to feed into progression decisions in the event of a borderline result, but this comes at the cost of a larger pilot sample size than in the two-outcome case. We also show that three-outcome designs can be used to allow adjustments to be made to the intervention or trial design before commencing the definitive trial, provided the effect of the adjustment can be accurately predicted at the pilot design stage. An R package, tout, is provided to optimise progression criteria and pilot sample size. CONCLUSIONS: The proposed three-outcome framework provides a way to choose pilot trial progression criteria and sample size so that the design achieves the desired operating characteristics. It can be applied whether or not an adjustment following the pilot trial is anticipated, but will generally lead to larger sample size requirements than simpler two-outcome alternatives.


Subject(s)
Research Design; Pilot Projects; Humans; Sample Size; Disease Progression; Outcome Assessment, Health Care/methods; Outcome Assessment, Health Care/statistics & numerical data; Clinical Trials, Phase II as Topic/methods; Clinical Trials, Phase II as Topic/statistics & numerical data; Clinical Trials as Topic/methods; Clinical Trials as Topic/statistics & numerical data; Treatment Outcome
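The abstract describes progression rules defined by two thresholds on a single pilot outcome. As a rough illustration (not the authors' tout R package), the Python sketch below computes the exact stop/amber/go probabilities of such a rule for a binary retention outcome; the pilot size, thresholds, and retention rates are hypothetical.

```python
# A minimal sketch (not the 'tout' R package): exact operating characteristics
# of a three-outcome (stop / amber / go) progression rule for a binary pilot
# outcome such as follow-up retention. All values below are hypothetical.
from scipy.stats import binom

n = 60           # hypothetical pilot sample size
x_stop = 45      # "stop" if 45 or fewer participants are retained
x_go = 52        # "go" if 52 or more participants are retained
p0, p1 = 0.70, 0.85   # unacceptable vs. acceptable true retention rates

def decision_probs(p):
    """Return P(stop), P(amber), P(go) when the true retention rate is p."""
    p_stop = binom.cdf(x_stop, n, p)       # P(X <= x_stop)
    p_go = binom.sf(x_go - 1, n, p)        # P(X >= x_go)
    return p_stop, 1.0 - p_stop - p_go, p_go

for label, p in [("p0 (unacceptable)", p0), ("p1 (acceptable)", p1)]:
    s, a, g = decision_probs(p)
    print(f"{label}: stop={s:.3f}  amber={a:.3f}  go={g:.3f}")
```

Under the unacceptable rate p0 the "go" probability plays the role of a type I error, and under the acceptable rate p1 the "stop" probability plays the role of a type II error; choosing n and the two thresholds amounts to controlling these quantities.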
2.
Article in English | MEDLINE | ID: mdl-39352067

ABSTRACT

For several years, the evaluation of polytomous attributes in small-sample settings has posed a challenge to the application of cognitive diagnosis models. To enhance classification precision, the support vector machine (SVM) was introduced for estimating polytomous attributes, given its proven feasibility in dichotomous cases. Two simulation studies and an empirical study assessed the impact of various factors on SVM classification performance, including training sample size, attribute structures, guessing/slipping levels, number of attributes, number of attribute levels, and number of items. The results indicated that the SVM outperformed the pG-DINA model in classification accuracy under dependent attribute structures and small sample sizes. SVM performance improved with an increased number of items but declined with higher guessing/slipping levels, more attributes, and more attribute levels. Empirical data further validated the application and advantages of SVMs.
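As a generic illustration of the kind of classifier evaluated in the study, the sketch below trains a scikit-learn SVM to recover a three-level attribute from simulated binary item responses; the data-generating scheme and all parameter values are placeholders, not the simulation design or the pG-DINA comparison used by the authors.

```python
# A generic SVM-classification sketch with scikit-learn; the simulated item
# responses and polytomous attribute labels are placeholders for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_examinees, n_items = 300, 20                  # small-sample setting
attribute = rng.integers(0, 3, n_examinees)     # one polytomous attribute with 3 levels

# Item responses loosely depend on the attribute level, plus guessing/slipping noise.
prob_correct = 0.25 + 0.25 * attribute[:, None] + rng.normal(0, 0.05, (n_examinees, n_items))
X = (rng.random((n_examinees, n_items)) < prob_correct.clip(0, 1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, attribute, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)  # RBF-kernel SVM with default settings
print("attribute classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```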

3.
BMC Med Res Methodol ; 24(1): 216, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39333920

ABSTRACT

BACKGROUND: An adaptive design allows the trial design to be modified based on accumulating data while maintaining trial validity and integrity. The final sample size may be unknown when designing an adaptive trial. It is therefore important to consider what sample size is used in the planning of the study and how that is communicated, to add transparency to the understanding of the trial design and facilitate robust planning. In this paper, we reviewed how sample size was reported in trial protocols and grant applications for randomised adaptive trials. METHOD: We searched protocols of randomised trials with comparative objectives on ClinicalTrials.gov (01/01/2010 to 31/12/2022). Contemporary eligible grant applications accessed from UK publicly funded researchers were also included. Suitable records of adaptive designs were reviewed, and key information was extracted and descriptively analysed. RESULTS: We identified 439 records, of which 265 trials were eligible. Of these, 164 (61.9%) and 101 (38.1%) were sponsored by the industry and public sectors, respectively, and 169 (63.8%) of all trials used a group sequential design, although the trial adaptations used were diverse. The maximum and minimum sample sizes were the quantities most often reported or directly inferable (n = 199, 75.1%). The sample size assuming no adaptation would be triggered was usually set as the estimated target sample size in the protocol. However, of the 152 completed trials, 15 (9.9%) and 33 (21.7%) had their sample size increased or reduced, respectively, as a result of trial adaptations. The sample size calculation process was well reported in most cases (n = 216, 81.5%); however, the justification for the sample size calculation parameters was missing in 116 (43.8%) trials. Fewer than half gave sufficient information on the study design operating characteristics (n = 119, 44.9%). CONCLUSION: Although the reporting of sample sizes varied, the maximum and minimum sample sizes were usually reported. Most of the trials were planned for an estimated enrolment assuming no adaptation would be triggered, despite the fact that about a third of completed trials changed their sample size. The sample size calculation was generally well reported, but the justification of sample size calculation parameters and the reporting of the statistical behaviour of the adaptive design could still be improved.


Subject(s)
Research Design; Sample Size; Humans; Research Design/statistics & numerical data; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data; Adaptive Clinical Trials as Topic/statistics & numerical data; Adaptive Clinical Trials as Topic/methods; Communication
4.
Implement Res Pract ; 5: 26334895241279153, 2024.
Article in English | MEDLINE | ID: mdl-39346518

ABSTRACT

Background: Despite the ubiquity of multilevel sampling, design, and analysis in mental health implementation trials, few resources are available that provide reference values of design parameters (e.g., effect size, intraclass correlation coefficient [ICC], and proportion of variance explained by covariates [covariate R²]) needed to accurately determine sample size. The aim of this study was to provide empirical reference values for these parameters by aggregating data on implementation and clinical outcomes from multilevel implementation trials, including cluster randomized trials and individually randomized repeated measures trials, in mental health. The compendium of design parameters presented here represents plausible values that implementation scientists can use to guide sample size calculations for future trials. Method: We searched NIH RePORTER for all federally funded, multilevel implementation trials addressing mental health populations and settings from 2010 to 2020. For all continuous and binary implementation and clinical outcomes included in eligible trials, we generated values of effect size, ICC, and covariate R² at each level via secondary analysis of trial data or via extraction of estimates from analyses in published research reports. Effect sizes were calculated as Cohen's d; ICCs were generated via one-way random effects ANOVAs; covariate R² estimates were calculated using the reduction-in-variance approach. Results: Seventeen trials were eligible, reporting on 53 implementation and clinical outcomes and 81 contrasts between implementation conditions. Tables of effect size, ICC, and covariate R² are provided to guide implementation researchers in power analyses for designing multilevel implementation trials in mental health settings, including two- and three-level cluster randomized designs and unit-randomized repeated-measures designs. Conclusions: Researchers can use the empirical reference values reported in this study to develop meaningful sample size determinations for multilevel implementation trials in mental health. Discussion focuses on the application of the reference values reported in this study.


To improve the planning and execution of implementation research in mental health settings, researchers need accurate estimates of several key metrics to help determine what sample size should be obtained at each level of a multilevel study (e.g., number of patients, doctors, and clinics). These metrics include (1) the effect size, which indicates how large a difference in the primary outcome is expected between a treatment and control group; (2) the intraclass correlation coefficient, which describes how similar two people in the same group might be; and (3) the covariate R², which indicates how much of the variability in an outcome is explained by a background variable, such as level of health at the start of a study. We collected data from mental health implementation trials conducted between 2010 and 2020. We extracted information about each of these metrics and aggregated the results for researchers to use in planning their own studies. Seventeen trials were eligible, and we were able to obtain statistical information on 53 different outcome variables from these studies. We provide a set of values which will assist in sample size calculations for future mental health implementation trials.
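As a rough sketch of how the three reported parameters might enter a two-level sample size calculation, the Python fragment below combines a standard two-sample formula with a covariate adjustment and the design effect for clustering; the effect size, ICC, covariate R², and cluster size are illustrative assumptions, not values from the study's tables.

```python
# A minimal two-level (patients within clinics) sample size sketch using the
# three design parameters the compendium reports. All values are illustrative
# placeholders, not estimates taken from the study.
import math
from scipy.stats import norm

d = 0.40          # assumed Cohen's d for the implementation outcome
icc = 0.05        # assumed intraclass correlation at the clinic level
r2_cov = 0.20     # assumed proportion of variance explained by covariates
m = 15            # assumed number of patients per clinic
alpha, power = 0.05, 0.80

z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = 2 * (z / d) ** 2          # individually randomised, no covariates
n_per_arm *= (1 - r2_cov)             # covariate adjustment reduces residual variance
n_per_arm *= 1 + (m - 1) * icc        # design effect for clustering
clusters_per_arm = math.ceil(n_per_arm / m)
print(f"~{math.ceil(n_per_arm)} patients (~{clusters_per_arm} clinics) per arm")
```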

5.
J Clin Epidemiol ; : 111535, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39307404

ABSTRACT

OBJECTIVES: Economic evaluation outcomes are seldom taken into consideration during sample size calculation in pragmatic trials. The quality of reporting of sample sizes, and of the information underlying their calculation, in economic evaluations conducted alongside pragmatic randomized controlled trials (pRCTs) remains unknown. This study aims to assess the sample size and power of economic evaluations in pRCTs. STUDY DESIGN AND SETTING: We conducted a cross-sectional survey using data on pRCTs available from PubMed and OVID from 1 January 2010 to 24 April 2022. Two groups of independent reviewers identified articles; three groups of reviewers each extracted the data. Descriptive statistics were used to summarise the general characteristics of included studies. Statistical power analyses were performed on clinical and economic outcomes with sufficient data. RESULTS: The electronic search identified 715 studies; 152 met the inclusion criteria and, of these, 26 were available for power analysis. Only 9 of 152 trials (5.9%) considered economic outcomes when estimating sample size, and only one adjusted the sample size accordingly. Power values ranged from 2.56% to 100% for trial-based economic evaluations and from 3.21% to 100% for clinical trials. Regardless of perspective, in 14 of 26 studies (53.8%) the power of the economic evaluation for quality-adjusted life years (QALYs) was lower than that of the clinical trial for its primary endpoint (PE). From the healthcare and societal perspectives, the power of the economic evaluation for QALYs was lower than that of the clinical trial for its PE in 11 of 24 (45.8%) and 8 of 13 (61.5%) studies, respectively. The power of economic evaluations for non-QALY outcomes from the healthcare and societal perspectives was potentially higher than that of the clinical trials in 3 of 4 studies (75%). Power values for economic outcomes in Q1 journals were not significantly higher than those in other journal impact factor quartiles. CONCLUSIONS: Theoretically, pragmatic trials with concurrent economic evaluations can provide real-world evidence for healthcare decision makers. However, in pRCT-based economic evaluations, limited consideration and inadequate reporting of sample size calculations for economic outcomes could undermine the reliability and generalisability of results. To avoid decisions being misled by study results, we recommend that future pragmatic trials with economic evaluations report in their protocols how sample sizes are determined or adjusted based on economic outcomes, to enhance transparency and evidence quality.
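The power analyses described above can be illustrated with a simple sketch: the power of a two-sample comparison of mean QALYs, computed with statsmodels. The assumed QALY difference, standard deviation, and group size are hypothetical and are not taken from the reviewed trials.

```python
# A sketch of the kind of power calculation applied to an economic outcome:
# power of a two-sample comparison of mean QALYs. All values are hypothetical.
from statsmodels.stats.power import TTestIndPower

delta_qaly = 0.03     # assumed between-group difference in QALYs
sd_qaly = 0.20        # assumed common standard deviation of QALYs
n_per_group = 150     # assumed achieved sample size per arm

power = TTestIndPower().power(effect_size=delta_qaly / sd_qaly,
                              nobs1=n_per_group, ratio=1.0, alpha=0.05)
print(f"power for the QALY comparison: {power:.2f}")
```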

6.
ArXiv ; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39314504

ABSTRACT

Accurate sample classification using transcriptomics data is crucial for advancing personalized medicine. Achieving this goal necessitates determining a suitable sample size that ensures adequate statistical power without undue resource allocation. Current sample size calculation methods rely on assumptions and algorithms that may not align with supervised machine learning techniques for sample classification. Addressing this critical methodological gap, we present a novel computational approach that establishes the power-versus-sample-size relationship by employing a data augmentation strategy followed by fitting a learning curve. We comprehensively evaluated its performance for microRNA and RNA sequencing data, considering diverse data characteristics and algorithm configurations, based on a spectrum of evaluation metrics. To foster accessibility and reproducibility, the Python and R code for implementing our approach is available on GitHub. Its deployment will significantly facilitate the adoption of machine learning in transcriptomics studies and accelerate their translation into clinically useful classifiers for personalized treatment.
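The authors' own Python and R implementations are available on GitHub; the fragment below only sketches the general idea of fitting a learning curve to classification accuracy at increasing training-set sizes and extrapolating to a target accuracy. The pilot accuracies, the inverse-power-law curve form, and the target value are assumptions for illustration, not the authors' released code.

```python
# A rough illustration (not the authors' code) of fitting a learning curve,
# accuracy(n) ~ a - b * n**(-c), to pilot classification results and
# extrapolating the sample size needed for a target accuracy.
import numpy as np
from scipy.optimize import curve_fit

n_pilot = np.array([25, 50, 100, 200, 400])           # pilot training-set sizes
acc_pilot = np.array([0.62, 0.68, 0.74, 0.78, 0.81])  # hypothetical mean accuracies

def learning_curve(n, a, b, c):
    return a - b * n ** (-c)

params, _ = curve_fit(learning_curve, n_pilot, acc_pilot,
                      p0=[0.9, 1.0, 0.5], maxfev=10000)

target = 0.80
grid = np.arange(50, 5001)
needed = grid[learning_curve(grid, *params) >= target]
print("estimated n for", target, "accuracy:",
      int(needed[0]) if needed.size else "beyond grid")
```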

7.
Lab Anim ; 58(5): 486-492, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39315534

ABSTRACT

Null hypothesis significance testing is a statistical tool commonly employed throughout laboratory animal research. When experimental results are reported, the reproducibility of the results is of utmost importance. Establishing standard, robust, and adequately powered statistical methodology in the analysis of laboratory animal data is critical to ensure reproducible and valid results. Simulation studies are a reliable method for assessing the power of statistical tests; however, biologists may not be familiar with simulation studies for power despite their efficacy and accessibility. Through an example of simulated Harlan Sprague-Dawley (HSD) rat organ weight data, we highlight the importance of conducting power analyses in laboratory animal research. Using simulations to determine statistical power prior to an experiment is a financially and ethically sound way to validate statistical tests and to help ensure reproducibility of findings in line with the 4R principles of animal welfare.


Subject(s)
Animal Experimentation; Animals, Laboratory; Rats, Sprague-Dawley; Animals; Animal Experimentation/statistics & numerical data; Rats/physiology; Computer Simulation; Research Design; Reproducibility of Results; Animal Welfare
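A minimal simulation-based power analysis in the spirit of the article might look like the sketch below: simulate two-group organ-weight data many times and count how often a t-test rejects. The means, standard deviation, and group size are invented for illustration and are not HSD reference values.

```python
# A minimal simulation-based power analysis: simulate two-group organ-weight
# data repeatedly and estimate power as the rejection rate of a t-test.
# The means, SD, and group size below are invented, not HSD reference data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_per_group, n_sims = 10, 5000
mean_ctrl, mean_trt, sd = 3.50, 3.15, 0.35   # hypothetical liver weights (g)

rejections = 0
for _ in range(n_sims):
    ctrl = rng.normal(mean_ctrl, sd, n_per_group)
    trt = rng.normal(mean_trt, sd, n_per_group)
    if ttest_ind(ctrl, trt).pvalue < 0.05:
        rejections += 1
print(f"estimated power: {rejections / n_sims:.2f}")
```

If the estimated power is too low, the group size or the detectable difference can be adjusted and the simulation rerun before any animals are used.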
8.
Lab Anim ; 58(5): 411-418, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39315538

ABSTRACT

Animal research often involves experiments in which the effect of several factors on a particular outcome is of scientific interest. Many researchers approach such experiments by varying just one factor at a time. As a consequence, they design and analyze the experiments based on a pairwise comparison between two groups. However, this approach uses unreasonably large numbers of animals and leads to severe limitations in terms of the research questions that can be answered. Factorial designs and analyses offer a more efficient way to perform and assess experiments with multiple factors of interest. We illustrate the basic principles behind these designs, discussing a simple example with only two factors, before suggesting how to design and analyze more complex experiments involving larger numbers of factors using multiway analysis of variance.


Subject(s)
Research Design; Animals; Animal Experimentation/statistics & numerical data; Animals, Laboratory
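As a minimal illustration of the factorial approach the article recommends, the sketch below fits a two-factor (2 x 2) model with statsmodels and reports both main effects and their interaction from a single analysis rather than separate pairwise comparisons; the factors, effect sizes, and simulated responses are placeholders for a real animal experiment.

```python
# A minimal two-factor (2 x 2) ANOVA sketch with statsmodels. The factors,
# effect sizes, and simulated responses are placeholders for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_per_cell = 8
rows = []
for sex in ("male", "female"):
    for diet in ("control", "high_fat"):
        effect = (0.4 if sex == "female" else 0.0) + (0.8 if diet == "high_fat" else 0.0)
        for y in rng.normal(10 + effect, 1.0, n_per_cell):
            rows.append({"sex": sex, "diet": diet, "weight_gain": y})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: both main effects and their interaction
# are tested in one model instead of several two-group comparisons.
model = smf.ols("weight_gain ~ C(sex) * C(diet)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```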
9.
Pharmacotherapy ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39225370

ABSTRACT

This article reflects on the potential value and the many pitfalls of underpowered studies to help authors and readers consider whether and how such studies contribute meaningfully to the published literature. A basic introduction to power and sample size calculations is provided. Several problems that can arise in the analysis and publication of underpowered studies are described. In addition, features of underpowered studies that may provide value are proposed, including when the hypothesis test of interest is only a limited part of the story, when the data are rich enough to showcase interesting features of the population of interest, when the rarity or ubiquity of events is an important finding, and when the study is preregistered to reduce the impact of publication bias. Several reporting guidelines for underpowered studies are also suggested.
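The basic a priori calculation the article introduces can be sketched with statsmodels: solve for the per-group sample size that gives a target power for an assumed effect size. The effect size, alpha, and power target below are illustrative assumptions.

```python
# A basic a priori sample size calculation: how many participants per group
# are needed to detect an assumed effect. All inputs are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # assumed Cohen's d
                                   alpha=0.05,
                                   power=0.80,
                                   ratio=1.0)
print(f"required n per group: {n_per_group:.1f}")      # roughly 64 per group
```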
