ABSTRACT
Conventional random-effects models in meta-analysis rely on large-sample approximations rather than exact small-sample results. Although random-effects methods produce efficient estimates, and confidence intervals for the summary effect have correct coverage, when the number of studies is sufficiently large, we demonstrate that conventional methods yield confidence intervals that are too narrow when the number of studies is small; the severity of the undercoverage depends on the configuration of sample sizes across studies, the degree of true heterogeneity, and the number of studies. We introduce two alternative variance estimators with better small-sample properties, investigate degrees-of-freedom adjustments for computing confidence intervals, and evaluate their effectiveness via simulation studies.
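The undercoverage described above can be illustrated with a small simulation. This is a minimal sketch, not the article's actual design: it assumes DerSimonian-Laird heterogeneity estimation with a normal-approximation (Wald) interval, and the within-study variances and heterogeneity value are illustrative choices.

```python
import numpy as np

def dl_meta(y, v):
    """DerSimonian-Laird random-effects summary with a 95% Wald interval."""
    w = 1.0 / v                              # fixed-effect weights
    yw = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - yw) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)  # DL heterogeneity estimate
    ws = 1.0 / (v + tau2)                    # random-effects weights
    mu = np.sum(ws * y) / np.sum(ws)
    se = np.sqrt(1.0 / np.sum(ws))
    return mu, mu - 1.96 * se, mu + 1.96 * se

def coverage(k, tau2, n_rep=2000, seed=0):
    """Monte Carlo coverage of the nominal 95% interval for a true effect of 0."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_rep):
        v = rng.uniform(0.05, 0.5, size=k)               # within-study variances
        theta = rng.normal(0.0, np.sqrt(tau2), size=k)   # true study effects
        y = rng.normal(theta, np.sqrt(v))                # observed effects
        _, lo, hi = dl_meta(y, v)
        hits += (lo <= 0.0 <= hi)
    return hits / n_rep

# With few studies and nontrivial heterogeneity, e.g. coverage(5, 0.3),
# the empirical coverage typically falls short of the nominal 0.95.
```

Comparing `coverage(5, 0.3)` against `coverage(50, 0.3)` shows the gap shrinking as the number of studies grows, which is the large-sample behavior the abstract refers to.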
Subjects
Statistical Models, Computer Simulation, Sample Size

ABSTRACT
It is common practice in both randomized and quasi-experiments to adjust for baseline characteristics when estimating the average effect of an intervention. The inclusion of a pre-test, for example, can reduce both the standard error of this estimate and, in non-randomized designs, its bias. At the same time, it is also standard to report the effect of an intervention in standardized effect size units, thereby making it comparable to other interventions and studies. Curiously, the estimation of this effect size, including covariate adjustment, has received little attention. In this article, we provide a framework for defining effect sizes in designs with a pre-test (e.g., difference-in-differences and analysis of covariance) and propose estimators of those effect sizes. The estimators and approximations to their sampling distributions are evaluated using a simulation study and then demonstrated using an example from published data.
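One common way to combine covariate adjustment with a standardized effect size, sketched below under illustrative assumptions (it is not necessarily the estimator the article proposes), is to take the numerator from an ANCOVA regression of the post-test on treatment and the pre-test, while standardizing by the unadjusted pooled within-group SD of the post-test so that the result stays on the usual Cohen's-d scale:

```python
import numpy as np

def ancova_effect_size(pre, post, treat):
    """Covariate-adjusted standardized mean difference (illustrative sketch).

    Numerator: treatment coefficient from regressing the post-test on an
    intercept, treatment, and the pre-test (ANCOVA adjustment).
    Denominator: unadjusted pooled within-group SD of the post-test, so the
    estimate is comparable to an ordinary standardized mean difference.
    """
    X = np.column_stack([np.ones_like(pre), treat, pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    g0, g1 = post[treat == 0], post[treat == 1]
    s2 = ((len(g0) - 1) * g0.var(ddof=1) + (len(g1) - 1) * g1.var(ddof=1)) / (
        len(g0) + len(g1) - 2
    )
    return beta[1] / np.sqrt(s2)
```

The design choice worth noting is the standardizer: dividing the adjusted mean difference by the adjusted (residual) SD would inflate the effect size relative to studies that report unadjusted standardized differences, which is exactly the comparability problem the abstract raises.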