Results 1 - 16 of 16
1.
Psychol Methods ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829357

ABSTRACT

We demonstrate that all conventional meta-analyses of correlation coefficients are biased, explain why, and offer solutions. Because the standard errors of correlation coefficients depend on the size of the coefficient, inverse-variance weighted averages will be biased even under ideal meta-analytical conditions (i.e., absence of publication bias, p-hacking, or other biases). Transformation to Fisher's z often greatly reduces these biases but does not eliminate them entirely. Although all are small-sample biases (n < 200), they will often have practical consequences in psychology, where the typical sample size of correlational studies is 86. We offer two solutions: the well-known Fisher's z-transformation and a new small-sample adjustment of Fisher's z that renders any remaining bias scientifically trivial. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
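The mechanics behind this abstract's argument can be sketched in a few lines: correlations are averaged on the Fisher-z scale, where the variance 1/(n - 3) no longer depends on the coefficient itself. The data below are hypothetical, and the paper's own small-sample adjustment is not reproduced here, only the standard z-transformation it builds on.

```python
# Sketch: inverse-variance averaging of correlations on the Fisher-z scale.
import numpy as np

r = np.array([0.30, 0.15, 0.45, 0.25])   # observed correlations (hypothetical)
n = np.array([86, 60, 120, 75])          # sample sizes (typical n in psychology ~86)

z = np.arctanh(r)            # Fisher's z = 0.5 * ln((1 + r) / (1 - r))
var_z = 1.0 / (n - 3)        # variance of z does NOT depend on r, unlike var(r)
w = 1.0 / var_z              # inverse-variance weights

z_bar = np.sum(w * z) / np.sum(w)        # weighted mean on the z scale
r_bar = np.tanh(z_bar)                   # back-transform to the correlation scale
```

Averaging raw correlations with weights based on (1 - r^2)^2 / (n - 1) would instead tie each study's weight to its observed coefficient, which is the source of the bias the abstract describes.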

2.
Res Synth Methods ; 15(3): 500-511, 2024 May.
Article in English | MEDLINE | ID: mdl-38327122

ABSTRACT

Publication selection bias undermines the systematic accumulation of evidence. To assess the extent of this problem, we survey over 68,000 meta-analyses containing over 700,000 effect size estimates from medicine (67,386/597,699), environmental sciences (199/12,707), psychology (605/23,563), and economics (327/91,421). Our results indicate that meta-analyses in economics are the most severely contaminated by publication selection bias, closely followed by meta-analyses in environmental sciences and psychology, whereas meta-analyses in medicine are contaminated the least. After adjusting for publication selection bias, the median probability of the presence of an effect decreased from 99.9% to 29.7% in economics, from 98.9% to 55.7% in psychology, from 99.8% to 70.7% in environmental sciences, and from 38.0% to 29.7% in medicine. The median absolute effect sizes (in terms of standardized mean differences) decreased from d = 0.20 to d = 0.07 in economics, from d = 0.37 to d = 0.26 in psychology, from d = 0.62 to d = 0.43 in environmental sciences, and from d = 0.24 to d = 0.13 in medicine.


Subject(s)
Economics , Meta-Analysis as Topic , Psychology , Publication Bias , Humans , Ecology , Research Design , Selection Bias , Probability , Medicine
3.
Res Synth Methods ; 15(2): 313-325, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38342768

ABSTRACT

We demonstrate that all meta-analyses of partial correlations are biased, and yet hundreds of meta-analyses of partial correlation coefficients (PCCs) are conducted each year across economics, business, education, psychology, and medical research. To address these biases, we offer a new weighted average, UWLS+3. UWLS+3 is the unrestricted weighted least squares weighted average that adjusts the degrees of freedom used to calculate partial correlations and, by doing so, renders any remaining meta-analysis bias trivial. Our simulations also reveal that these meta-analysis biases are small-sample biases (n < 200), and a simple correction factor of (n - 2)/(n - 1), along with Fisher's z, greatly reduces these small-sample biases. In many applications where primary studies typically have hundreds or more observations, partial correlations can be meta-analyzed in standard ways with only negligible bias. However, in other fields in the social and medical sciences that are dominated by small samples, these meta-analysis biases are easily avoidable by our proposed methods.


Subject(s)
Biomedical Research , Research Design , Bias , Least-Squares Analysis
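Entry 3's ingredients can be illustrated as follows. Partial correlations are recovered from reported t-statistics and degrees of freedom; the (n - 2)/(n - 1) factor is applied multiplicatively here, which is an assumption about where the correction enters (the paper's UWLS+3 estimator itself is not reproduced). All numbers are hypothetical.

```python
# Sketch: partial correlations from reported t-statistics, with a hedged
# (n - 2)/(n - 1) small-sample correction and Fisher's z transformation.
import numpy as np

t  = np.array([2.10, -1.30, 3.40])    # regression t-statistics (hypothetical)
df = np.array([45, 88, 152])          # residual degrees of freedom
n  = df + 5                           # assuming 4 regressors plus an intercept

pcc = t / np.sqrt(t**2 + df)          # standard PCC recovered from t and df
pcc_adj = pcc * (n - 2) / (n - 1)     # assumed placement of the correction factor
z = np.arctanh(pcc_adj)               # Fisher's z further reduces small-sample bias
```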
4.
Res Synth Methods ; 14(3): 515-519, 2023 May.
Article in English | MEDLINE | ID: mdl-36880162

ABSTRACT

Partial correlation coefficients are often used as effect sizes in the meta-analysis and systematic review of multiple regression analysis research results. There are two well-known formulas for the variance and thereby for the standard error (SE) of partial correlation coefficients (PCC). One is considered the "correct" variance in the sense that it better reflects the variation of the sampling distribution of partial correlation coefficients. The second is used to test whether the population PCC is zero, and it reproduces the test statistics and the p-values of the original multiple regression coefficient that PCC is meant to represent. Simulations show that the "correct" PCC variance causes random effects to be more biased than the alternative variance formula. Meta-analyses produced by this alternative formula statistically dominate those that use "correct" SEs. Meta-analysts should never use the "correct" formula for partial correlations' standard errors.


Subject(s)
Bias , Meta-Analysis as Topic
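The two standard-error formulas contrasted in entry 4 can be reconstructed from standard results (exact degrees-of-freedom conventions vary across textbooks, so treat this as illustrative). The "alternative" SE exactly reproduces the original regression t-statistic, which is the property the abstract highlights.

```python
# Sketch of the two PCC standard-error formulas: one tracking the sampling
# distribution ("correct"), one reproducing the original t-test.
import numpy as np

t, df = 2.5, 100
pcc = t / np.sqrt(t**2 + df)                 # partial correlation from a t-stat

se_correct = np.sqrt((1 - pcc**2)**2 / df)   # approximates the sampling variance
se_alt     = np.sqrt((1 - pcc**2) / df)      # reproduces the original t-test:
assert np.isclose(pcc / se_alt, t)           # pcc / se_alt == reported t, exactly
```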
5.
J Clin Epidemiol ; 157: 53-58, 2023 05.
Article in English | MEDLINE | ID: mdl-36889450

ABSTRACT

OBJECTIVES: To evaluate how well meta-analysis mean estimators represent reported medical research and establish which meta-analysis method is better using widely accepted model selection measures: Akaike information criterion (AIC) and Bayesian information criterion (BIC). STUDY DESIGN AND SETTING: We compiled 67,308 meta-analyses from the Cochrane Database of Systematic Reviews (CDSR) published between 1997 and 2020, collectively encompassing nearly 600,000 medical findings. We compared unrestricted weighted least squares (UWLS) vs. random effects (RE); fixed effect was also secondarily considered. RESULTS: The probability that a randomly selected systematic review from the CDSR would favor UWLS over RE is 79.4% (95% confidence interval [CI95%]: 79.1; 79.7). The odds ratio that a Cochrane systematic review would substantially favor UWLS over RE is 9.33 (CI95%: 8.94; 9.73) using the conventional criterion that a difference in AIC (or BIC) of two or larger represents a 'substantial' improvement. UWLS's advantage over RE is most prominent in the presence of low heterogeneity. However, UWLS also has a notable advantage in high heterogeneity research, across different sizes of meta-analyses and types of outcomes. CONCLUSION: UWLS frequently dominates RE in medical research, often substantially. Thus, the UWLS should be reported routinely in the meta-analysis of clinical trials.


Subject(s)
Biomedical Research , Humans , Least-Squares Analysis , Bayes Theorem , Systematic Reviews as Topic
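Entry 5's AIC comparison can be sketched in miniature. Both estimators are treated as two-parameter normal likelihoods for the observed effects: UWLS with multiplicative variance phi * se^2, RE with additive variance se^2 + tau^2. The data are hypothetical, and the RE fit uses a crude grid search rather than the paper's machinery.

```python
# Sketch: comparing UWLS and RE fits by AIC on toy meta-analysis data.
import numpy as np

rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.3, 40)              # studies' standard errors
y = 0.2 + rng.normal(0.0, se)                # observed effects, true mean 0.2

def loglik(mu, v):                           # normal log-likelihood, variances v
    return -0.5 * np.sum(np.log(2 * np.pi * v) + (y - mu) ** 2 / v)

# UWLS: Var(y_i) = phi * se_i^2 (multiplicative heterogeneity)
w = 1 / se**2
mu_u = np.sum(w * y) / np.sum(w)
phi = np.mean((y - mu_u) ** 2 / se**2)       # ML estimate of phi
aic_uwls = 2 * 2 - 2 * loglik(mu_u, phi * se**2)

# RE: Var(y_i) = se_i^2 + tau^2 (additive heterogeneity); grid-search ML
best_ll = -np.inf
for tau2 in np.linspace(0.0, 0.1, 201):
    v = se**2 + tau2
    mu = np.sum(y / v) / np.sum(1 / v)       # profile estimate of mu given tau^2
    best_ll = max(best_ll, loglik(mu, v))
aic_re = 2 * 2 - 2 * best_ll

delta = aic_uwls - aic_re   # delta <= -2 would 'substantially' favour UWLS
```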
6.
Res Synth Methods ; 14(1): 99-116, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35869696

ABSTRACT

Publication bias is a ubiquitous threat to the validity of meta-analysis and the accumulation of scientific evidence. In order to estimate and counteract the impact of publication bias, multiple methods have been developed; however, recent simulation studies have shown the methods' performance to depend on the true data generating process, and no method consistently outperforms the others across a wide range of conditions. Unfortunately, when different methods lead to contradicting conclusions, researchers can choose those methods that lead to a desired outcome. To avoid the condition-dependent, all-or-none choice between competing methods and conflicting results, we extend robust Bayesian meta-analysis and model-average across two prominent approaches of adjusting for publication bias: (1) selection models of p-values and (2) models adjusting for small-study effects. The resulting model ensemble weights the estimates and the evidence for the absence/presence of the effect from the competing approaches with the support they receive from the data. Applications, simulations, and comparisons to preregistered, multi-lab replications demonstrate the benefits of Bayesian model-averaging of complementary publication bias adjustment methods.


Subject(s)
Models, Statistical , Bayes Theorem , Publication Bias , Computer Simulation , Bias
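The model-averaging idea in entry 6 can be shown in miniature: competing bias-adjustment models' estimates are weighted by approximate posterior model probabilities. All numbers below are hypothetical, and the exp(-BIC/2) shortcut is a stand-in; the robust Bayesian meta-analysis framework itself computes marginal likelihoods properly rather than via this approximation.

```python
# Sketch: Bayesian model averaging of competing publication bias adjustments,
# using the BIC approximation to posterior model probabilities.
import numpy as np

estimates = np.array([0.30, 0.12, 0.05])     # e.g. unadjusted, selection, PEESE
bics      = np.array([210.0, 206.5, 208.1])  # hypothetical model BICs

w = np.exp(-0.5 * (bics - bics.min()))       # relative evidence per model
w /= w.sum()                                 # approximate posterior model probs
avg = np.sum(w * estimates)                  # model-averaged effect estimate
```

The result is pulled toward the best-supported model while still acknowledging the others, which is the "weights the estimates ... with the support they receive from the data" behaviour the abstract describes.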
7.
Psychol Methods ; 2022 May 12.
Article in English | MEDLINE | ID: mdl-35549315

ABSTRACT

We introduce a new meta-analysis estimator, the weighted and iterated least squares (WILS), that greatly reduces publication selection bias (PSB) when selective reporting for statistical significance (SSS) is present. WILS is a simple weighted average that has smaller bias and lower rates of false positives than conventional meta-analysis estimators, the unrestricted weighted least squares (UWLS), and the weighted average of the adequately powered (WAAP) when there is SSS. As a simple weighted average, it is not vulnerable to the violations of publication bias correction models' assumptions that are too often seen in application. WILS is based on the novel idea of allowing excess statistical significance (ESS), which is a necessary condition of SSS, to identify when and how to reduce PSB. We show in comparisons with large-scale preregistered replications and in evidence-based simulations that the remaining bias is small. The routine application of WILS in the place of random effects would do much to reduce conventional meta-analysis's notable biases and high rates of false positives. (PsycInfo Database Record (c) 2023 APA, all rights reserved).

8.
Res Synth Methods ; 13(1): 88-108, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34628722

ABSTRACT

Recent, high-profile, large-scale, preregistered failures to replicate reveal that many highly regarded experiments are "false positives"; that is, statistically significant results of underlying null effects. Large surveys of research show that statistical power is often low and inadequate. When the research record includes selective reporting, publication bias and/or questionable research practices, conventional meta-analyses are also likely to be falsely positive. At the core of research credibility lies the relation of statistical power to the rate of false positives. This study finds that high (>50%-60%) median retrospective power (MRP) is associated with credible meta-analysis and large-scale, preregistered, multi-lab "successful" replications; that is, with replications that corroborate the effect in question. When median retrospective power is low (<50%), positive meta-analysis findings should be interpreted with great caution or discounted altogether.


Subject(s)
Retrospective Studies , Publication Bias
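Median retrospective power, the MRP statistic at the heart of entry 8, can be sketched as follows: take the meta-analytic mean as the "true" effect and compute each study's power for a two-sided 5% test. The effect size and standard errors below are hypothetical.

```python
# Sketch: median retrospective power (MRP) across a set of studies.
import numpy as np
from math import erf, sqrt

def Phi(x):                       # standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

mu = 0.3                                       # meta-analysis estimate (assumed)
se = np.array([0.10, 0.25, 0.40, 0.15])        # studies' standard errors

# power of a two-sided 5% z-test when the true standardized mean is mu/se:
power = np.array([1 - Phi(1.96 - mu / s) + Phi(-1.96 - mu / s) for s in se])
mrp = np.median(power)
credible = mrp > 0.5              # the paper's rule of thumb: MRP above ~50-60%
```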
9.
Res Synth Methods ; 12(6): 776-795, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34196473

ABSTRACT

We introduce and evaluate three tests for publication selection bias based on excess statistical significance (ESS). The proposed tests incorporate heterogeneity explicitly into the formulas for expected and excess statistical significance. We calculate the expected proportion of statistically significant findings in the absence of selective reporting or publication bias based on each study's SE and meta-analysis estimates of the mean and variance of the true-effect distribution. A simple proportion of statistical significance test (PSST) compares the expected to the observed proportion of statistically significant findings. Alternatively, we propose a direct test of excess statistical significance (TESS). We also combine these two tests of excess statistical significance (TESSPSST). Simulations show that these ESS tests often outperform the conventional Egger test for publication selection bias and the three-parameter selection model (3PSM).


Subject(s)
Models, Statistical , Bias , Publication Bias , Selection Bias
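The PSST logic of entry 9 can be sketched directly: given estimates of the mean and variance of the true-effect distribution, each study's probability of significance follows from est ~ N(mu, tau^2 + se^2), and the observed share of significant results is compared with the expected share via a proportion test. All inputs below are hypothetical, and the test statistic is a simple proportion z, not necessarily the paper's exact construction.

```python
# Sketch: expected vs. observed proportion of significant findings (PSST idea).
import numpy as np
from math import erf, sqrt

def Phi(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

mu, tau = 0.25, 0.10                        # estimated mean and sd of true effects
se  = np.array([0.08, 0.12, 0.20, 0.30])    # studies' standard errors
sig = np.array([1, 1, 1, 0])                # observed significance (hypothetical)

sd_tot = np.sqrt(tau**2 + se**2)            # sd of each observed estimate
# P(|est / se| > 1.96) when est ~ N(mu, tau^2 + se^2):
p_exp = np.array([1 - Phi((1.96 * s - mu) / st) + Phi((-1.96 * s - mu) / st)
                  for s, st in zip(se, sd_tot)])
expected = p_exp.mean()
observed = sig.mean()
# simple proportion z-statistic for excess significance:
z = (observed - expected) / np.sqrt(np.sum(p_exp * (1 - p_exp)) / len(se) ** 2)
```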
10.
Psychol Bull ; 144(12): 1325-1346, 2018 12.
Article in English | MEDLINE | ID: mdl-30321017

ABSTRACT

Can recent failures to replicate psychological research be explained by typical magnitudes of statistical power, bias or heterogeneity? A large survey of 12,065 estimated effect sizes from 200 meta-analyses and nearly 8,000 papers is used to assess these key dimensions of replicability. First, our survey finds that psychological research is, on average, afflicted with low statistical power. The median of median power across these 200 areas of research is about 36%, and only about 8% of studies have adequate power (using Cohen's 80% convention). Second, the median proportion of the observed variation among reported effect sizes attributed to heterogeneity is 74% (I2). Heterogeneity of this magnitude makes it unlikely that the typical psychological study can be closely replicated when replication is defined as study-level null hypothesis significance testing. Third, the good news is that we find only a small amount of average residual reporting bias, allaying some of the often-expressed concerns about the reach of publication bias and questionable research practices. Nonetheless, the low power and high heterogeneity that our survey finds fully explain recent difficulties in replicating highly regarded psychological studies and reveal challenges for scientific progress in psychology. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subject(s)
Behavioral Research/standards , Data Interpretation, Statistical , Meta-Analysis as Topic , Psychology/standards , Publication Bias , Reproducibility of Results , Research Design/standards , Behavioral Research/statistics & numerical data , Humans , Psychology/statistics & numerical data , Research Design/statistics & numerical data
11.
Soc Sci Med ; 179: 9-17, 2017 04.
Article in English | MEDLINE | ID: mdl-28237460

ABSTRACT

While numerous studies assess the impact of healthcare spending on health outcomes, typically reporting multiple estimates of the elasticity of health outcomes (most often measured by a mortality rate or life expectancy) with respect to healthcare spending, the extent to which study attributes influence these elasticity estimates is unclear. Accordingly, we utilize a meta-data set (consisting of 65 studies completed over the 1969-2014 period) to examine these elasticity estimates using meta-regression analysis (MRA). Correcting for a number of issues, including publication selection bias, healthcare spending is found to have a greater impact on the mortality rate than on life expectancy. Indeed, conditional on several features of the literature, the spending elasticity for mortality is near -0.13, whereas it is near +0.04 for life expectancy. MRA results reveal that the spending elasticity for the mortality rate is particularly sensitive to data aggregation, the specification of the health production function, and the nature of healthcare spending. The spending elasticity for life expectancy is particularly sensitive to the age at which life expectancy is measured, as well as the decision to control for the endogeneity of spending in the health production function. With such results in hand, we have a better understanding of how modeling choices influence results reported in this literature.


Subject(s)
Health Expenditures/statistics & numerical data , Life Expectancy , Mortality , Age Factors , Costs and Cost Analysis , Developed Countries , Humans , Outcome Assessment, Health Care , Prescription Drugs/economics , Private Sector/economics , Public Sector/economics , Regression Analysis , Residence Characteristics , Sex Factors , Socioeconomic Factors
12.
Stat Med ; 36(10): 1580-1598, 2017 05 10.
Article in English | MEDLINE | ID: mdl-28127782

ABSTRACT

The central purpose of this study is to document how a sharper focus upon statistical power may reduce the impact of selective reporting bias in meta-analyses. We introduce the weighted average of the adequately powered (WAAP) as an alternative to the conventional random-effects (RE) estimator. When the results of some of the studies have been selected to be positive and statistically significant (i.e. selective reporting), our simulations show that WAAP will have smaller bias than RE at no loss to its other statistical properties. When there is no selective reporting, the difference between RE's and WAAP's statistical properties is practically negligible. Nonetheless, when selective reporting is especially severe or heterogeneity is very large, notable bias can remain in all weighted averages. The main limitation of this approach is that the majority of meta-analyses of medical research do not contain any studies with adequate power (i.e. >80%). For such areas of medical research, it remains important to document their low power, and, as we demonstrate, an alternative unrestricted weighted least squares weighted average can be used instead of WAAP. Copyright © 2017 John Wiley & Sons, Ltd.


Subject(s)
Meta-Analysis as Topic , Publication Bias/statistics & numerical data , Biostatistics , Computer Simulation , Humans , Least-Squares Analysis , Models, Statistical , Odds Ratio , Regression Analysis
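The WAAP estimator of entry 12 can be sketched as follows: a study is "adequately powered" (>= 80%) when its standard error is at most the absolute UWLS estimate divided by 2.8 (since 1.96 + 0.84 ≈ 2.8), and WAAP is the fixed-effect average of just those studies. The data are hypothetical.

```python
# Sketch of WAAP: average only the studies adequately powered (>= 80%)
# to detect the UWLS estimate of the mean effect.
import numpy as np

y  = np.array([0.40, 0.35, 0.10, 0.60, 0.25])   # effect estimates (hypothetical)
se = np.array([0.05, 0.08, 0.04, 0.30, 0.20])   # standard errors

w = 1 / se**2
uwls = np.sum(w * y) / np.sum(w)        # point estimate equals the FE mean

powered = se <= abs(uwls) / 2.8         # 80% power: |mu| / se >= 1.96 + 0.84
if powered.any():
    waap = np.sum(w[powered] * y[powered]) / np.sum(w[powered])
else:
    waap = uwls                         # no adequately powered studies: fall back
```

The fallback branch reflects the limitation the abstract itself notes: when no study reaches 80% power, the unrestricted WLS average is used instead.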
13.
Res Synth Methods ; 8(1): 19-42, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27322495

ABSTRACT

Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how, and explain why, an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.


Subject(s)
Least-Squares Analysis , Models, Statistical , Regression Analysis , Algorithms , Computer Simulation , Humans , Markov Chains , Publication Bias , Publishing , Research Design , Sample Size
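The unrestricted WLS-MRA of entry 13 amounts to a weighted regression of effects on moderators with 1/se^2 weights, where the regression's own error variance is estimated rather than forced to one (forcing it to one is what defines FE-MRA). A minimal sketch with a hypothetical moderator:

```python
# Sketch: unrestricted WLS meta-regression (WLS-MRA) via plain linear algebra.
import numpy as np

y  = np.array([0.35, 0.20, 0.50, 0.15, 0.40, 0.28])   # effects (hypothetical)
se = np.array([0.10, 0.05, 0.20, 0.06, 0.15, 0.08])   # standard errors
x  = np.array([0, 0, 1, 0, 1, 1])                     # hypothetical moderator

X = np.column_stack([np.ones_like(y), x])
Wh = 1 / se                                  # square root of the 1/se^2 weights
b, *_ = np.linalg.lstsq(Wh[:, None] * X, Wh * y, rcond=None)

resid = Wh * (y - X @ b)
mse = resid @ resid / (len(y) - X.shape[1])  # 'unrestricted': variance estimated
cov = mse * np.linalg.inv((Wh[:, None] * X).T @ (Wh[:, None] * X))
se_b = np.sqrt(np.diag(cov))                 # coefficient standard errors
```

Setting `mse = 1` in the covariance line would reproduce the fixed-effects (FE-MRA) standard errors the paper contrasts against.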
14.
Stat Med ; 34(13): 2116-27, 2015 Jun 15.
Article in English | MEDLINE | ID: mdl-25809462

ABSTRACT

This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects.


Subject(s)
Bias , Least-Squares Analysis , Meta-Analysis as Topic , Publication Bias , Computer Simulation , Confidence Intervals , Humans , Markov Chains
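The unrestricted weighted least squares average of entry 14 has a compact form: regress t-values on precision with no intercept. The slope is algebraically identical to the fixed-effect point estimate, but its standard error absorbs any excess heterogeneity through the regression's mean squared error. Toy data:

```python
# Sketch: the unrestricted WLS (UWLS) weighted average.
import numpy as np

y  = np.array([0.42, 0.18, 0.55, 0.30])     # effect estimates (hypothetical)
se = np.array([0.12, 0.08, 0.25, 0.10])     # standard errors

t, prec = y / se, 1 / se
slope = np.sum(prec * t) / np.sum(prec**2)  # identical to the FE point estimate

resid = t - slope * prec
mse = resid @ resid / (len(y) - 1)
se_uwls = np.sqrt(mse / np.sum(prec**2))    # = FE standard error * sqrt(MSE)

fe = np.sum(y / se**2) / np.sum(1 / se**2)  # conventional fixed-effect mean
assert np.isclose(slope, fe)                # same point estimate, different SE
```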
15.
Res Synth Methods ; 5(1): 60-78, 2014 Mar.
Article in English | MEDLINE | ID: mdl-26054026

ABSTRACT

Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy.


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Data Interpretation, Statistical , Meta-Analysis as Topic , Models, Statistical , Publication Bias/statistics & numerical data , Regression Analysis , Clinical Trials as Topic/classification , Computer Simulation , Evidence-Based Medicine , Predictive Value of Tests
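The PEESE approximation of entry 15, and the hybrid it describes, can be sketched as weighted regressions of effects on se^2 (PEESE) or on se (the Egger/PET regression), with the intercept taken as the selection-corrected estimate. The data and the significance cutoff used for the hybrid switch are hypothetical; the paper's conditional rule should be consulted for the exact criterion.

```python
# Sketch: PEESE and a conditional PET-PEESE style hybrid.
import numpy as np

def wls_intercept(y, se, predictor):
    """Weighted (1/se^2) regression of y on predictor; returns intercept, t."""
    X = np.column_stack([np.ones_like(y), predictor])
    Wh = 1 / se
    b, *_ = np.linalg.lstsq(Wh[:, None] * X, Wh * y, rcond=None)
    resid = Wh * (y - X @ b)
    mse = resid @ resid / (len(y) - 2)
    cov = mse * np.linalg.inv((Wh[:, None] * X).T @ (Wh[:, None] * X))
    return b[0], b[0] / np.sqrt(cov[0, 0])

y  = np.array([0.50, 0.38, 0.24, 0.33, 0.16, 0.11])   # effects (hypothetical)
se = np.array([0.25, 0.18, 0.10, 0.15, 0.06, 0.04])   # larger se, larger effect

pet_b0, pet_t = wls_intercept(y, se, se)      # PET / Egger regression on se
peese_b0, _   = wls_intercept(y, se, se**2)   # PEESE: quadratic, no linear term
# hybrid (one common rule): use PEESE only when PET rejects a zero effect
estimate = peese_b0 if abs(pet_t) > 1.96 else pet_b0
```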
16.
J Health Econ ; 33: 67-75, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24300998

ABSTRACT

Estimates of the value of a statistical life (VSL) establish the price government agencies use to value fatality risks. Transferring these valuations to other populations often utilizes the income elasticity of the VSL, which typically draws on estimates from meta-analyses. Using a data set of 101 estimates of the income elasticity of the VSL from 14 previously reported meta-analyses, we find that, after accounting for potential publication bias, the income elasticity of the value of a statistical life is clearly and robustly inelastic, with a value of approximately 0.25-0.63. There is also clear evidence of the importance of controlling for levels of risk, differential publication selection bias, and the greater income sensitivity of VSL from stated preference surveys.


Subject(s)
Income/statistics & numerical data , Publication Bias , Value of Life/economics , Humans , Meta-Analysis as Topic , Models, Statistical , Publication Bias/statistics & numerical data , Regression Analysis