ABSTRACT

WHAT IS KNOWN AND OBJECTIVE: The importance of statistical power is widely recognized from a pre-trial perspective and when interpreting results that are not statistically significant. It is less well recognized that poor power can lead to inflated estimates of effect size when statistically significant results are observed. We use trial simulations to quantify this bias, which we term 'significant-result bias'.

COMMENT: Significant-result bias is explained, and simulations are used to estimate the possible significant-result bias in the rate of thrombotic events observed in the APPROVe trial. Statistically significant results on outcomes for which there is empirical evidence of poor power may provide inflated estimates of the size of effect.

WHAT IS NEW AND CONCLUSION: If independent evidence is available to judge the likely effect size of an underpowered statistical test, trial simulations can provide a method for quantifying significant-result bias.
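The mechanism behind significant-result bias can be illustrated with a minimal simulation (an illustrative sketch only, not the APPROVe analysis or the authors' code; the true effect size, sample size, and two-arm z-test design are all assumptions chosen to give low power): many underpowered trials are simulated, and the average effect estimate among trials reaching p < 0.05 is compared with the true effect.

```python
import math
import random
import statistics

# Hypothetical setup: two-arm trials with a modest true standardized
# mean difference and small arms, giving power well below 50%.
random.seed(1)
TRUE_EFFECT = 0.3    # assumed true mean difference (unit-variance outcome)
N_PER_ARM = 20       # small arms -> underpowered trial
N_TRIALS = 20000
Z_CRIT = 1.96        # two-sided 5% critical value for the z-test

all_estimates = []
significant_estimates = []

for _ in range(N_TRIALS):
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(2.0 / N_PER_ARM)  # known unit variance -> z-test
    all_estimates.append(diff)
    if abs(diff / se) > Z_CRIT:      # keep only "significant" trials
        significant_estimates.append(diff)

print(f"true effect:                {TRUE_EFFECT:.2f}")
print(f"mean estimate, all trials:  {statistics.mean(all_estimates):.2f}")
print(f"mean estimate, significant: {statistics.mean(significant_estimates):.2f}")
```

Averaged over all simulated trials the estimate is unbiased, but averaged over only the significant trials it is substantially larger than the true effect, because with low power an estimate must overshoot the truth to cross the significance threshold.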
WHAT IS KNOWN AND OBJECTIVE: The importance of statistical power is widely recognized from a pre-trial perspective, and when interpreting results that are not statistically significant. It is less well recognized that poor power can lead to inflated estimates of the effect size when statistically significant results are observed. We use trial simulations to quantify this bias, which we term 'significant-result bias'. COMMENT: Significant-result bias is explained, and simulations are used to estimate possible significant-result bias in the rate of thrombotic events observed in the APPROVe trial. Statistically significant results, on outcomes for which there is empirical evidence of poor power, may provide inflated estimates of the size of effect. WHAT IS NEW AND CONCLUSION: If independent evidence is available to judge the likely effect size of an underpowered statistical test, trial simulations can provide a method for quantifying significant-result bias.