Results 1 - 4 of 4
1.
Sci Rep; 14(1): 12120, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802451

ABSTRACT

Much of the scientific literature in the social and behavioural sciences bases its conclusions on one or more hypothesis tests. It is therefore important to learn more about how researchers in these fields interpret the quantities produced by hypothesis test metrics, such as p-values and Bayes factors. In the present study, we explored the relationship between obtained statistical evidence and the degree of belief or confidence that there is a positive effect in the population of interest. In particular, we were interested in the existence of a so-called cliff effect: a qualitative drop in the degree of belief that there is a positive effect around certain threshold values of statistical evidence (e.g., at p = 0.05). We compared this relationship for p-values with the relationship for corresponding degrees of evidence quantified through Bayes factors, and we examined whether it was affected by two different modes of presentation (in one mode the functional form of the relationship across values was implicit to the participant, whereas in the other mode it was explicit). We found evidence for a higher proportion of cliff effects in p-value conditions than in Bayes factor conditions (N = 139), but no clear indication of whether presentation mode affected the proportion of cliff effects. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 2 June 2023. The protocol, as accepted by the journal, can be found at: https://doi.org/10.17605/OSF.IO/5CW6P.
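
The contrast between the two evidence metrics in this abstract can be made concrete with a small sketch. The snippet below computes a p-value and an approximate Bayes factor for the same one-sample t-test; the data are simulated, and the BIC approximation (Wagenmakers, 2007) stands in for whatever Bayes factor the study itself presented to participants, so this is illustrative only.

```python
# Minimal sketch: a p-value and an approximate BF10 for the same test,
# on simulated data (not the study's materials).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=40)  # simulated effect, d = 0.3
n = len(x)

# Frequentist evidence: p-value of the one-sample t-test against mu = 0.
t, p = stats.ttest_1samp(x, popmean=0.0)

# Approximate Bayesian evidence via BIC: H0 (mu fixed at 0) vs H1 (mu free).
sse0 = np.sum(x ** 2)                         # residual sum of squares under H0
sse1 = np.sum((x - x.mean()) ** 2)            # residual sum of squares under H1
bic0 = n * np.log(sse0 / n) + 1 * np.log(n)   # H0 estimates sigma only
bic1 = n * np.log(sse1 / n) + 2 * np.log(n)   # H1 estimates mu and sigma
bf10 = np.exp((bic0 - bic1) / 2)              # evidence for H1 over H0

print(f"t({n - 1}) = {t:.2f}, p = {p:.3f}, approximate BF10 = {bf10:.2f}")
```

A cliff effect would show up as a discontinuous jump in a reader's confidence as p crosses 0.05, even though both quantities above vary smoothly with the data.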

2.
PLoS One; 18(10): e0292279, 2023.
Article in English | MEDLINE | ID: mdl-37788282

ABSTRACT

BACKGROUND: Publishing study results in scientific journals has been the standard way of disseminating science. However, whether results get published may depend on their statistical significance, with the consequence that the published representation of scientific knowledge may be biased. This type of bias is called publication bias. The main objective of the present study is to gain more insight into publication bias by examining it at the author, reviewer, and editor level, and to make a direct comparison between publication bias induced by authors, by reviewers, and by editors. We approached our participants by e-mail, asking them to fill out an online survey. RESULTS: Our findings suggest that statistically significant findings are more likely to be published than statistically non-significant findings, because (1) authors (n = 65) are more likely to write up and submit articles with significant results than articles with non-significant results (median effect size 1.10, BF10 = 1.09 × 10^7); (2) reviewers (n = 60) give more favourable reviews to articles with significant results than to articles with non-significant results (median effect size 0.58, BF10 = 4.73 × 10^2); and (3) editors (n = 171) are more likely to accept articles with significant results for publication than articles with non-significant results (median effect size 0.94, BF10 = 7.63 × 10^7). Evidence on the relative contributions of authors, reviewers, and editors to publication bias is ambiguous (editors vs reviewers: BF10 = 0.31; reviewers vs authors: BF10 = 3.11; editors vs authors: BF10 = 0.42). DISCUSSION: One main limitation is that we studied the potential for publication bias rather than publication bias directly. Another limitation is the low response rate to the survey.
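
The mechanism this study probes can be illustrated with a short simulation; the sketch below is not based on the survey data, and all numbers (true effect, sample sizes, study count) are invented for illustration. It shows why selective publication of significant results biases the literature: the "published" effects systematically overestimate the true effect.

```python
# Minimal sketch (simulated, hypothetical parameters): selective publication
# of significant results inflates the apparent effect size in the literature.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_d, n_per_group, n_studies = 0.2, 20, 5000

all_effects, published = [], []
for _ in range(n_studies):
    a = rng.normal(true_d, 1.0, n_per_group)   # treatment group
    b = rng.normal(0.0, 1.0, n_per_group)      # control group
    t, p = stats.ttest_ind(a, b)
    d = a.mean() - b.mean()                    # observed effect (sd = 1)
    all_effects.append(d)
    if p < 0.05:                               # only significant studies "publish"
        published.append(d)

print(f"true effect:                   {true_d}")
print(f"mean effect, all studies:      {np.mean(all_effects):.2f}")
print(f"mean effect, 'published' only: {np.mean(published):.2f}")  # inflated
```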


Subjects
Authorship, Writing, Humans, Publication Bias, Surveys and Questionnaires, Electronic Mail
3.
PLoS One; 16(7): e0255093, 2021.
Article in English | MEDLINE | ID: mdl-34297766

ABSTRACT

BACKGROUND: Following testing in clinical trials, remdesivir has been authorized for the treatment of COVID-19 in parts of the world, including the USA and Europe. Early authorizations were largely based on results from two clinical trials. A third study, published by Wang et al., was underpowered and deemed inconclusive. Although regulators have shown an interest in interpreting the Wang et al. study, under a frequentist framework it is difficult to determine whether the non-significant finding was caused by a lack of power or by the absence of an effect. Bayesian hypothesis testing, by contrast, allows evidence in favor of the absence of an effect to be quantified. FINDINGS: Our Bayesian reanalysis of the three trials shows ambiguous evidence for the primary outcome of clinical improvement and moderate evidence against the secondary outcome of a decreased mortality rate. Additional analyses of three studies published after initial marketing approval support these findings. CONCLUSIONS: We recommend that regulatory bodies take all available evidence into account in endorsement decisions. A Bayesian approach can be beneficial, particularly in the case of statistically non-significant results, and especially when limited clinical efficacy data are available.
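
The key point, that Bayesian hypothesis testing can quantify evidence for the absence of an effect, can be sketched with a simple model. The snippet below is not the authors' reanalysis: it uses a basic two-proportion setup with Beta(1, 1) priors and invented mortality counts, whereas the paper's analysis of the trial outcomes will differ in model and priors. A BF01 above 1 favours "no difference in rates", which a non-significant p-value alone cannot establish.

```python
# Minimal sketch (illustrative counts, not trial data): an exact Bayes factor
# comparing a common event rate (H0) against two independent rates (H1),
# each with a Beta(1, 1) prior. Binomial coefficients cancel in the ratio.
import numpy as np
from scipy.special import betaln

def bf01_two_proportions(k1, n1, k2, n2):
    """BF01: evidence for a common rate over independent rates."""
    log_m0 = betaln(k1 + k2 + 1, (n1 - k1) + (n2 - k2) + 1)  # pooled theta
    log_m1 = betaln(k1 + 1, n1 - k1 + 1) + betaln(k2 + 1, n2 - k2 + 1)
    return np.exp(log_m0 - log_m1)

# Hypothetical counts: 15/158 deaths on treatment vs 10/78 on control.
print(f"BF01 = {bf01_two_proportions(15, 158, 10, 78):.2f}")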


Subjects
Adenosine Monophosphate/analogs & derivatives, Alanine/analogs & derivatives, COVID-19 Drug Treatment, COVID-19/epidemiology, SARS-CoV-2, Adenosine Monophosphate/administration & dosage, Alanine/administration & dosage, Clinical Trials as Topic, Europe/epidemiology, Humans, Treatment Outcome, United States/epidemiology
4.
R Soc Open Sci; 8(5): 201697, 2021 May 19.
Article in English | MEDLINE | ID: mdl-34017596

ABSTRACT

To overcome the frequently debated crisis of confidence, replicating studies is becoming increasingly common. Multiple frequentist and Bayesian measures have been proposed to evaluate whether a replication is successful, but little is known about which method best captures replication success. This study is one of the first attempts to compare a number of quantitative measures of replication success with respect to their ability to draw the correct inference when the underlying truth is known, while taking publication bias into account. Our results show that Bayesian metrics seem to slightly outperform frequentist metrics across the board. Generally, meta-analytic approaches seem to slightly outperform metrics that evaluate single studies, except in the scenario of extreme publication bias, where this pattern reverses.
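
Two of the simpler replication-success criteria from this literature can be sketched directly; the snippet below uses simulated data and picks two common frequentist criteria (the significance of the replication alone, and a fixed-effect meta-analysis of original plus replication) as stand-ins for the broader set of measures the paper compares, which also includes Bayesian metrics.

```python
# Minimal sketch (simulated data): two replication-success criteria,
# a single-study significance check and a fixed-effect meta-analytic z-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 50
original    = rng.normal(0.4, 1.0, n)  # simulated original study, d = 0.4
replication = rng.normal(0.4, 1.0, n)  # simulated direct replication

# Criterion 1: the replication is itself significant in the same direction.
t_rep, p_rep = stats.ttest_1samp(replication, 0.0)
success_single = (p_rep < 0.05) and (t_rep > 0)

# Criterion 2: fixed-effect meta-analysis of the two standardized effects.
d = np.array([original.mean(), replication.mean()])
se = np.array([original.std(ddof=1), replication.std(ddof=1)]) / np.sqrt(n)
w = 1 / se ** 2                               # inverse-variance weights
d_meta = np.sum(w * d) / np.sum(w)            # pooled effect estimate
z_meta = d_meta / np.sqrt(1 / np.sum(w))
p_meta = 2 * stats.norm.sf(abs(z_meta))

print(f"single-study criterion met: {success_single} (p = {p_rep:.3f})")
print(f"meta-analytic estimate d = {d_meta:.2f}, p = {p_meta:.4f}")
```

The meta-analytic criterion pools evidence and so is more sensitive, but, as the abstract notes, pooling also lets publication bias in the original literature leak into the verdict.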
