Results 1 - 12 of 12
1.
Psychol Methods ; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38330340

ABSTRACT

A fundamental part of experimental design is to determine the sample size of a study. However, sparse information about population parameters and effect sizes before data collection renders effective sample size planning challenging. Specifically, sparse information may lead research designs to be based on inaccurate a priori assumptions, causing studies to use resources inefficiently or to produce inconclusive results. Despite its deleterious impact on sample size planning, many prominent methods for experimental design fail to adequately address the challenge of sparse a priori information. Here we propose a Bayesian Monte Carlo methodology for interim design analyses that allows researchers to analyze and adapt their sampling plans throughout the course of a study. At any point in time, the methodology uses the best available knowledge about parameters to make projections about expected evidence trajectories. Two simulated application examples demonstrate how interim design analyses can be integrated into common designs to inform sampling plans on the fly. The proposed methodology addresses the problem of sample size planning with sparse a priori information and yields research designs that are efficient, informative, and flexible. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
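The core projection step can be sketched in a few lines of R. The setup below is illustrative, not the authors' implementation: a one-sample test with known variance, H0: delta = 0 versus H1: delta ~ N(0, 1), where the posterior under H1 at an interim look supplies the best available knowledge and simulated continuations of the study yield projected Bayes factors.

```r
## Illustrative interim design analysis (hypothetical setup, sigma = 1 known).
bf10 <- function(xbar, n) {
  # Analytic Bayes factor: marginal density of the sample mean under H1 vs H0.
  dnorm(xbar, 0, sqrt(1 + 1 / n)) / dnorm(xbar, 0, sqrt(1 / n))
}

project_bf <- function(x, n_extra, n_sims = 5000) {
  n <- length(x); xbar <- mean(x)
  # Conjugate posterior of delta under H1, given the interim data.
  post_mean <- xbar * n / (n + 1); post_sd <- sqrt(1 / (n + 1))
  replicate(n_sims, {
    delta <- rnorm(1, post_mean, post_sd)    # draw from current knowledge
    x_new <- c(x, rnorm(n_extra, delta, 1))  # simulate study continuation
    bf10(mean(x_new), n + n_extra)
  })
}

set.seed(1)
x_interim <- rnorm(40, mean = 0.3)           # data available at the interim look
proj <- project_bf(x_interim, n_extra = 60)
mean(proj > 10)  # projected probability of strong evidence for H1 at n = 100
```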

2.
Psychon Bull Rev ; 31(1): 242-248, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37542014

ABSTRACT

Huisman (Psychonomic Bulletin & Review, 1-10, 2022) argued that a valid measure of evidence should indicate more support in favor of a true alternative hypothesis when sample size is large than when it is small. Bayes factors may violate this pattern and hence Huisman concluded that Bayes factors are invalid as a measure of evidence. In this brief comment we call attention to the following: (1) Huisman's purported anomaly is in fact dictated by probability theory; (2) Huisman's anomaly has been discussed and explained in the statistical literature since 1939; the anomaly was also highlighted in the Psychonomic Bulletin & Review article by Rouder et al. (2009), who interpreted the anomaly as "ideal": an interpretation diametrically opposed to that of Huisman. We conclude that when intuition clashes with probability theory, chances are that it is intuition that needs schooling.
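The behavior dictated by probability theory is easy to verify numerically. The sketch below uses an illustrative setup, not material from the article: a one-sample test with known variance and a standard normal prior on the effect. Holding a just-significant result fixed at z = 1.96, the Bayes factor moves toward H0 as n grows, the pattern Rouder et al. (2009) interpreted as ideal.

```r
## Hedged illustration: H0: delta = 0 vs H1: delta ~ N(0, 1), sigma = 1 known.
bf10 <- function(xbar, n) dnorm(xbar, 0, sqrt(1 + 1/n)) / dnorm(xbar, 0, sqrt(1/n))

n <- c(10, 100, 1000, 10000)
xbar <- 1.96 / sqrt(n)   # sample means that all yield z = 1.96 (p ~ .05)
round(bf10(xbar, n), 3)  # BF10 shrinks toward 0 as n grows
```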


Subjects
Bayes Theorem, Humans, Probability, Sample Size
3.
Behav Res Methods ; 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37749423

ABSTRACT

With the recent development of easy-to-use tools for Bayesian analysis, psychologists have started to embrace Bayesian hierarchical modeling. Bayesian hierarchical models provide an intuitive account of inter- and intraindividual variability and are particularly suited for the evaluation of repeated-measures designs. Here, we provide guidance for model specification and interpretation in Bayesian hierarchical modeling and describe common pitfalls that can arise in the process of model fitting and evaluation. Our introduction gives particular emphasis to prior specification and prior sensitivity, as well as to the calculation of Bayes factors for model comparisons. We illustrate the use of the state-of-the-art software packages Stan and brms. The result is an overview of best practices that we hope will help psychologists make the most of Bayesian hierarchical modeling.
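As a flavor of that workflow, here is a minimal sketch of a hierarchical brms model with explicit priors and a bridge-sampling Bayes factor. The data frame `dat` (columns rt, condition, subject) and all prior choices are hypothetical stand-ins, not the tutorial's materials.

```r
library(brms)

## Illustrative priors; the tutorial's central advice is to choose these
## deliberately and to check how sensitive the results are to them.
priors <- c(
  set_prior("normal(0, 0.5)", class = "b"),      # fixed effect of condition
  set_prior("student_t(3, 0, 1)", class = "sd")  # random-effect SDs
)

fit_full <- brm(
  rt ~ condition + (1 + condition | subject),    # by-subject intercepts and slopes
  data = dat, family = gaussian(), prior = priors,
  save_pars = save_pars(all = TRUE),             # required for bridge sampling
  iter = 10000, warmup = 2000, chains = 4
)

## Null model without the fixed effect, keeping the random slope.
fit_null <- update(fit_full, formula. = ~ . - condition)

## Bayes factor for the condition effect via bridge sampling.
bayes_factor(fit_full, fit_null)
```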

4.
Comput Brain Behav ; 6(1): 127-139, 2023.
Article in English | MEDLINE | ID: mdl-36879767

ABSTRACT

In van Doorn et al. (2021), we outlined a series of open questions concerning Bayes factors for mixed-effects model comparison, with an emphasis on the impact of aggregation, the effect of measurement error, the choice of prior distributions, and the detection of interactions. Seven expert commentaries (partially) addressed these initial questions. Surprisingly perhaps, the experts disagreed, often strongly, on what constitutes best practice, a testament to the intricacy of conducting a mixed-effects model comparison. Here, we provide our perspective on these comments and highlight topics that warrant further discussion. In general, we agree with many of the commentaries that in order to take full advantage of Bayesian mixed-effects model comparison, it is important to be aware of the specific assumptions that underlie the to-be-compared models.

5.
R Soc Open Sci ; 10(2): 220346, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36778954

ABSTRACT

In many research fields, the widespread use of questionable research practices has jeopardized the credibility of scientific results. One of the most prominent questionable research practices is p-hacking. Typically, p-hacking is defined as a compound of strategies targeted at rendering non-significant hypothesis testing results significant. However, a comprehensive overview of these p-hacking strategies is missing, and current meta-scientific research often ignores the heterogeneity of strategies. Here, we compile a list of 12 p-hacking strategies based on an extensive literature review, identify factors that control their level of severity, and demonstrate their impact on false-positive rates using simulation studies. We also use our simulation results to evaluate several approaches that have been proposed to mitigate the influence of questionable research practices. Our results show that investigating p-hacking at the level of strategies can provide a better understanding of the process of p-hacking, as well as a broader basis for developing effective countermeasures. By making our analyses available through a Shiny app and R package, we facilitate future meta-scientific research aimed at investigating the ramifications of p-hacking across multiple strategies, and we hope to start a broader discussion about different manifestations of p-hacking in practice.
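One strategy from this family, selective reporting among several dependent variables, can be simulated in a few lines. The sketch below uses illustrative numbers and is not the paper's simulation code or its Shiny app.

```r
## False-positive rate when only the smallest of several p-values is reported.
set.seed(123)
fpr <- function(n_dv, n = 30, reps = 10000) {
  mean(replicate(reps, {
    # All null effects: both groups drawn from the same distribution.
    p <- replicate(n_dv, t.test(rnorm(n), rnorm(n))$p.value)
    min(p) < .05  # "significant" if any DV works out
  }))
}
fpr(1)   # nominal rate, about .05
fpr(3)   # about .14 with three independent DVs
fpr(10)  # about .40
```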

6.
J Sleep Res ; 32(1): e13641, 2023 02.
Article in English | MEDLINE | ID: mdl-35623381

ABSTRACT

Symptoms of insomnia are an important risk factor for the development of mental disorders, especially during stressful life periods such as the coronavirus disease 2019 (COVID-19) pandemic. However, up to now, most studies have used cross-sectional data, and the prolonged impact of insomnia symptoms during the pandemic on later mental health remains unclear. Therefore, we investigated insomnia symptoms as a predictor of other aspects of mental health across 6 months, with altogether seven assessments (every 30 days, t0-t6), in a community sample (N = 166-267). Results showed no mean-level increase in insomnia symptoms or deterioration of mental health between the baseline assessment (t0) and the 6-month follow-up (t6). As preregistered, higher insomnia symptoms (between persons) across all time points predicted reduced mental health at the 6-month follow-up. Interestingly, and contrary to our hypothesis, higher insomnia symptoms at 1 month within each person (i.e., compared to that person's symptoms at other time points) predicted improved rather than reduced aspects of mental health 1 month later. Hence, we replicated the predictive effect of elevated average insomnia symptoms on impaired later mental health during the COVID-19 pandemic, whereas the within-person effect ran in the opposite direction. This unexpected effect might be specific to our study population and a consequence of our study design. Overall, increased insomnia symptoms may have served as a signal to engage in, and successfully implement, targeted countermeasures, which led to better short-term mental health in this healthy sample.


Subjects
COVID-19, Sleep Initiation and Maintenance Disorders, Humans, COVID-19/epidemiology, Mental Health, Pandemics, Longitudinal Studies, Sleep Initiation and Maintenance Disorders/epidemiology, Cross-Sectional Studies, Depression/epidemiology, Anxiety/epidemiology
7.
Psychon Bull Rev ; 29(5): 1776-1794, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35378671

ABSTRACT

Bayesian inference requires the specification of prior distributions that quantify the pre-data uncertainty about parameter values. One way to specify prior distributions is through prior elicitation, an interview method guiding field experts through the process of expressing their knowledge in the form of a probability distribution. However, prior distributions elicited from experts can be subject to idiosyncrasies of experts and elicitation procedures, raising the spectre of subjectivity and prejudice. Here, we investigate the effect of interpersonal variation in elicited prior distributions on the Bayes factor hypothesis test. We elicited prior distributions from six academic experts with a background in different fields of psychology and applied the elicited prior distributions, as well as commonly used default priors, in a re-analysis of 1710 studies in psychology. The degree to which the Bayes factors vary as a function of the different prior distributions is quantified by three measures of concordance of evidence: whether the prior distributions change the direction of the Bayes factor, whether they cause a switch in the category of evidence strength, and how much they influence the value of the Bayes factor. Our results show that although the Bayes factor is sensitive to changes in the prior distribution, these changes do not necessarily affect the qualitative conclusions of a hypothesis test. We hope that these results help researchers gauge the influence of interpersonal variation in elicited prior distributions in future psychological studies. Additionally, our sensitivity analyses can be used as a template for Bayesian robustness analyses that involve prior elicitation from multiple experts.
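A minimal version of such a sensitivity analysis can be sketched as follows, assuming a simple known-variance test and four hypothetical normal priors standing in for default and elicited distributions; the three concordance measures are the ones named above.

```r
## Same data, different priors on the effect size delta (sigma = 1 known).
bf10 <- function(xbar, n, prior_mean, prior_sd) {
  dnorm(xbar, prior_mean, sqrt(prior_sd^2 + 1/n)) / dnorm(xbar, 0, sqrt(1/n))
}

xbar <- 0.25; n <- 80
priors <- data.frame(mean = c(0, 0, 0.2, 0.5), sd = c(1, 0.35, 0.3, 0.4))
priors$bf10 <- with(priors, bf10(xbar, n, mean, sd))
priors$direction <- ifelse(priors$bf10 > 1, "H1", "H0")
priors$category <- cut(priors$bf10, c(0, 1/3, 1, 3, 10, Inf),
                       labels = c("moderate H0", "weak H0",
                                  "weak H1", "moderate H1", "strong H1"))
priors  # the BF value varies, but direction (and often category) may agree
```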


Subjects
Research Design, Bayes Theorem, Humans, Probability, Uncertainty
8.
Behav Res Methods ; 54(6): 3100-3117, 2022 12.
Article in English | MEDLINE | ID: mdl-35233752

ABSTRACT

In a sequential hypothesis test, the analyst checks at multiple steps during data collection whether sufficient evidence has accrued to make a decision about the tested hypotheses. As soon as sufficient information has been obtained, data collection is terminated. Here, we compare two sequential hypothesis testing procedures that have recently been proposed for use in psychological research: the Sequential Probability Ratio Test (SPRT; Psychological Methods, 25(2), 206-226, 2020) and the Sequential Bayes Factor Test (SBFT; Psychological Methods, 22(2), 322-339, 2017). We show that although the two methods have different philosophical roots, they share many similarities and can even be regarded mathematically as two instances of an overarching hypothesis testing framework. We demonstrate that the two methods use the same mechanisms for evidence monitoring and error control, and that differences in efficiency between the methods depend on the exact specification of the statistical models involved, as well as on the population truth. Our simulations indicate that when deciding on a sequential design within a unified sequential testing framework, researchers need to balance the needs of test efficiency, robustness against model misspecification, and appropriate uncertainty quantification. We provide guidance for navigating these design decisions based on individual preferences and simulation-based design analyses.
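The shared mechanism can be made concrete with a small sketch, assuming normal data with known variance and illustrative thresholds: both procedures monitor a running evidence ratio and stop once it crosses a boundary, differing only in whether H1 is a point value (SPRT) or a prior distribution (SBFT).

```r
## One simulated sequential test; all hypothesis values and thresholds are
## illustrative choices, not the paper's simulation settings.
sequential_test <- function(delta_true, upper = 10, lower = 1/10,
                            n_max = 500, method = c("sprt", "sbft")) {
  method <- match.arg(method)
  x <- numeric(0)
  for (n in seq_len(n_max)) {
    x <- c(x, rnorm(1, delta_true, 1))
    ev <- if (method == "sprt") {
      # Likelihood ratio for point H1: delta = 0.5 vs H0: delta = 0.
      exp(sum(dnorm(x, 0.5, 1, log = TRUE)) - sum(dnorm(x, 0, 1, log = TRUE)))
    } else {
      # Bayes factor for composite H1: delta ~ N(0, 1) vs H0: delta = 0.
      dnorm(mean(x), 0, sqrt(1 + 1/n)) / dnorm(mean(x), 0, sqrt(1/n))
    }
    if (ev >= upper) return(c(n = n, decision = 1))  # stop, decide for H1
    if (ev <= lower) return(c(n = n, decision = 0))  # stop, decide for H0
  }
  c(n = n_max, decision = NA)                        # undecided at n_max
}

set.seed(42)
res <- replicate(1000, sequential_test(delta_true = 0.5, method = "sprt"))
rowMeans(res, na.rm = TRUE)  # average stopping n and rate of H1 decisions
```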


Subjects
Research Design, Humans, Bayes Theorem
9.
Psychol Methods ; 27(2): 177-197, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32940511

ABSTRACT

The Bayesian statistical framework requires the specification of prior distributions, which reflect pre-data knowledge about the relative plausibility of different parameter values. As prior distributions influence the results of Bayesian analyses, it is important to specify them with care. Prior elicitation has frequently been proposed as a principled method for deriving prior distributions based on expert knowledge. Although prior elicitation provides a theoretically satisfactory method of specifying prior distributions, there are several implicit decisions that researchers need to make at different stages of the elicitation process, each of them constituting important researcher degrees of freedom. Here, we discuss some of these decisions and group them into three categories: decisions about (a) the setup of the prior elicitation; (b) the core elicitation process; and (c) the combination of elicited prior distributions from different experts. Importantly, different decision paths could result in greatly varying priors elicited from the same experts. Hence, researchers who wish to perform prior elicitation are advised to carefully consider each of the practical decisions before, during, and after the elicitation process. By explicitly outlining the consequences of these practical decisions, we hope to raise awareness of methodological flexibility in prior elicitation and provide researchers with a more structured approach to navigating the decision paths in prior elicitation. Making the decisions explicit also provides the foundation for further research that can identify evidence-based best practices that may eventually reduce the methodological flexibility in prior elicitation. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
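To make the combination stage concrete, the sketch below pools three hypothetical experts' elicited normal priors with an equal-weight linear opinion pool; this is only one of several defensible combination rules, which is precisely the researcher degree of freedom at issue.

```r
## Hypothetical elicited priors from three experts (mean and SD of delta).
experts <- data.frame(mean = c(0.2, 0.5, 0.1), sd = c(0.3, 0.5, 0.2))

## Linear opinion pool: an equal-weight mixture of the experts' densities.
pooled_density <- function(x, e = experts, w = rep(1/nrow(e), nrow(e))) {
  rowSums(mapply(function(m, s, wi) wi * dnorm(x, m, s), e$mean, e$sd, w))
}

x <- seq(-1, 1.5, length.out = 200)
plot(x, pooled_density(x), type = "l", ylab = "density",
     main = "Linear opinion pool of three elicited priors")
## A different decision path (e.g., logarithmic pooling, or fitting one
## parametric prior to the mixture) would yield a different combined prior.
```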


Subjects
Research Design, Bayes Theorem, Humans
10.
Psychon Bull Rev ; 28(3): 813-826, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33037582

ABSTRACT

Despite the increasing popularity of Bayesian inference in empirical research, few practical guidelines provide detailed recommendations for how to apply Bayesian procedures and interpret the results. Here we offer specific guidelines for four different stages of Bayesian statistical reasoning in a research setting: planning the analysis, executing the analysis, interpreting the results, and reporting the results. The guidelines for each stage are illustrated with a running example. Although the guidelines are geared towards analyses performed with the open-source statistical software JASP, most guidelines extend to Bayesian inference in general.


Subjects
Statistical Data Interpretation, Guidelines as Topic, Statistical Models, Research Design, Bayes Theorem, Humans
11.
Br J Math Stat Psychol ; 73 Suppl 1: 180-193, 2020 11.
Article in English | MEDLINE | ID: mdl-31691267

ABSTRACT

Longitudinal studies are the gold standard for research on time-dependent phenomena in the social sciences. However, they often entail high costs due to multiple measurement occasions and a long overall study duration. It is therefore useful to optimize these design factors while maintaining a high informativeness of the design. Von Oertzen and Brandmaier (2013, Psychology and Aging, 28, 414) applied power equivalence to show that latent growth curve models (LGCMs) with different design factors can have the same power for likelihood-ratio tests on the latent structure. In this paper, we show that the notion of power equivalence can be extended to Bayesian hypothesis tests of the latent structure constants. Specifically, we show that the results of a Bayes factor design analysis (BFDA; Schönbrodt & Wagenmakers, 2018, Psychonomic Bulletin and Review, 25, 128) of two power-equivalent LGCMs are equivalent. This will be useful for researchers who aim to plan for compelling evidence instead of frequentist power, and it contributes towards more efficient procedures for BFDA.


Subjects
Bayes Theorem, Statistical Models, Computer Simulation, Factor Analysis, Humans, Likelihood Functions, Linear Models, Longitudinal Studies, Mindfulness/methods, Mindfulness/statistics & numerical data
12.
Behav Res Methods ; 51(3): 1042-1058, 2019 06.
Article in English | MEDLINE | ID: mdl-30719688

ABSTRACT

Well-designed experiments are likely to yield compelling evidence with efficient sample sizes. Bayes Factor Design Analysis (BFDA) is a recently developed methodology that allows researchers to balance the informativeness and efficiency of their experiment (Schönbrodt & Wagenmakers, Psychonomic Bulletin & Review, 25(1), 128-142, 2018). With BFDA, researchers can control the rate of misleading evidence and, in addition, plan for a target strength of evidence. BFDA can be applied to fixed-N and sequential designs. In this tutorial paper, we provide an introduction to BFDA and analyze how the use of informed prior distributions affects the results of the BFDA. We also present a user-friendly web-based BFDA application that allows researchers to conduct BFDAs with ease. Two practical examples highlight how researchers can use a BFDA to plan for informative and efficient research designs.
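A fixed-N BFDA can be sketched in a few lines with the BayesFactor package; the design values below (n = 50 per group, a true effect of delta = 0.5, and a Bayes factor threshold of 10) are illustrative choices, not the paper's examples.

```r
library(BayesFactor)

## Simulate the distribution of default two-sample t-test Bayes factors
## under H1 and under H0 for a candidate fixed-N design.
simulate_bf <- function(delta, n = 50, reps = 1000) {
  replicate(reps, {
    t <- t.test(rnorm(n, delta), rnorm(n), var.equal = TRUE)$statistic
    # ttest.tstat returns the log Bayes factor in $bf; exp() gives BF10.
    exp(ttest.tstat(t, n1 = n, n2 = n, rscale = sqrt(2)/2)$bf)
  })
}

set.seed(7)
bf_h1 <- simulate_bf(delta = 0.5)  # design analysis under H1
bf_h0 <- simulate_bf(delta = 0)    # design analysis under H0
mean(bf_h1 > 10)    # probability of compelling evidence for a true H1
mean(bf_h1 < 1/10)  # rate of misleading evidence when H1 is true
mean(bf_h0 < 1/10)  # probability of compelling evidence for a true H0
```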


Subjects
Bayes Theorem, Factor Analysis, Research Design, Sample Size