Results 1 - 20 of 78
1.
Psychon Bull Rev ; 30(6): 2049-2066, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37450264

ABSTRACT

Individual difference exploration of cognitive domains is predicated on being able to ascertain how well performance on tasks covaries. Yet, establishing correlations among common inhibition tasks such as Stroop or flanker tasks has proven quite difficult. It remains unclear whether this difficulty occurs because there truly is a lack of correlation or whether analytic techniques to localize correlations perform poorly in real-world contexts because of excessive measurement error from trial noise. In this paper, we explore how well correlations may be localized in large data sets with many people, tasks, and replicate trials. Using hierarchical models to separate trial noise from true individual variability, we show that trial noise in 24 extant tasks is about 8 times greater than individual variability. This degree of trial noise results in massive attenuation in correlations and instability in Spearman corrections. We then develop hierarchical models that account for variation across trials, variation across individuals, and covariation across individuals and tasks. These hierarchical models also perform poorly in localizing correlations. The advantage of these models is not in estimation efficiency, but in providing a sense of uncertainty so that researchers are less likely to misinterpret variability in their data. We discuss possible improvements to study designs to help localize correlations.
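The attenuation mechanism described above can be sketched in a few lines of Python. All numbers here are hypothetical, chosen only to mirror the reported 8:1 ratio of trial noise to individual variability; this is an illustration of the statistical point, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_people, n_trials = 200, 100     # participants; trials per condition
sd_ind, sd_trial = 1.0, 8.0       # trial noise ~8x individual variability
rho_true = 0.7                    # assumed true correlation between the two tasks

# True individual effects on two tasks, correlated at rho_true
cov = sd_ind**2 * np.array([[1.0, rho_true], [rho_true, 1.0]])
theta = rng.multivariate_normal([0.5, 0.5], cov, size=n_people)

# Observed effect = difference of two condition means, each based on n_trials trials
noise_sd = sd_trial * np.sqrt(2.0 / n_trials)
obs = theta + rng.normal(0.0, noise_sd, size=theta.shape)

r_obs = np.corrcoef(obs[:, 0], obs[:, 1])[0, 1]
atten = sd_ind**2 / (sd_ind**2 + noise_sd**2)   # attenuation factor per task
print(f"true rho = {rho_true}, expected observed ≈ {rho_true * atten:.2f}, sample = {r_obs:.2f}")
```

Even with 100 trials per condition, the observed correlation is less than half the true one, which is the attenuation the abstract describes.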


Subject(s)
Individuality , Noise , Humans , Inhibition, Psychological , Uncertainty
2.
Psychol Methods ; 28(2): 472-487, 2023 Apr.
Article in English | MEDLINE | ID: mdl-34807670

ABSTRACT

The most prominent goal when conducting a meta-analysis is to estimate the true effect size across a set of studies. This approach is problematic whenever the analyzed studies have qualitatively different results; that is, some studies show an effect in the predicted direction while others show no effect and still others show an effect in the opposite direction. In case of such qualitative differences, the average effect may be a product of different mechanisms, and therefore uninterpretable. The first question in any meta-analysis should therefore be whether all studies show an effect in the same, expected direction. To tackle this question a model with ordinal constraints is proposed, where the ordinal constraint holds for each study in the set. This "every study" model is compared with a set of alternative models, such as an unconstrained model that predicts effects in both directions. If the ordinal constraints hold, one underlying mechanism may suffice to explain the results from all studies, and this result could be supported by reduced between-study heterogeneity. A major implication is then that average effects become interpretable. We illustrate the model comparison approach using Carbajal et al.'s (2021) meta-analysis on the familiar-word-recognition effect, show how predictor analyses can be incorporated in the approach, and provide R-code for interested researchers. As is common in meta-analysis, only surface statistics (such as effect size and sample size) are provided from each study, and the modeling approach can be adapted to suit these conditions. (PsycInfo Database Record (c) 2023 APA, all rights reserved).

3.
J Cogn ; 4(1): 46, 2021.
Article in English | MEDLINE | ID: mdl-34514317

ABSTRACT

In this paper we propose a new set of questions that focus on the direction of effects. In almost all studies the direction is important. For example, in a Stroop task we expect responses to incongruent items to be slower than those to congruent ones, and this direction implies one theoretical explanation. Yet, if congruent words are slowed down relative to incongruent words we would have a completely different theoretical explanation. We ask a 'does everybody' question, such as, 'does every individual show a Stroop effect in the same direction?' Or, 'does every individual respond faster to loud tones than soft tones?' If all individuals truly have effects in the same direction, implicating a common theory, we term the differences among them quantitative individual differences. Conversely, if individuals truly have effects in different directions, implicating different theories, we term the differences among them qualitative individual differences. Here, we provide a user's guide to the question of whether individual differences are qualitative or quantitative. We discuss theoretical issues, methodological advances, new software for assessment, and, most importantly, how the question impacts theory development in cognitive science. Our hope is that this mode of analysis is a productive tool in researchers' toolkits.

4.
Behav Res Methods ; 53(1): 49-58, 2021 02.
Article in English | MEDLINE | ID: mdl-32556963

ABSTRACT

Estimating the time course of the influence of different factors in human performance is one of the major topics of research in cognitive psychology/neuroscience. Over the past decades, researchers have proposed several methods to tackle this question using latency data. Here we examine a recently proposed procedure that employs survival analyses on latency data to provide precise estimates of the timing of the first discernible influence of a given factor (e.g., word frequency on lexical access) on performance (e.g., fixation durations or response times). A number of articles have used this method in recent years, and hence an exploration of its strengths and its potential weaknesses is in order. Unfortunately, our analysis revealed that the technique has conceptual flaws, and it might lead researchers into believing that they are obtaining a measurement of processing components when, in fact, they are obtaining an uninterpretable measurement.
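The procedure under examination can be sketched in simplified form. The published method estimates a divergence point between survival curves with bootstrap resampling; the version below is a bare-bones illustration with simulated latencies, an assumed 1.5-percentage-point criterion, and no resampling.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Hypothetical latencies: ex-Gaussian-like shapes, one factor level shifted by 20 ms
rt_a = 300 + rng.exponential(100, n) + rng.normal(0, 30, n)
rt_b = 320 + rng.exponential(100, n) + rng.normal(0, 30, n)

grid = np.arange(200, 1000)                              # 1-ms bins
surv_a = (rt_a[None, :] > grid[:, None]).mean(axis=1)    # empirical survival P(RT > t)
surv_b = (rt_b[None, :] > grid[:, None]).mean(axis=1)

# Simplified criterion: earliest time at which the curves differ by >= 1.5 points
diverged = np.flatnonzero(surv_b - surv_a >= 0.015)
dp = int(grid[diverged[0]]) if diverged.size else None
print("estimated divergence point (ms):", dp)
```

The estimated point lands in the left tail of the distributions, well before most responses occur, which is what makes its interpretation as the "first discernible influence" of a factor contentious.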


Subject(s)
Reading , Humans , Reaction Time
5.
Psychol Methods ; 26(1): 74-89, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32437184

ABSTRACT

Mixed-effects models are becoming common in psychological science. Although they have many desirable features, there is still untapped potential. It is customary to view homogeneous variance as an assumption to satisfy. We argue for moving beyond that perspective, and for viewing modeling of within-person variance as an opportunity to gain a richer understanding of psychological processes. The technique to do so is based on the mixed-effects location scale model, which can simultaneously estimate mixed-effects submodels for both the mean (location) and the within-person variance (scale). We develop a framework that goes beyond assessing the submodels in isolation of one another and introduce a novel Bayesian hypothesis test for mean-variance correlations in the distribution of random effects. We first present a motivating example, which makes clear how the model can characterize mean-variance relations. We then apply the method to reaction times (RTs) gathered from 2 cognitive inhibition tasks. We find there are more individual differences in the within-person variance than the mean structure, as well as a complex web of structural mean-variance relations. This stands in contrast to the dominant view of within-person variance (i.e., "noise"). The results also point toward paradoxical within-person, as opposed to between-person, effects: several people had slower and less variable incongruent responses. This contradicts the typical pattern, wherein larger means tend to be associated with more variability. We conclude with future directions, spanning from methodological to theoretical inquiries, that can be answered with the presented methodology. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
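The location-scale idea can be illustrated by simulation. The sketch below generates person-level means and log within-person standard deviations with a built-in mean-variance correlation, then recovers that correlation with a naive two-stage estimate; all numbers are hypothetical, and a full mixed-effects location scale model would estimate location and scale jointly rather than in two stages.

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_trials = 150, 200
rho = -0.5   # hypothetical mean-variance correlation in the random effects

# Person-level random effects: location (mean RT, ms) and scale (log within-person sd)
cov = np.array([[50.0**2, rho * 50.0 * 0.3],
                [rho * 50.0 * 0.3, 0.3**2]])
loc, log_scale = rng.multivariate_normal([700.0, np.log(100.0)], cov, size=n_people).T

# Trial-level RTs for each person
rts = rng.normal(loc[:, None], np.exp(log_scale)[:, None], size=(n_people, n_trials))

# Two-stage recovery: per-person sample mean and sample sd, then correlate
m_hat, s_hat = rts.mean(axis=1), rts.std(axis=1, ddof=1)
r = np.corrcoef(m_hat, np.log(s_hat))[0, 1]
print(f"recovered mean-variance correlation: {r:.2f} (true {rho})")
```

With 200 trials per person the two-stage estimate lands near the generating value; with realistically short tasks the estimation noise grows, which is one motivation for the joint Bayesian model.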


Subject(s)
Biological Variation, Individual , Models, Psychological , Models, Statistical , Psychology/methods , Psychomotor Performance , Bayes Theorem , Humans , Inhibition, Psychological , Psychomotor Performance/physiology , Reaction Time/physiology
6.
Mem Cognit ; 49(1): 46-66, 2021 01.
Article in English | MEDLINE | ID: mdl-32935326

ABSTRACT

One of the most evidential behavioral results for two memory processes comes from Gardiner and Java (Memory & Cognition, 18, 23-30, 1990). Participants provided more "remember" than "know" responses for old words but more "know" than "remember" responses for old nonwords. Moreover, there was no effect of word/nonword status for new items. The combination of a crossover interaction for old items with an invariance for new items provides strong evidence for two distinct processes while ruling out criteria or bias explanations. Here, we report a modern replication of this study. In three experiments (Experiments 1, 2, and 4) with larger numbers of items and participants, we were unable to replicate the crossover. Instead, our data are more consistent with a single-process account. In a fourth experiment (Experiment 3), we were able to replicate Gardiner and Java's baseline results with a sure-unsure paradigm supporting a single-process explanation. It seems that Gardiner and Java's remarkable crossover result is not replicable.


Subject(s)
Mental Recall , Cognition , Humans
7.
Psychon Bull Rev ; 28(3): 750-765, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33104997

ABSTRACT

The repetition-induced truth effect refers to a phenomenon where people rate repeated statements as more likely true than novel statements. In this paper, we document qualitative individual differences in the effect. While the overwhelming majority of participants display the usual positive truth effect, a minority show the opposite pattern: they reliably discount the validity of repeated statements, which we refer to as a negative truth effect. We examine eight truth-effect data sets where individual-level data are curated. These sets are composed of 1105 individuals performing 38,904 judgments. Through Bayes factor model comparison, we show that reliable negative truth effects occur in five of the eight data sets. The negative truth effect is informative because it seems unreasonable that the mechanisms mediating the positive truth effect are the same ones that lead to a discounting of repeated statements' validity. Moreover, the presence of qualitative differences motivates a different type of analysis of individual differences based on ordinal (i.e., Which sign does the effect have?) rather than metric measures. To our knowledge, this paper reports the first such reliable qualitative differences in a cognitive task.


Subject(s)
Individuality , Judgment/physiology , Adult , Bayes Theorem , Humans , Qualitative Research
9.
Psychol Methods ; 24(5): 606-621, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31464466

ABSTRACT

Most meta-analyses focus on the behavior of meta-analytic means. In many cases, however, this mean is difficult to defend as a construct because the underlying distribution of studies reflects many factors, including how we as researchers choose to design studies. We present an alternative goal for meta-analysis. The analyst may ask about relations that are stable across all the studies. In a typical meta-analysis, there is a hypothesized direction (e.g., that violent video games increase, rather than decrease, aggressive behavior). We ask whether all studies in a meta-analysis have true effects in the hypothesized direction. If so, this is an example of a stable relation across all the studies. We propose 4 models: (a) all studies are truly null; (b) all studies share a single true nonzero effect; (c) studies differ, but all true effects are in the same direction; and (d) some study effects are truly positive, whereas others are truly negative. We develop Bayes factor model comparison for these models and apply them to 4 extant meta-analyses to show their usefulness. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
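Model (c) above, "all true effects are in the same direction," can be assessed with an encompassing-prior Bayes factor: the ratio of posterior to prior probability that the order constraint holds. The sketch below uses made-up study effects, an independent conjugate-normal posterior per study rather than the authors' full model, and an assumed N(0, 0.5²) encompassing prior.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical study effect sizes and standard errors (not from any real meta-analysis)
d  = np.array([0.30, 0.15, 0.42, 0.05, 0.25])
se = np.array([0.10, 0.12, 0.15, 0.08, 0.11])

tau2 = 0.5**2                                  # encompassing prior: theta_i ~ N(0, 0.5^2)
post_var  = 1.0 / (1.0 / tau2 + 1.0 / se**2)   # conjugate normal update, per study
post_mean = post_var * d / se**2

M = 200_000
prior = rng.normal(0.0, 0.5, size=(M, d.size))
post  = rng.normal(post_mean, np.sqrt(post_var), size=(M, d.size))

# Bayes factor for "all true effects positive" vs the unconstrained model:
# ratio of posterior to prior mass satisfying the order constraint
p_prior = (prior > 0).all(axis=1).mean()       # ≈ 0.5**5
p_post  = (post  > 0).all(axis=1).mean()
bf = p_post / p_prior
print(f"BF (all-positive vs unconstrained) ≈ {bf:.1f}")
```

The constraint gains support to the extent that the posterior concentrates in the all-positive orthant, which by symmetry holds only 1/2^k of the prior mass for k studies.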


Subject(s)
Bayes Theorem , Meta-Analysis as Topic , Models, Statistical , Psychology/methods , Humans
10.
Psychol Sci ; 30(4): 606-616, 2019 04.
Article in English | MEDLINE | ID: mdl-30843758

ABSTRACT

Researchers have suggested that acute exposure to violent video games is a cause of aggressive behavior. We tested this hypothesis by using violent and nonviolent games that were closely matched, collecting a large sample, and using a single outcome. We randomly assigned 275 male undergraduates to play a first-person-shooter game modified to be either violent or less violent and hard or easy. After completing the game-play session, participants were provoked by a confederate and given an opportunity to behave aggressively. Neither game violence nor game difficulty predicted aggressive behavior. Incidentally, we found that 2D:4D digit ratio, thought to index prenatal testosterone exposure, did not predict aggressive behavior. Results do not support acute violent-game exposure and low 2D:4D ratio as causes of aggressive behavior.


Subject(s)
Aggression , Exposure to Violence/psychology , Fingers/anatomy & histology , Video Games/adverse effects , Adolescent , Bayes Theorem , Humans , Linear Models , Male , Students , Young Adult
11.
Psychon Bull Rev ; 26(2): 452-467, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30911907

ABSTRACT

In modern individual-difference studies, researchers often correlate performance on various tasks to uncover common latent processes. Yet, in some sense, the results have been disappointing as correlations among tasks that seemingly have processes in common are often low. A pressing question then is whether these attenuated correlations reflect statistical considerations, such as a lack of individual variability on tasks, or substantive considerations, such as that inhibition in different tasks is not a unified concept. One problem in addressing this question is that researchers aggregate performance across trials to tally individual-by-task scores. It is tempting to think that aggregation is fine and that everything comes out in the wash. But as shown here, this aggregation may greatly attenuate measures of effect size and correlation. We propose an alternative analysis of task performance that is based on accounting for trial-by-trial variability along with the covariation of individuals' performance across tasks. The implementation is through common hierarchical models, and this treatment rescues classical concepts of effect size, reliability, and correlation for studying individual differences with experimental tasks. Using recent data from Hedge et al. (Behavior Research Methods, 50(3), 1166-1186, 2018), we show that there is Bayes-factor support for a lack of correlation between the Stroop and flanker tasks. This support for a lack of correlation indicates a psychologically relevant result: Stroop and flanker inhibition are seemingly unrelated, contradicting unified concepts of inhibition.
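The classical repair for aggregation attenuation is the Spearman correction, dividing the observed correlation by the geometric mean of the two tasks' reliabilities (here estimated from split halves via the Spearman-Brown formula). The sketch below, with hypothetical numbers, shows both the attenuation and the correction's instability:

```python
import numpy as np

rng = np.random.default_rng(11)
n_people, n_trials = 300, 60          # trials per condition, per task (hypothetical)
rho_true, sd_ind, sd_trial = 0.5, 25.0, 120.0   # ms

cov = sd_ind**2 * np.array([[1.0, rho_true], [rho_true, 1.0]])
theta = rng.multivariate_normal([60.0, 60.0], cov, size=n_people)  # true effects

def task_scores(true_eff):
    # Per-person effect scores from two halves of the trials (n_trials/2 each)
    noise_sd = sd_trial * np.sqrt(2.0 / (n_trials / 2))
    return (true_eff + rng.normal(0, noise_sd, n_people),
            true_eff + rng.normal(0, noise_sd, n_people))

a1, a2 = task_scores(theta[:, 0])
b1, b2 = task_scores(theta[:, 1])

def spearman_brown(r_half):           # reliability of the full-length score
    return 2 * r_half / (1 + r_half)

rel_a = spearman_brown(np.corrcoef(a1, a2)[0, 1])
rel_b = spearman_brown(np.corrcoef(b1, b2)[0, 1])
r_obs = np.corrcoef((a1 + a2) / 2, (b1 + b2) / 2)[0, 1]
r_corrected = r_obs / np.sqrt(rel_a * rel_b)
print(f"observed r = {r_obs:.2f}, disattenuated r = {r_corrected:.2f} (true {rho_true})")
```

The corrected value is unbiased on average but has a large sampling variance when reliabilities are low, which is why the hierarchical treatment in the abstract is preferable to the two-step correction.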


Subject(s)
Individuality , Psychometrics/statistics & numerical data , Task Performance and Analysis , Adult , Bayes Theorem , Correlation of Data , Humans , Inhibition, Psychological , Models, Statistical , Reaction Time/physiology , Reproducibility of Results , Stroop Test
12.
PLoS One ; 14(2): e0213461, 2019.
Article in English | MEDLINE | ID: mdl-30818364

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0207239.].

13.
Psychon Bull Rev ; 26(3): 772-789, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30251148

ABSTRACT

A prevailing notion in experimental psychology is that individuals' performance in a task varies gradually in a continuous fashion. In a Stroop task, for example, the true average effect may be 50 ms with a standard deviation of say 30 ms. In this case, some individuals will have greater effects than 50 ms, some will have smaller, and some are forecasted to have effects that are negative in sign: they respond faster to incongruent items than to congruent ones! But are there people who have a true negative effect in Stroop or any other task? We highlight three qualitatively different effects: negative effects, null effects, and positive effects. The main goal of this paper is to develop models that allow researchers to explore whether all three are present in a task: Do all individuals show a positive effect? Are there individuals with truly no effect? Are there any individuals with negative effects? We develop a family of Bayesian hierarchical models that capture a variety of these constraints. We apply this approach to Stroop interference experiments and a near-liminal priming experiment where the prime may be below and above threshold for different people. We show that in most tasks people are quite alike: for example, everyone has positive Stroop effects, and nobody fails to Stroop or Stroops negatively. We also show a case in which, under very specific circumstances, we could entice some people to not Stroop at all.


Subject(s)
Individuality , Models, Psychological , Stroop Test , Bayes Theorem , Cognition , Humans , Psychological Theory , Psychometrics , Reaction Time , Task Performance and Analysis
14.
PLoS One ; 13(11): e0207239, 2018.
Article in English | MEDLINE | ID: mdl-30475810

ABSTRACT

Sample means comparisons are a fundamental and ubiquitous approach to interpreting experimental psychological data. Yet, we argue that the sample and effect sizes in published psychological research are frequently so small that sample means are insufficiently accurate to determine whether treatment effects have occurred. Generally, an estimator should be more accurate than any benchmark that systematically ignores information about the relations among experimental conditions. We consider two such benchmark estimators: one that randomizes the relations among conditions and another that always assumes no treatment effects. We show conditions under which these benchmark estimators estimate the true parameters more accurately than sample means. This perverse situation can occur even when effects are statistically significant at traditional levels. Our argument motivates the need for regularized estimates, such as those used in lasso, ridge, and hierarchical Bayes techniques.
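The argument above can be demonstrated with a small simulation. With many conditions, small true effects, and modest samples, sample means are beaten in mean squared error by the benchmark that assumes no effects at all, and an empirical-Bayes shrinkage estimate beats both. Numbers are hypothetical; the shrinkage weight is a simple plug-in estimate, not any particular paper's method.

```python
import numpy as np

rng = np.random.default_rng(5)
K, n = 200, 20                       # conditions; observations per condition
true_sd, noise_sd = 0.15, 1.0        # small true effects, sizable observation noise
theta = rng.normal(0.0, true_sd, K)                        # true condition effects
ybar  = theta + rng.normal(0.0, noise_sd / np.sqrt(n), K)  # sample means

s2 = noise_sd**2 / n                  # sampling variance of each sample mean
w = max(0.0, 1.0 - s2 / ybar.var())   # empirical-Bayes shrinkage weight toward zero
shrunk = w * ybar

mse = lambda est: float(np.mean((est - theta) ** 2))
print(f"sample-means MSE : {mse(ybar):.4f}")
print(f"all-zero MSE     : {mse(np.zeros(K)):.4f}")
print(f"shrinkage MSE    : {mse(shrunk):.4f}")
```

This is the perverse situation the abstract describes: ignoring the data entirely estimates the true parameters more accurately than the sample means do, and regularized estimates restore sensible behavior.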


Subject(s)
Psychology/statistics & numerical data , Psychometrics/statistics & numerical data , Bayes Theorem , Benchmarking/statistics & numerical data , Data Accuracy , Data Interpretation, Statistical , Humans , Likelihood Functions , Models, Statistical , Research Design/statistics & numerical data , Sample Size
15.
Psychon Bull Rev ; 25(6): 2380-2388, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29740762

ABSTRACT

Cognitive psychologists are familiar with how their expertise in understanding human perception, memory, and decision-making is applicable to the justice system. They may be less familiar with how their expertise in statistical decision-making and their comfort working in noisy real-world environments is just as applicable. Here we show how this expertise in ideal-observer models may be leveraged to calculate the probability of guilt of Gary Leiterman, a man convicted of murder on the basis of DNA evidence. We show by common probability theory that Leiterman is likely a victim of a tragic contamination event rather than a murderer. Making any calculation of the probability of guilt necessarily relies on subjective assumptions. The conclusion about Leiterman's innocence is not overly sensitive to the assumptions-the probability of innocence remains high for a wide range of reasonable assumptions. We note that cognitive psychologists may be well suited to make these calculations because as working scientists they may be comfortable with the role a reasonable degree of subjectivity plays in analysis.
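The structure of such a calculation is ordinary Bayes' rule over competing explanations of a DNA match. The numbers below are purely hypothetical placeholders, not the assumptions from the Leiterman analysis; the point is only that a contamination probability orders of magnitude larger than the prior probability of guilt dominates the posterior.

```python
# Schematic Bayes-rule calculation with purely hypothetical numbers.
p_guilt_prior = 1e-6          # prior that this specific person committed the crime
p_contam = 1e-3               # probability the lab contaminated the sample
p_match_given_guilt = 1.0     # a guilty person's DNA would match
p_match_given_contam = 1.0    # a contaminated sample would also match
p_match_given_neither = 1e-9  # random-match probability otherwise

num = p_guilt_prior * p_match_given_guilt
den = (num
       + p_contam * p_match_given_contam
       + (1 - p_guilt_prior - p_contam) * p_match_given_neither)
p_guilt_post = num / den
print(f"posterior probability of guilt ≈ {p_guilt_post:.4f}")
```

Under these assumptions the match raises the probability of guilt only to about the ratio of the guilt prior to the contamination probability, and varying the inputs over a wide reasonable range leaves that conclusion intact, which is the robustness claim in the abstract.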


Subject(s)
Cognitive Science , Crime/statistics & numerical data , Data Interpretation, Statistical , Expert Testimony , Forensic Psychology , Probability , Bayes Theorem , Crime/legislation & jurisprudence , Crime/psychology , Criminal Psychology , Decision Making , Humans , Law Enforcement , Male , Michigan
16.
Psychon Bull Rev ; 25(1): 102-113, 2018 02.
Article in English | MEDLINE | ID: mdl-29441460

ABSTRACT

In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
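For a normal mean with known variance, the spike-and-slab machinery is available in closed form, which makes the unification concrete: the model-averaged effect estimate is the slab's shrunken posterior mean weighted by the posterior probability of the slab, itself a function of the Bayes factor. The sample statistics and prior scale below are hypothetical.

```python
import numpy as np
from scipy import stats

n, ybar, sigma = 50, 0.35, 1.0     # hypothetical sample size, mean, known sd
tau = 0.5                          # slab (effect-model) prior sd
prior_odds = 1.0                   # spike and slab equally likely a priori

se2 = sigma**2 / n
# Marginal likelihood of the sample mean under each mixture component
m_spike = stats.norm.pdf(ybar, 0.0, np.sqrt(se2))          # theta = 0 exactly
m_slab  = stats.norm.pdf(ybar, 0.0, np.sqrt(se2 + tau**2)) # theta ~ N(0, tau^2)
bf10 = m_slab / m_spike

p_slab = bf10 * prior_odds / (1 + bf10 * prior_odds)
shrunk_mean = (tau**2 / (tau**2 + se2)) * ybar   # posterior mean under the slab
estimate = p_slab * shrunk_mean                  # model-averaged effect estimate
print(f"BF10 = {bf10:.2f}, P(slab|data) = {p_slab:.2f}, estimate = {estimate:.3f}")
```

When the Bayes factor favors the spike, the estimate is pulled strongly toward zero; when it favors the slab, the estimate approaches the ordinary shrunken posterior mean, showing how estimation and model comparison are two views of one posterior.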


Subject(s)
Bayes Theorem , Psychology , Humans
18.
Psychon Bull Rev ; 25(1): 35-57, 2018 02.
Article in English | MEDLINE | ID: mdl-28779455

ABSTRACT

Bayesian parameter estimation and Bayesian hypothesis testing present attractive alternatives to classical inference using confidence intervals and p values. In part I of this series we outline ten prominent advantages of the Bayesian approach. Many of these advantages translate to concrete opportunities for pragmatic researchers. For instance, Bayesian hypothesis testing allows researchers to quantify evidence and monitor its progression as data come in, without needing to know the intention with which the data were collected. We end by countering several objections to Bayesian hypothesis testing. Part II of this series discusses JASP, a free and open source software program that makes it easy to conduct Bayesian estimation and testing for a range of popular statistical scenarios (Wagenmakers et al. this issue).


Subject(s)
Bayes Theorem , Psychology , Humans , Research Design
19.
Psychon Bull Rev ; 25(1): 58-76, 2018 02.
Article in English | MEDLINE | ID: mdl-28685272

ABSTRACT

Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.


Subject(s)
Bayes Theorem , Psychology , Software , Humans , Research Design
20.
Psychol Methods ; 22(4): 779-798, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29265850

ABSTRACT

Model comparison in Bayesian mixed models is becoming popular in psychological science. Here we develop a set of nested models that account for order restrictions across individuals in psychological tasks. An order-restricted model addresses the question "Does everybody," as in "Does everybody show the usual Stroop effect," or "Does everybody respond more quickly to intense noises than subtle ones?" The crux of the modeling is the instantiation of tens or hundreds of order restrictions simultaneously, one for each participant. To our knowledge, the problem is intractable in frequentist contexts but relatively straightforward in Bayesian ones. We develop a Bayes factor model-comparison strategy using Zellner and Siow's default g-priors appropriate for assessing whether effects obey equality and order restrictions. We apply the methodology to seven data sets from Stroop, Simon, and Eriksen interference tasks. Not too surprisingly, we find that everybody Stroops-that is, for all people congruent colors are truly named more quickly than incongruent ones. But, perhaps surprisingly, we find these order constraints are violated for some people in the Simon task, that is, for these people spatially incongruent responses occur truly more quickly than congruent ones! Implications of the modeling and conjectures about the task-related differences are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
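The per-participant order restrictions can again be assessed by comparing posterior to prior mass on the constrained region. The sketch below uses simulated Stroop-like data, an empirical-Bayes normal approximation to each person's posterior instead of the authors' g-prior model, and hypothetical numbers throughout; with k participants and a sign-symmetric prior, the prior mass on "everybody positive" is 1/2^k.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n_sub, n_trials = 40, 80
true_mu, true_sd, trial_sd = 60.0, 20.0, 150.0   # ms; hypothetical Stroop-like numbers

theta = rng.normal(true_mu, true_sd, n_sub)                          # true effects
d = theta + rng.normal(0, trial_sd * np.sqrt(2 / n_trials), n_sub)   # observed effects

# Shrink observed effects toward the grand mean (empirical-Bayes approximation;
# a full analysis samples the joint hierarchical posterior instead)
s2 = trial_sd**2 * 2 / n_trials
tau2 = max(d.var() - s2, 1e-9)
w = tau2 / (tau2 + s2)
post_mean = w * d + (1 - w) * d.mean()
post_sd = np.sqrt(w * s2)

# P(every individual's true effect > 0 | data), independent-normal approximation
p_post = stats.norm.cdf(post_mean / post_sd).prod()
p_prior = 0.5 ** n_sub            # sign-symmetric prior over each individual effect
bf_everybody = p_post / p_prior
print(f"P(all positive | data) = {p_post:.3f}, BF vs unconstrained ≈ {bf_everybody:.2e}")
```

Because the prior probability of the joint constraint is astronomically small, even moderate posterior mass on "everybody Stroops" yields an enormous Bayes factor in its favor.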


Subject(s)
Bayes Theorem , Data Interpretation, Statistical , Models, Statistical , Neuropsychological Tests/statistics & numerical data , Psychomotor Performance , Adult , Humans