Results 1 - 13 of 13
1.
Psychol Assess ; 28(3): 279-93, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26214016

ABSTRACT

Clinical tests used for psychodiagnostic purposes, such as the well-known Alzheimer's Disease Assessment Scale: Cognitive subscale (ADAS-Cog), include a free-recall task. The free-recall task taps into latent cognitive processes associated with the learning and memory components of human cognition, any of which might be impaired with the progression of Alzheimer's disease (AD). A hidden Markov model of free recall is developed to measure the latent cognitive processes used during the free-recall task. In turn, these cognitive measurements give us insight into the degree to which normal cognitive functions are differentially impaired by medical conditions such as AD and related disorders. The model is used to analyze free-recall data obtained from healthy elderly participants, participants diagnosed with mild cognitive impairment, and participants diagnosed with early AD. The model is specified hierarchically to handle item differences due to the serial position curve in free recall, as well as within-group individual differences in participants' recall abilities. Bayesian hierarchical inference is used to estimate the model. The model analysis suggests that the impaired patients have (1) long-term memory encoding deficits, (2) short-term memory (STM) retrieval deficits for all but very short time intervals, (3) poorer transfer into long-term memory of items successfully retrieved from STM, and (4) poorer retention of items encoded into long-term memory after longer delays. Yet the impaired patients appear to have no deficit in immediate recall of words encoded into long-term memory, or in STM retrieval at very short time intervals.
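For readers new to this modeling approach, the following is a minimal, hypothetical Python sketch of the general idea behind a hidden Markov account of recall: each studied item occupies a latent memory state, moves between states across study trials, and is recalled with a state-dependent probability. The state labels, transition probabilities, and recall probabilities below are illustrative assumptions, not the published model or its parameter estimates.

import numpy as np

rng = np.random.default_rng(0)

# Latent memory states for a studied item (illustrative labels: indices 0, 1, 2)
STATES = ["unstored", "short_term", "long_term"]

# Hypothetical per-trial transition matrix: rows = current state, columns = next state
TRANSITION = np.array([
    [0.40, 0.45, 0.15],   # unstored   -> unstored / STM / LTM
    [0.10, 0.50, 0.40],   # short_term -> unstored / STM / LTM
    [0.00, 0.00, 1.00],   # long_term treated as absorbing in this toy version
])

# Hypothetical probability of recalling an item given its latent state
P_RECALL = np.array([0.02, 0.60, 0.95])

def simulate_item(n_trials):
    """Simulate one item's latent state path and its recall outcomes."""
    state = 0  # every item starts unstored
    recalled = []
    for _ in range(n_trials):
        state = rng.choice(3, p=TRANSITION[state])
        recalled.append(rng.random() < P_RECALL[state])
    return recalled

# Recall proportions across study-test trials for a toy sample of items
data = np.array([simulate_item(n_trials=5) for _ in range(1000)])
print("Proportion recalled per trial:", data.mean(axis=0).round(2))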


Subject(s)
Cognition Disorders/diagnosis , Cognition Disorders/physiopathology , Memory Disorders/diagnosis , Memory Disorders/physiopathology , Models, Psychological , Aged , Aged, 80 and over , Cognition Disorders/complications , Female , Humans , Longitudinal Studies , Male , Memory Disorders/complications , Memory, Short-Term/physiology , Mental Recall/physiology , Middle Aged , Neuropsychological Tests , Psychometrics , Reproducibility of Results
2.
Am J Psychol ; 128(1): 61-75, 2015.
Article in English | MEDLINE | ID: mdl-26219174

ABSTRACT

Psychological research can take a variety of directions while building on theoretical concepts that are commonly shared among the population of researchers. We investigate the question of how agreement or consensus on basic scientific concepts can be measured. Our approach to the problem is based on a state-of-the-art cognitive psychometric technique, implemented in the theoretical framework of cultural consensus theory. With this approach, consensus-based answers for questions exploring shared knowledge can be derived while basic factors of the human decision-making process are accounted for. An example of the approach is provided by examining the definition of behavior, based on responses from researchers and students. We conclude that the consensus definition of behavior is "a response by the whole individual to external or internal stimulus, influenced by the internal processes of the individual, and is typically not a developmental change." The general goal of the article is to demonstrate the utility of a cultural consensus theory-based approach as a method for investigating what current, working definitions of scientific concepts are.


Subject(s)
Concept Formation/physiology , Consensus , Models, Psychological , Psychometrics/methods , Terminology as Topic , Adult , Female , Humans , Male , Young Adult
3.
Educ Psychol Meas ; 75(1): 57-77, 2015 Feb.
Article in English | MEDLINE | ID: mdl-29795812

ABSTRACT

Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research.
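As a rough illustration of how item easiness/difficulty can enter a continuous consensus model, one plausible reading of the two extensions described above writes the rating of informant i on item k as the latent consensus value plus noise whose scale depends on informant competence E_i and item difficulty lambda_k; the article's exact parameterization may differ:

X_{ik} = T_k + \epsilon_{ik}, \qquad \epsilon_{ik} \sim \mathcal{N}(0, \sigma_{ik}^2),
\sigma_{ik} = \lambda_k / E_i \;\;(\text{multiplicative}) \qquad \text{or} \qquad \sigma_{ik} = \lambda_k + 1/E_i \;\;(\text{additive}),

where T_k is the consensus value of item k. On this reading, the multiplicative form rescales every informant's error by the same item factor, which is what links it to Rasch-type models.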

4.
Psychometrika ; 80(1): 205-35, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24277381

ABSTRACT

Multinomial processing tree (MPT) models are theoretically motivated stochastic models for the analysis of categorical data. Here we focus on a crossed-random effects extension of the Bayesian latent-trait pair-clustering MPT model. Our approach assumes that participant and item effects combine additively on the probit scale and postulates (multivariate) normal distributions for the random effects. We provide a WinBUGS implementation of the crossed-random effects pair-clustering model and an application to novel experimental data. The present approach may be adapted to handle other MPT models.
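In the notation of a generic MPT link probability, the additive probit structure described above amounts to the following (symbols here are generic, not necessarily the paper's notation):

\theta_{ij} = \Phi(\mu + \delta_i + \varepsilon_j), \qquad \delta_i \sim \mathcal{N}(0, \sigma_\delta^2), \qquad \varepsilon_j \sim \mathcal{N}(0, \sigma_\varepsilon^2),

where \Phi is the standard normal distribution function, \mu is the population-level probit effect, and \delta_i and \varepsilon_j are the participant and item random effects; with several model parameters, the participant effects would be multivariate normal with a free covariance matrix.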


Subject(s)
Bayes Theorem , Data Interpretation, Statistical , Humans , Models, Psychological , Models, Statistical
5.
Psychometrika ; 80(1): 151-81, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24318769

ABSTRACT

A Cultural Consensus Theory approach for ordinal data is developed, leading to a new model for ordered polytomous data. The model introduces a novel way of measuring response biases and also measures consensus item values, a consensus response scale, item difficulty, and informant knowledge. The model is extended as a finite mixture model to fit both simulated and real multicultural data, in which subgroups of informants have different sets of consensus item values. The extension is thus a form of model-based clustering for ordinal data. The hierarchical Bayesian framework is utilized for inference, and two posterior predictive checks are developed to verify the central assumptions of the model.
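One standard way to realize an ordered-polytomous consensus model of this kind is an ordered-probit structure in which an informant's latent appraisal of an item is compared against a personal set of thresholds; this is only an illustrative sketch of that general structure, not the article's exact specification:

Z_{ik} \sim \mathcal{N}(T_k, 1/E_i), \qquad Y_{ik} = c \;\text{ iff }\; \tau_{i,c-1} < Z_{ik} \le \tau_{i,c},

where T_k is the consensus location of item k, E_i the knowledge (precision) of informant i, and the ordered informant-specific thresholds \tau_{i,c} carry the shared response scale and individual response biases.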


Subject(s)
Data Interpretation, Statistical , Models, Theoretical , Psychometrics/methods , Signal Detection, Psychological , Bayes Theorem , Humans
6.
Psychometrika ; 80(2): 341-64, 2015 Jun.
Article in English | MEDLINE | ID: mdl-24327065

ABSTRACT

Cultural Consensus Theory (CCT) models have been applied extensively across research domains in the social and behavioral sciences in order to explore shared knowledge and beliefs. CCT models operate on response data in which the answer key is latent. The current paper develops methods to enhance the application of these models by providing the appropriate specifications for hierarchical Bayesian inference. A primary contribution is the methodology for integrating the use of covariates into CCT models. More specifically, both person- and item-related parameters are introduced as random effects that can respectively account for patterns of inter-individual and inter-item variability.
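A natural way to realize the covariate integration described above, offered here only as a hypothetical sketch (the link function and symbols are assumptions), is to regress a transformed person parameter on person-level covariates with a residual random effect:

\Phi^{-1}(E_i) = \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i, \qquad u_i \sim \mathcal{N}(0, \sigma_u^2),

with an analogous regression for item-related parameters, so that \boldsymbol{\beta} captures systematic inter-individual variability and u_i the remaining person-level heterogeneity.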


Subject(s)
Bayes Theorem , Models, Statistical , Psychometrics , Algorithms
7.
Psychol Bull ; 139(6): 1204-12, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24188419

ABSTRACT

Pazzaglia, Dube, and Rotello (2013) have provided a lengthy critique of threshold and continuous models of recognition memory. Although the early pages of their article focus mostly on the problems they see with 3 vintage threshold models compared with models from signal detection theory (SDT), it becomes clear rather quickly that Pazzaglia et al. are concerned more generally with problems they see with multinomial processing tree (MPT) models. First, we focus on Pazzaglia et al.'s discussion of the evidence concerning receiver operating characteristics (ROCs) in simple recognition memory, then we consider problems they raise with a subclass of MPT models for more complex recognition memory paradigms, and finally we discuss the difference between scientific models and measurement models in the context of MPT and SDT models in general. We argue that Pazzaglia et al. have not adequately considered the evidence relevant to the viability of the simple threshold models and that they have not adequately represented the issues concerning validating a cognitive measurement model. We further argue that selective influence studies and model flexibility studies are as important as studies showing that a model can fit behavioral data. In particular, we note that despite over a half century of effort, no generally accepted scientific theory of recognition memory has emerged and that it is unlikely to ever emerge with studies using standard behavioral measures. Instead, we assert that useful measurement models of both the SDT and the MPT type have been and should continue to be developed.


Subject(s)
Memory/physiology , Mental Processes/physiology , Models, Psychological , Recognition, Psychology/physiology , Signal Detection, Psychological/physiology , Humans
8.
Behav Res Methods ; 42(3): 836-46, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20805606

ABSTRACT

Multinomial processing tree models form a popular class of statistical models for categorical data that have applications in various areas of psychological research. As in all statistical models, establishing which parameters are identified is necessary for model inference and selection on the basis of the likelihood function, and for the interpretation of the results. The required calculations to establish global identification can become intractable in complex models. We show how to establish local identification in multinomial processing tree models, based on formal methods independently proposed by Catchpole and Morgan (1997) and by Bekker, Merckens, and Wansbeek (1994). This approach is illustrated with multinomial processing tree models for the source-monitoring paradigm in memory research.
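The rank condition underlying these methods can also be checked numerically: a parameter point is locally identified when the Jacobian of the category probabilities with respect to the parameters has full column rank there. The Python sketch below illustrates this on a hypothetical two-parameter one-high-threshold MPT; the model and parameter values are illustrative assumptions, not the source-monitoring models analyzed in the article.

import numpy as np

def category_probs(theta):
    """Category probabilities of a simple one-high-threshold MPT
    (hypothetical two-parameter example: d = detection, g = guessing)."""
    d, g = theta
    return np.array([
        d + (1 - d) * g,      # old item, "old" response
        (1 - d) * (1 - g),    # old item, "new" response
        g,                    # new item, "old" response
        1 - g,                # new item, "new" response
    ])

def local_identification_rank(theta, eps=1e-6):
    """Numerical Jacobian of category probabilities w.r.t. parameters;
    full column rank at theta indicates local identification."""
    theta = np.asarray(theta, dtype=float)
    base = category_probs(theta)
    jac = np.zeros((base.size, theta.size))
    for k in range(theta.size):
        step = theta.copy()
        step[k] += eps
        jac[:, k] = (category_probs(step) - base) / eps
    return np.linalg.matrix_rank(jac), theta.size

rank, n_params = local_identification_rank([0.6, 0.4])
print(f"Jacobian rank {rank} of {n_params} parameters")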


Subject(s)
Decision Trees , Models, Statistical , Algorithms , Humans , Likelihood Functions , Memory/physiology , Stochastic Processes
9.
J Math Psychol ; 54(3): 291-303, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20514139

ABSTRACT

Multinomial processing tree (MPT) modeling is a statistical methodology that has been widely and successfully applied for measuring hypothesized latent cognitive processes in selected experimental paradigms. This paper concerns the model complexity of MPT models. Complexity is a key and necessary concept to consider in the evaluation and selection of quantitative models. A complex model with many parameters often overfits, capturing noise above and beyond the underlying regularities, and therefore should be appropriately penalized. It has been well established and demonstrated in multiple studies that, in addition to the number of parameters, a model's functional form, which refers to the way in which parameters are combined in the model equation, can also have significant effects on complexity. Given that MPT models vary greatly in their functional forms (tree structures and parameter/category assignments), it would be of interest to evaluate their effects on complexity. Addressing this issue from the minimum description length (MDL) viewpoint, we prove a series of propositions concerning the various ways in which functional form contributes to the complexity of MPT models. Computational issues of complexity are also discussed.
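For reference, MDL approaches of this kind typically quantify the functional-form contribution through the Fisher information approximation (FIA) to the normalized maximum likelihood; in its usual formulation, for a model with k parameters fitted to n observations,

\mathrm{FIA} = -\ln L(\hat{\theta} \mid y) + \frac{k}{2} \ln \frac{n}{2\pi} + \ln \int_{\Theta} \sqrt{\det I(\theta)}\, d\theta,

where I(\theta) is the Fisher information matrix per observation. The middle term penalizes the number of parameters, while the integral term is where tree structure and parameter-to-category assignments (functional form) enter.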

10.
Psychon Bull Rev ; 17(3): 275-86, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20551349

ABSTRACT

Multinomial processing tree (MPT) modeling has been widely and successfully applied as a statistical methodology for measuring hypothesized latent cognitive processes in selected experimental paradigms. In this article, we address the problem of selecting the best MPT model from a set of scientifically plausible MPT models, given observed data. We introduce a minimum description length (MDL) based model-selection approach that overcomes the limitations of existing methods such as the G²-based likelihood ratio test, the Akaike information criterion, and the Bayesian information criterion. To help ease the computational burden of implementing MDL, we provide a computer program in MATLAB that performs MDL-based model selection for any MPT model, with or without inequality constraints. Finally, we discuss applications of the MDL approach to well-studied MPT models with real data sets collected in two different experimental paradigms: source monitoring and pair clustering. The aforementioned MATLAB program may be downloaded from http://pbr.psychonomic-journals.org/content/supplemental.
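For context, the baseline criteria named above are simple functions of the observed counts and the maximized log-likelihood; a minimal Python sketch (with hypothetical counts and fitted values, not output of the article's MATLAB program) is:

import numpy as np

def g_squared(observed, expected):
    """Likelihood-ratio statistic G^2 = 2 * sum O * ln(O/E) over nonzero cells."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    mask = observed > 0
    return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

def aic(log_lik, k):
    """Akaike information criterion for a model with k free parameters."""
    return -2.0 * log_lik + 2.0 * k

def bic(log_lik, k, n):
    """Bayesian information criterion for k free parameters and n observations."""
    return -2.0 * log_lik + k * np.log(n)

# Hypothetical category counts and model-implied expected counts
observed = np.array([45, 15, 25, 35])
expected = np.array([42.0, 18.0, 27.0, 33.0])
print("G^2 =", round(g_squared(observed, expected), 3))
print("AIC =", aic(log_lik=-150.2, k=3), " BIC =", round(bic(log_lik=-150.2, k=3, n=120), 1))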


Subject(s)
Cognition , Data Collection/statistics & numerical data , Mathematical Computing , Models, Statistical , Software , Attention , Bayes Theorem , Humans , Likelihood Functions , Recognition, Psychology , Verbal Learning
11.
Wiley Interdiscip Rev Cogn Sci ; 1(5): 759-765, 2010 Sep.
Article in English | MEDLINE | ID: mdl-26271659

ABSTRACT

Mathematical psychology is a sub-field of psychology that started in the 1950s and has continued to grow as an important contributor to formal psychological theory, especially in cognitive areas such as learning, memory, classification, choice response time, decision making, attention, and problem solving. In addition, several scientific sub-areas were originated by mathematical psychologists, such as the foundations of measurement, stochastic memory models, and psychologically motivated reformulations of expected utility theory. Mathematical psychology does not include all uses of mathematics and statistics in psychology; indeed, there is a long history of such uses, especially in the areas of perception and psychometrics. What is most distinctive about mathematical psychology is its approach to theory construction. While accepting the behaviorist dictum that the data in psychology must be observable and replicable, mathematical models are specified in terms of unobservable formal constructs that can predict detailed aspects of data across multiple experimental and natural settings. By now almost all the substantive areas of cognitive and experimental psychology have formal mathematical models and theories, and many of these are due to researchers who identify with mathematical psychology.

12.
Psychon Bull Rev ; 15(4): 713-31, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18792498

ABSTRACT

In cognitive modeling, data are often categorical observations taken over participants and items. Usually subsets of these observations are pooled and analyzed by a cognitive model assuming that the category counts come from a multinomial distribution with the same model parameters underlying all observations. It is well known that if there are individual differences in participants and/or items, a model analysis of the pooled data may be quite misleading, and in such cases it may be appropriate to augment the cognitive model with parametric random-effects assumptions. On the other hand, if random effects are incorporated into a cognitive model when they are not needed, the resulting model may be more flexible than the multinomial model that assumes no heterogeneity, and this may lead to overfitting. This article presents Monte Carlo statistical tests for directly detecting individual participant and/or item heterogeneity that depend only on the data structure itself. These tests are based on the fact that heterogeneity in participants and/or items results in overdispersion of certain category count statistics. It is argued that the methods developed in the article should be applied to any set of participant × item categorical data prior to cognitive model-based analyses.
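As an illustration of the general idea (not the article's specific test statistics), a parametric Monte Carlo check for participant heterogeneity can compare the observed dispersion of one category's counts across participants with its distribution under a homogeneous binomial model; everything below, including the simulated data, is hypothetical.

import numpy as np

rng = np.random.default_rng(1)

def dispersion_p_value(counts_per_participant, n_items, n_sims=5000):
    """Monte Carlo test of overdispersion in per-participant counts of one
    response category, assuming each participant answers n_items items.
    Under homogeneity, counts are Binomial(n_items, p) with a common p."""
    counts = np.asarray(counts_per_participant)
    n_participants = counts.size
    p_hat = counts.sum() / (n_participants * n_items)
    observed_var = counts.var(ddof=1)
    sim_vars = np.empty(n_sims)
    for s in range(n_sims):
        sim = rng.binomial(n_items, p_hat, size=n_participants)
        sim_vars[s] = sim.var(ddof=1)
    return np.mean(sim_vars >= observed_var)  # small value flags heterogeneity

# Hypothetical data: 20 participants, 30 items each, heterogeneous by construction
counts = rng.binomial(30, rng.beta(2, 2, size=20))
print("Monte Carlo p-value:", dispersion_p_value(counts, n_items=30))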


Subject(s)
Data Collection/statistics & numerical data , Data Interpretation, Statistical , Individuality , Mental Recall , Verbal Learning , Attention , Chi-Square Distribution , Cognition , Computer Simulation , Humans , Monte Carlo Method , Paired-Associate Learning , Reading , Recognition, Psychology , Schizophrenia/diagnosis , Schizophrenic Psychology , Serial Learning
13.
Psychol Assess ; 14(2): 184-201, 2002 Jun.
Article in English | MEDLINE | ID: mdl-12056081

ABSTRACT

This article demonstrates how multinomial processing tree models can be used as assessment tools to measure cognitive deficits in clinical populations. This is illustrated with a model developed by W. H. Batchelder and D. M. Riefer (1980) that separately measures storage and retrieval processes in memory. The validity of the model is tested in 2 experiments, which show that presentation rate affects the storage of items (Experiment 1) and part-list cuing hurts item retrieval (Experiment 2). Experiments 3 and 4 examine 2 clinical populations: schizophrenics and alcoholics with organic brain damage. The model reveals that each group exhibits deficits in storage and retrieval, with the retrieval deficits being stronger and occurring more consistently over trials. Also, the alcoholics with organic brain damage show no improvement in retrieval over trials, although their storage improves at the same rate as a control group.
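The storage-retrieval model referred to here is usually presented through the pair-clustering equations, in which c is the probability that a word pair is stored as a cluster, r the probability that a stored cluster is retrieved, and u the probability that a singleton word is stored and retrieved; the category probabilities below follow that commonly cited form (a sketch, not a transcription from the article):

P(E_1) = c\,r, \qquad P(E_2) = (1-c)\,u^2, \qquad P(E_3) = 2(1-c)\,u(1-u), \qquad P(E_4) = (1-c)(1-u)^2 + c(1-r),

where E_1 = both pair members recalled adjacently, E_2 = both recalled but not adjacently, E_3 = exactly one recalled, and E_4 = neither recalled.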


Subject(s)
Alcoholism/complications , Cognition Disorders/diagnosis , Psychometrics , Schizophrenia/complications , Adult , Analysis of Variance , Humans , Male , Memory/physiology , Mental Recall/physiology , Models, Psychological , Students/psychology