Results 1 - 6 of 6
1.
Can J Exp Psychol ; 75(1): 1-18, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33856823

ABSTRACT

In studies of false recognition, subjects not only endorse items that they have never seen, but they also make subjective judgments that they remember consciously experiencing them. This is a difficult problem for most models of recognition memory, as they propose that false memories should be based on familiarity, not recollection. We present a new computational model of recollection, based on the Recognition through Semantic Synchronization (RSS) model of Johns, Jones, & Mewhort (Cognitive Psychology, 2012, 65, 486), and fuzzy trace theory (Brainerd & Reyna, Current Directions in Psychological Science, 2002, 11, 164), that offers a solution to this problem. In addition to standard true and false recognition results, the model successfully extends to explain multiple studies on both true and false recollection. This work suggests that recollection does not have to be thought of as a separate process from recognition, but instead as one that is reliant upon different information sources.


Subject(s)
Mental Recall , Recognition, Psychology , Humans , Judgment , Memory , Semantics
2.
Cogn Sci ; 43(5): e12730, 2019 05.
Article in English | MEDLINE | ID: mdl-31087587

ABSTRACT

Distributional models of semantics learn word meanings from contextual co-occurrence patterns across a large sample of natural language. Early models, such as LSA and HAL (Landauer & Dumais, 1997; Lund & Burgess, 1996), counted co-occurrence events; later models, such as BEAGLE (Jones & Mewhort, 2007), replaced counting co-occurrences with vector accumulation. All of these models learned from positive information only: Words that occur together within a context become related to each other. A recent class of distributional models, referred to as neural embedding models, is based on a prediction process embedded in the functioning of a neural network: Such models predict the words that should surround a target word in a given context (e.g., word2vec; Mikolov, Sutskever, Chen, Corrado, & Dean, 2013). An error signal derived from the prediction is used to update each word's representation via backpropagation. However, predictive models also differ in their use of negative information alongside positive information to develop a semantic representation: The models sample negative examples, words that should not surround a target word in a given context. As before, an error signal derived from the prediction prompts an update of the word's representation, a procedure referred to as negative sampling. Standard applications of word2vec recommend sampling at least as many negative examples as positive ones. The use of negative information in developing a semantic representation is often thought to be intimately tied to word2vec's prediction process. We assess the role of negative information in developing a semantic representation and show that its power does not derive from the prediction mechanism. Finally, we show how negative information can be efficiently integrated into classic count-based semantic models using parameter-free analytical transformations.
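The abstract does not specify which parameter-free analytical transformation is meant. As a loose illustration only, the sketch below applies positive pointwise mutual information (PPMI), one standard parameter-free transform of a word-by-context count matrix; it penalizes co-occurrences that are no more frequent than chance and so injects a form of negative information into a count-based model. The matrix and values are made up for the example.

```python
import numpy as np

def ppmi(counts: np.ndarray) -> np.ndarray:
    """Positive pointwise mutual information for a word-by-context count matrix."""
    total = counts.sum()
    p_joint = counts / total                                # P(word, context)
    p_word = counts.sum(axis=1, keepdims=True) / total      # P(word)
    p_context = counts.sum(axis=0, keepdims=True) / total   # P(context)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_joint / (p_word @ p_context))
    pmi[~np.isfinite(pmi)] = 0.0            # zero counts contribute nothing
    return np.maximum(pmi, 0.0)             # keep only positive associations

# Toy word-by-context counts (rows: words, columns: contexts); purely illustrative.
counts = np.array([[10.0, 0.0, 2.0],
                   [0.0, 8.0, 1.0],
                   [3.0, 1.0, 6.0]])
print(ppmi(counts))
```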


Subject(s)
Language , Learning/physiology , Models, Theoretical , Humans , Machine Learning
3.
Psychol Rev ; 125(4): 592-605, 2018 07.
Article in English | MEDLINE | ID: mdl-29952624

ABSTRACT

The "law of practice"-a simple nonlinear function describing the relationship between mean response time (RT) and practice-has provided a practically and theoretically useful way of quantifying the speed-up that characterizes skill acquisition. Early work favored a power law, but this was shown to be an artifact of biases caused by averaging over participants who are individually better described by an exponential law. However, both power and exponential functions make the strong assumption that the speedup always proceeds at a steadily decreasing rate, even though there are sometimes clear exceptions. We propose a new law that can both accommodate an initial delay resulting in a slower-faster-slower rate of learning, with either power or exponential forms as limiting cases, and which can account for not only mean RT but also the effect of practice on the entire distribution of RT. We evaluate this proposal with data from a broad array of tasks using hierarchical Bayesian modeling, which pools data across participants while minimizing averaging artifacts, and using inference procedures that take into account differences in flexibility among laws. In a clear majority of paradigms our results supported a delayed exponential law. (PsycINFO Database Record


Subject(s)
Models, Psychological , Models, Statistical , Practice, Psychological , Reaction Time , Humans
4.
Cogn Psychol ; 65(4): 486-518, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22884279

ABSTRACT

We describe a computational model to explain a variety of results in both standard and false recognition. A key attribute of the model is that it uses plausible semantic representations for words, built through exposure to a linguistic corpus. A study list is encoded in the model as a gist trace, similar to the proposal of fuzzy trace theory (Brainerd & Reyna, 2002), but based on realistically structured semantic representations of the component words. The model uses a decision process based on the principles of neural synchronization and information accumulation. The decision process operates by synchronizing a probe with the gist trace of a study context, allowing information to accumulate about whether the word did or did not occur on the study list; the efficiency of synchronization determines recognition. We demonstrate that the model is capable of accounting for standard recognition results that are challenging for classic global memory models, and can also explain a wide variety of false recognition effects and make item-specific predictions for critical lures. The model demonstrates that both standard and false recognition results may be explained within a single formal framework by integrating realistic representation assumptions with a simple processing mechanism.
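As a loose illustration of the gist-trace idea only: the model itself synchronizes a probe with the gist of the study context using corpus-derived semantic representations and an accumulation process, whereas the sketch below substitutes random vectors, a hand-built related lure, and a simple similarity criterion. Every name and parameter here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 300

def unit(v):
    return v / np.linalg.norm(v)

# Stand-in random semantic vectors; the actual model derives them from a corpus.
words = ["bed", "rest", "tired", "dream", "pillow", "chair"]
lexicon = {w: unit(rng.standard_normal(DIM)) for w in words}
study_list = ["bed", "rest", "tired", "dream", "pillow"]

# Hand-built related lure: in a corpus-trained space "sleep" would already
# resemble the studied words; with random vectors that overlap must be faked.
lexicon["sleep"] = unit(sum(lexicon[w] for w in study_list)
                        + 2.0 * unit(rng.standard_normal(DIM)))

# Gist trace: a superposition of the studied words' semantic vectors.
gist = unit(sum(lexicon[w] for w in study_list))

def recognize(probe, criterion=0.30):
    """Call a probe 'old' when its similarity to the gist exceeds a criterion."""
    similarity = float(lexicon[probe] @ gist)
    return ("old" if similarity > criterion else "new"), round(similarity, 2)

for probe in ["bed", "sleep", "chair"]:  # studied item, related lure, unrelated lure
    print(probe, recognize(probe))
```

With this setup the studied item and the related lure both exceed the criterion (the latter being a false recognition), while the unrelated lure does not.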


Subject(s)
Models, Psychological , Recognition, Psychology , Repression, Psychology , Humans , Psychological Theory
5.
Psychon Bull Rev ; 18(6): 1126-32, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21932142

ABSTRACT

Serial-position curves for targets in short-term recognition memory show modest primacy and marked recency. To construct serial-position curves for lures, we tested orthographic neighbours of study words and assigned each lure to the position of its studied neighbour. The curve for lures was parallel to that for targets. In Experiment 2, only half the lures were neighbours of study words; the other half overlapped a study word by a single letter. The serial-position curve for neighbours of study items was now flatter than the curve for targets. The results are inconsistent with theories in which any factor that benefits targets must hinder lures. Instead, they demand a decision mechanism, such as that of dual-process theory, that assigns a role to item-specific information as well as to general familiarity information.


Subject(s)
Memory, Short-Term , Recognition, Psychology , Serial Learning , Humans , Reaction Time
6.
Psychol Rev ; 114(1): 1-37, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17227180

ABSTRACT

The authors present a computational model that builds a holographic lexicon representing both word meaning and word order from unsupervised experience with natural language. The model uses simple convolution and superposition mechanisms (cf. B. B. Murdock, 1982) to learn distributed holographic representations for words. The structure of the resulting lexicon can account for empirical data from classic experiments studying semantic typicality, categorization, priming, and semantic constraint in sentence completions. Furthermore, order information can be retrieved from the holographic representations, allowing the model to account for limited word transitions without the need for built-in transition rules. The model demonstrates that a broad range of psychological data can be accounted for directly from the structure of lexical representations learned in this way, without the need for complexity to be built into either the processing mechanisms or the representations. The holographic representations are an appropriate knowledge representation to be used by higher order models of language comprehension, relieving the complexity required at the higher level.
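A minimal sketch of the two mechanisms named in the abstract, circular convolution (binding) and superposition (addition), is given below. It omits the placeholder vector and directional bindings the full model uses when encoding order, so it should be read as an assumption-laden illustration rather than the model itself; the words and dimensionality are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 1024

def env_vector():
    """Random 'environmental' vector for a word (mean 0, variance 1/DIM)."""
    return rng.normal(0.0, 1.0 / np.sqrt(DIM), DIM)

def cconv(a, b):
    """Circular convolution via FFT: binds two vectors into one of the same size."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

dog, bites, man = env_vector(), env_vector(), env_vector()

# Context information: superpose the environmental vectors of co-occurring words.
context_dog = bites + man

# Order information: superpose convolutive bindings of the word with its neighbours
# (the full model binds a placeholder in the word's position, not the word itself).
order_dog = cconv(dog, bites) + cconv(cconv(dog, bites), man)

# A word's memory vector accumulates both kinds of information over experience.
memory_dog = context_dog + order_dog

print(cosine(context_dog, bites))            # context overlap with a co-occurring word
print(cosine(order_dog, cconv(dog, bites)))  # the order binding remains recoverable
```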


Subject(s)
Holography , Semantics , Vocabulary , Association , Humans , Learning , Models, Psychological , Psychological Theory