Results 1 - 20 of 48
1.
Proc Natl Acad Sci U S A ; 119(19): e2115128119, 2022 05 10.
Article in English | MEDLINE | ID: mdl-35512097

ABSTRACT

Prior studies of the neural representation of episodic memory in the human hippocampus have identified generic memory signals representing the categorical status of test items (novel vs. repeated), whereas other studies have identified item-specific memory signals representing individual test items. Here, we report that both kinds of memory signals can be detected in hippocampal neurons in the same experiment. We recorded single-unit activity from four brain regions (hippocampus, amygdala, anterior cingulate, and prefrontal cortex) of epilepsy patients as they completed a continuous recognition task. The generic signal was found in all four brain regions, whereas the item-specific memory signal was detected only in the hippocampus and reflected sparse coding. That is, for the item-specific signal, each hippocampal neuron responded strongly to a small fraction of repeated words, and each repeated word elicited strong responding in a small fraction of neurons. The neural code was sparse, pattern-separated, and limited to the hippocampus, consistent with longstanding computational models. We suggest that the item-specific episodic memory signal in the hippocampus is fundamental, whereas the more widespread generic memory signal is derivative and is likely used by different areas of the brain to perform memory-related functions that do not require item-specific information.
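
The sparse-coding claim in this abstract comes down to two complementary fractions: how many neurons respond strongly to a given repeated word, and how many repeated words drive a given neuron. The sketch below illustrates that calculation on synthetic firing rates; the data, the 3-SD "strong response" criterion, and the matrix sizes are assumptions for illustration, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic firing-rate matrix: rows = neurons, columns = repeated words.
# A small random subset of neuron-word pairs gets an extra strong response.
n_neurons, n_words = 200, 40
rates = rng.gamma(shape=2.0, scale=1.0, size=(n_neurons, n_words))   # baseline rates
strong = rng.random((n_neurons, n_words)) < 0.03                     # ~3% strong pairs
rates[strong] += rng.gamma(shape=8.0, scale=2.0, size=strong.sum())

# "Strong response" criterion: rate exceeds that neuron's mean by 3 SDs (an assumption).
z = (rates - rates.mean(axis=1, keepdims=True)) / rates.std(axis=1, keepdims=True)
responds = z > 3.0

# Fraction of neurons responding strongly to each word, and vice versa.
frac_neurons_per_word = responds.mean(axis=0)   # sparse population response per item
frac_words_per_neuron = responds.mean(axis=1)   # sparse selectivity per neuron

print(f"mean fraction of neurons active per word:   {frac_neurons_per_word.mean():.3f}")
print(f"mean fraction of words driving each neuron: {frac_words_per_neuron.mean():.3f}")
```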


Subject(s)
Epilepsy , Memory, Episodic , Hippocampus/physiology , Humans , Magnetic Resonance Imaging , Neurons/physiology
2.
Cognition ; 210: 104587, 2021 05.
Article in English | MEDLINE | ID: mdl-33508577

ABSTRACT

The label-feedback hypothesis (Lupyan, 2012) proposes that language modulates low- and high-level visual processing, such as priming visual object perception. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter response times (RTs) and higher accuracy. In the present investigation, we conceptually replicated and extended their study, using additional control conditions and recording eye movements during search. Our goal was to evaluate whether self-directed speech influences target locating (i.e., attentional guidance) or object perception (i.e., distractor rejection and target appreciation). In three experiments, during object search, people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names (all within-participants). Experiments 1 and 2 examined search RTs and accuracy: Speaking target names improved performance, without differences among the remaining conditions. Experiment 3 incorporated eye-tracking: Gaze fixation patterns suggested that language does not affect attentional guidance, but instead affects both distractor rejection and target appreciation. When search trials were conditionalized according to distractor fixations, language effects became more orderly: Search was fastest while people spoke target names, followed in linear order by the nonword, distractor-absent, and distractor-present conditions. We suggest that language affects template maintenance during search, allowing fluent differentiation of targets and distractors. Materials, data, and analyses can be retrieved here: https://osf.io/z9ex2/.
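
The conditionalized analysis described above (sorting search trials by whether a distractor was fixated before comparing the speech conditions) amounts to a two-factor summary of trial-level data. A minimal sketch with placeholder column names and simulated values, not the study's materials or code:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
conditions = ["target-name", "nonword", "absent-object-name", "present-object-name"]

# Placeholder trial-level data: speech condition, whether any distractor was
# fixated before the target, and manual response time in ms.
n = 2000
trials = pd.DataFrame({
    "condition": rng.choice(conditions, size=n),
    "distractor_fixated": rng.random(n) < 0.5,
    "rt_ms": rng.normal(900, 150, size=n),
})

# Conditionalize: compare speech conditions separately for trials with and
# without a distractor fixation, as in the abstract's eye-movement analysis.
summary = (trials
           .groupby(["distractor_fixated", "condition"])["rt_ms"]
           .agg(["mean", "count"])
           .round(1))
print(summary)
```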


Subject(s)
Attention , Eye Movements , Feedback , Humans , Reaction Time , Visual Perception
3.
Proc Natl Acad Sci U S A ; 117(24): 13767-13770, 2020 06 16.
Article in English | MEDLINE | ID: mdl-32482860

ABSTRACT

Encoding activity in the medial temporal lobe, presumably evoked by the presentation of stimuli (postonset activity), is known to predict subsequent memory. However, several independent lines of research suggest that preonset activity also affects subsequent memory. We investigated the role of preonset and postonset single-unit and multiunit activity recorded from epilepsy patients as they completed a continuous recognition task. In this task, words were presented in a continuous series and eventually began to repeat. For each word, the patient's task was to decide whether it was novel or repeated. We found that preonset spiking activity in the hippocampus (when the word was novel) predicted subsequent memory (when the word was later repeated). Postonset activity during encoding also predicted subsequent memory, but was simply a continuation of preonset activity. The predictive effect of preonset spiking activity was much stronger in the hippocampus than in three other brain regions (amygdala, anterior cingulate, and prefrontal cortex). In addition, preonset and postonset activity around the encoding of novel words did not predict memory performance for novel words (i.e., correctly classifying the word as novel), and preonset and postonset activity around the time of retrieval did not predict memory performance for repeated words (i.e., correctly classifying the word as repeated). Thus, the only predictive effect was between preonset activity (along with its postonset continuation) at the time of encoding and subsequent memory. Taken together, these findings indicate that preonset hippocampal activity does not reflect general arousal/attention but instead reflects what we term "attention to encoding."
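
A subsequent-memory contrast of the kind reported here reduces to comparing pre-onset spike counts at encoding for novel words that are later recognized versus later missed. The following sketch runs that comparison on synthetic spike counts; the window, the Welch t test, and the numbers are assumptions rather than the paper's pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic pre-onset spike counts (e.g., a 1-s window before word onset) for
# novel-word encoding trials, split by whether the word was later recognized.
later_remembered = rng.poisson(lam=4.5, size=120)   # slightly elevated pre-onset rate
later_forgotten = rng.poisson(lam=3.5, size=80)

# Compare pre-onset activity across subsequent-memory outcomes.
t, p = stats.ttest_ind(later_remembered, later_forgotten, equal_var=False)
print(f"remembered mean = {later_remembered.mean():.2f}, "
      f"forgotten mean = {later_forgotten.mean():.2f}, "
      f"Welch t = {t:.2f}, p = {p:.4f}")
```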


Subject(s)
Hippocampus/physiology , Memory , Adult , Female , Humans , Male , Prefrontal Cortex/physiology , Recognition, Psychology
4.
J Exp Psychol Hum Percept Perform ; 46(3): 274-291, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32077742

ABSTRACT

Research by Rajsic, Wilson, and Pratt (2015, 2017) suggests that people are biased to use a target-confirming strategy when performing simple visual search. In 3 experiments, we sought to determine whether another stubborn phenomenon in visual search, the low-prevalence effect (Wolfe, Horowitz, & Kenner, 2005), would modulate this confirmatory bias. We varied the reliability of the initial cue: For some people, targets usually occurred in the cued color (high prevalence). For others, targets rarely matched the cues (low prevalence). High cue-target prevalence exacerbated the confirmation bias, indexed via search response times (RTs) and eye-tracking measures. Surprisingly, given low cue-target prevalence, people remained biased to examine cue-colored letters, even though cue-colored targets were exceedingly rare. At the same time, people were more fluent at detecting the more common, cue-mismatching targets. The findings suggest that attention is guided to "confirm" the more available cued target template, but prevalence learning over time determines how fluently objects are perceptually appreciated. (PsycINFO Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Attention/physiology , Color Perception/physiology , Cues , Pattern Recognition, Visual/physiology , Space Perception/physiology , Adult , Eye Movement Measurements , Female , Humans , Male , Reaction Time/physiology , Young Adult
5.
J Eye Mov Res ; 12(6)2019 Jun 28.
Article in English | MEDLINE | ID: mdl-33828753

ABSTRACT

The methods of magicians provide powerful tools for enhancing the ecological validity of laboratory studies of attention. The current research borrows a technique from magic to explore the relationship between microsaccades and covert attention under near-natural viewing conditions. We monitored participants' eye movements as they viewed a magic trick where a coin placed beneath a napkin vanishes and reappears beneath another napkin. Many participants fail to see the coin move from one location to the other the first time around, thanks to the magician's misdirection. However, previous research was unable to distinguish whether or not participants were fooled based on their eye movements. Here, we set out to determine if microsaccades may provide a window into the efficacy of the magician's misdirection. In a multi-trial setting, participants monitored the location of the coin (which changed positions in half of the trials), while engaging in a delayed match-to-sample task at a different spatial location. Microsaccade onset times varied with task difficulty, and microsaccade directions indexed the locus of covert attention. Our combined results indicate that microsaccades may be a useful metric of covert attentional processes in applied and ecologically valid settings.
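
Microsaccades are typically detected with a velocity-threshold algorithm in the spirit of Engbert and Kliegl (2003). The sketch below is a simplified version of that general approach on synthetic gaze data; the sampling rate, smoothing window, threshold multiplier, and minimum duration are assumptions, and the study's own detection parameters may differ:

```python
import numpy as np

def detect_microsaccades(x, y, fs=500.0, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection (simplified sketch in the
    spirit of Engbert & Kliegl, 2003). x, y: gaze position in degrees."""
    dt = 1.0 / fs
    # Smoothed velocity via a 5-sample moving difference.
    vx = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) / (6 * dt)
    vy = (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) / (6 * dt)
    # Median-based velocity thresholds, one per dimension.
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    crit = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    # Keep runs of supra-threshold samples that last long enough.
    events, start = [], None
    for i, above in enumerate(crit):
        if above and start is None:
            start = i
        elif not above and start is not None:
            if i - start >= min_samples:
                events.append((start + 2, i + 2))  # +2: offset from differencing
            start = None
    return events

# Tiny demo on synthetic fixation noise with one injected small saccade.
rng = np.random.default_rng(3)
n = 1000
x = np.cumsum(rng.normal(0, 0.002, n))
y = np.cumsum(rng.normal(0, 0.002, n))
x[500:520] += np.linspace(0, 0.4, 20)   # ~0.4 deg rightward microsaccade
print(detect_microsaccades(x, y))
```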

6.
Atten Percept Psychophys ; 80(5): 1240-1249, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29520711

ABSTRACT

Recently, performance magic has become a source of insight into the processes underlying awareness. Magicians have highlighted a set of variables that can create moments of visual attentional suppression, which they call "off-beats." One of these variables is akin to the phenomenon psychologists know as attentional entrainment. The current experiments, inspired by performance magic, explore the extent to which entrainment can occur across sensory modalities. Across two experiments using a difficult dot probe detection task, we find that the mere presence of an auditory rhythm can bias when visual attention is deployed, speeding responses to stimuli appearing in phase with the rhythm. However, the extent of this cross-modal influence is moderated by factors such as the speed of the entrainers and whether their frequency is increasing or decreasing. In Experiment 1, entrainment occurred for rhythms presented at .67 Hz, but not at 1.5 Hz. In Experiment 2, entrainment occurred only for rhythms that were slowing from 1.5 Hz to .67 Hz, not speeding up. The results of these experiments challenge current models of temporal attention.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Magic/psychology , Visual Perception/physiology , Adult , Awareness/physiology , Female , Humans , Male , Photic Stimulation/methods , Problem Solving/physiology , Reaction Time/physiology , Young Adult
7.
Proc Natl Acad Sci U S A ; 115(5): 1093-1098, 2018 01 30.
Article in English | MEDLINE | ID: mdl-29339476

ABSTRACT

Neurocomputational models have long posited that episodic memories in the human hippocampus are represented by sparse, stimulus-specific neural codes. A concomitant proposal is that when sparse-distributed neural assemblies become active, they suppress the activity of competing neurons (neural sharpening). We investigated episodic memory coding in the hippocampus and amygdala by measuring single-neuron responses from 20 epilepsy patients (12 female) undergoing intracranial monitoring while they completed a continuous recognition memory task. In the left hippocampus, the distribution of single-neuron activity indicated that only a small fraction of neurons exhibited strong responding to a given repeated word and that each repeated word elicited strong responding in a different small fraction of neurons. This finding reflects sparse distributed coding. The remaining large fraction of neurons exhibited a concurrent reduction in firing rates relative to novel words. The observed pattern accords with longstanding predictions that have previously received scant support from single-cell recordings from human hippocampus.


Subject(s)
Epilepsy/physiopathology , Hippocampus/anatomy & histology , Hippocampus/physiology , Memory, Episodic , Action Potentials/physiology , Adult , Amygdala/physiology , Behavior , Brain Mapping , Computer Simulation , Female , Humans , Male , Middle Aged , Neurons/metabolism , Neurons/physiology , Neurosciences , Temporal Lobe/physiology , Young Adult
8.
Atten Percept Psychophys ; 79(6): 1695-1725, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28508116

ABSTRACT

Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands.


Subject(s)
Attention/physiology , Eye Movements , Multilingualism , Visual Perception/physiology , Adult , Female , Humans , Male , Reaction Time/physiology , Young Adult
9.
Atten Percept Psychophys ; 78(8): 2633-2654, 2016 11.
Article in English | MEDLINE | ID: mdl-27531018

ABSTRACT

During visual search, people are distracted by objects that visually resemble search targets; search is impaired when targets and distractors share overlapping features. In this study, we examined whether a nonvisual form of similarity, overlapping object names, can also affect search performance. In three experiments, people searched for images of real-world objects (e.g., a beetle) among items whose names either all shared the same phonological onset (/bi/), or were phonologically varied. Participants either searched for 1 or 3 potential targets per trial, with search targets designated either visually or verbally. We examined standard visual search (Experiments 1 and 3) and a self-paced serial search task wherein participants manually rejected each distractor (Experiment 2). We hypothesized that people would maintain visual templates when searching for single targets, but would rely more on object names when searching for multiple items and when targets were verbally cued. This reliance on target names would make performance susceptible to interference from similar-sounding distractors. Experiments 1 and 2 showed the predicted interference effect in conditions with high memory load and verbal cues. In Experiment 3, eye-movement results showed that phonological interference resulted from small increases in dwell time to all distractors. The results suggest that distractor names are implicitly activated during search, slowing attention disengagement when targets and distractors share similar names.


Subject(s)
Attention/physiology , Names , Visual Perception/physiology , Adult , Analysis of Variance , Association , Cues , Eye Movements/physiology , Female , Humans , Linguistics , Male , Memory/physiology , Neuropsychological Tests , Perceptual Masking/physiology , Photic Stimulation/methods , Reaction Time/physiology
10.
Psychon Bull Rev ; 23(4): 959-78, 2016 08.
Article in English | MEDLINE | ID: mdl-27282990

ABSTRACT

In recent years, there has been rapidly growing interest in embodied cognition, a multifaceted theoretical proposition that (1) cognitive processes are influenced by the body, (2) cognition exists in the service of action, (3) cognition is situated in the environment, and (4) cognition may occur without internal representations. Many proponents view embodied cognition as the next great paradigm shift for cognitive science. In this article, we critically examine the core ideas from embodied cognition, taking a "thought exercise" approach. We first note that the basic principles from embodiment theory are either unacceptably vague (e.g., the premise that perception is influenced by the body) or they offer nothing new (e.g., cognition evolved to optimize survival, emotions affect cognition, perception-action couplings are important). We next suggest that, for the vast majority of classic findings in cognitive science, embodied cognition offers no scientifically valuable insight. In most cases, the theory has no logical connections to the phenomena, other than some trivially true ideas. Beyond classic laboratory findings, embodiment theory is also unable to adequately address the basic experiences of cognitive life.


Subject(s)
Cognition , Emotions , Perception , Thinking , Humans , Problem Solving , Psychological Theory
11.
J Exp Psychol Gen ; 145(3): 383-7, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26881992

ABSTRACT

Hout, Goldinger, and Ferguson (2013) critically examined the spatial arrangement method (SpAM), originally proposed by Goldstone (1994), as a fast and efficient way to collect similarity data for multidimensional scaling. We found that SpAM produced high-quality data, making it an intuitive and user-friendly alternative to the classic "pairwise" method. Verheyen, Voorspoels, Vanpaemel, and Storms (2016) reexamined our data and raised 3 caveats regarding SpAM. In this reply, we suggest that Verheyen et al. mischaracterized our reported data as representing the entire range of potential SpAM data. SpAM results might appear more nuanced with modified instructions or stimuli. By contrast, the pairwise method is inherently limited because of its laborious, serial nature. We also demonstrate that, when the methods are equated in terms of required data-collection time, SpAM is clearly superior in terms of predicting classification data. We agree that caution is required when adopting a new method but suggest that fair assessment of SpAM requires a richer data set.
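
SpAM yields an entire dissimilarity matrix from a single arrangement trial: the Euclidean distance between any two placed items serves as their dissimilarity, which is what makes the method so much faster than serial pairwise ratings. A minimal sketch with made-up screen coordinates:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical SpAM trial: (x, y) screen coordinates where one participant
# placed eight stimuli; closer placements mean "more similar".
coords = np.array([
    [120,  80], [135,  95], [400, 410], [420, 390],
    [700, 120], [690, 140], [300, 600], [150, 500],
], dtype=float)

# Pairwise Euclidean distances give the full dissimilarity matrix in one shot,
# which is the efficiency advantage over the item-by-item pairwise method.
dissimilarity = squareform(pdist(coords, metric="euclidean"))
print(np.round(dissimilarity, 1))
```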


Subject(s)
Data Collection/methods , Judgment , Pattern Recognition, Visual , Space Perception , Humans
12.
J Exp Psychol Gen ; 145(3): 314-37, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26726911

ABSTRACT

In spoken word perception, voice specificity effects are well-documented: When people hear repeated words in some task, performance is generally better when repeated items are presented in their originally heard voices, relative to changed voices. A key theoretical question about voice specificity effects concerns their time-course: Some studies suggest that episodic traces exert their influence late in lexical processing (the time-course hypothesis; McLennan & Luce, 2005), whereas others suggest that episodic traces influence immediate, online processing. We report 2 eye-tracking studies investigating the time-course of voice-specific priming within and across cognitive tasks. In Experiment 1, participants performed modified lexical decision or semantic classification to words spoken by 4 speakers. The tasks required participants to click a red "x" or a blue "+" located randomly within separate visual half-fields, necessitating trial-by-trial visual search with consistent half-field response mapping. After a break, participants completed a second block with new and repeated items, half spoken in changed voices. Voice effects were robust very early, appearing in saccade initiation times. Experiment 2 replicated this pattern while changing tasks across blocks, ruling out a response priming account. In the General Discussion, we address the time-course hypothesis, focusing on the challenge it presents for empirical disconfirmation, and highlighting the broad importance of indexical effects, beyond studies of priming.


Subject(s)
Eye Movements/physiology , Psychomotor Performance/physiology , Repetition Priming/physiology , Voice/physiology , Adult , Female , Humans , Male , Speech Perception/physiology , Young Adult
13.
Atten Percept Psychophys ; 78(1): 3-20, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26494381

ABSTRACT

Visual search is one of the most widely studied topics in vision science, both as an independent topic of interest, and as a tool for studying attention and visual cognition. A wide literature exists that seeks to understand how people find things under varying conditions of difficulty and complexity, and in situations ranging from the mundane (e.g., looking for one's keys) to those with significant societal importance (e.g., baggage or medical screening). A primary determinant of the ease and probability of success during search is the set of similarity relationships that exist in the search environment, such as the similarity between the background and the target, or the likeness of the non-targets to one another. A sense of similarity is often intuitive, but it is seldom quantified directly. This presents a problem in that similarity relationships are imprecisely specified, limiting the researcher's capacity to adequately examine their influence. In this article, we present a novel approach to overcoming this problem that combines multi-dimensional scaling (MDS) analyses with behavioral and eye-tracking measurements. We propose a method whereby MDS can be repurposed to successfully quantify the similarity of experimental stimuli, thereby opening up theoretical questions in visual search and attention that cannot currently be addressed. These quantifications, in conjunction with behavioral and oculomotor measures, allow for critical observations about how similarity affects performance, information selection, and information processing. We provide a demonstration and tutorial of the approach, identify documented examples of its use, discuss how complementary computer vision methods could also be adopted, and close with a discussion of potential avenues for future application of this technique.
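
The core move advocated here (converting behaviorally obtained dissimilarities into coordinates whose distances can be entered into search analyses) can be sketched with an off-the-shelf MDS fit. The random dissimilarity matrix and two-dimensional solution below are placeholders for real rating data, not the article's tutorial code:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(4)

# Placeholder dissimilarity matrix for 12 stimuli (symmetric, zero diagonal).
n = 12
d = rng.random((n, n))
d = (d + d.T) / 2
np.fill_diagonal(d, 0.0)

# Metric MDS on precomputed dissimilarities; stress_ indexes goodness of fit.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)

# Distances in the recovered space can now serve as quantified target-distractor
# similarity in a search experiment.
print("stress:", round(mds.stress_, 3))
print(coords[:3])
```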


Subject(s)
Eye Movements/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Animals , Attention/physiology , Humans , Multivariate Analysis
14.
Psychon Bull Rev ; 22(6): 1739-45, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26306881

ABSTRACT

In printed-word perception, the orthographic neighborhood effect (i.e., faster recognition of words with more neighbors) has considerable theoretical importance, because it implicates great interactivity in lexical access. Mulatti, Reynolds, and Besner (2006, Journal of Experimental Psychology: Human Perception and Performance, 32, 799-810) questioned the validity of orthographic neighborhood effects, suggesting that they reflect a confound with phonological neighborhood density. They reported that, when phonological density is controlled, orthographic neighborhood effects vanish. Conversely, phonological neighborhood effects were still evident even when controlling for orthographic neighborhood density. The present study was a replication and extension of Mulatti et al. (2006), with words presented in four different formats (computer-generated print and cursive, and handwritten print and cursive). The results from Mulatti et al. (2006) were replicated with computer-generated stimuli, but were reversed with natural stimuli. These results suggest that, when ambiguity is introduced at the level of individual letters, top-down influences from lexical neighbors are increased.
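
Orthographic neighborhood density (Coltheart's N) is the count of same-length words that differ from a target by exactly one letter. A toy implementation over a small illustrative lexicon (not the stimulus-matching procedure used in the study):

```python
def coltheart_n(word, lexicon):
    """Count orthographic neighbors: same-length words differing by exactly
    one letter (Coltheart's N)."""
    return sum(
        1 for w in lexicon
        if w != word and len(w) == len(word)
        and sum(a != b for a, b in zip(w, word)) == 1
    )

# Toy lexicon for illustration only.
lexicon = {"cat", "cot", "cap", "can", "bat", "rat", "car", "cut", "dog"}
print(coltheart_n("cat", lexicon))   # -> 7 (cot, cap, can, bat, rat, car, cut)
```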


Subject(s)
Linguistics , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Reading , Adult , Humans , Young Adult
15.
J Exp Psychol Hum Percept Perform ; 41(4): 1007-20, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25938253

ABSTRACT

When engaged in a visual search for two targets, participants are slower and less accurate in their responses, relative to their performance when searching for a single target. Previous work on this "dual-target cost" has primarily focused on the breakdown of attentional guidance when looking for two items. Here, we investigated how object identification processes are affected by dual-target search. Our goal was to chart the speed at which distractors could be rejected, to assess whether dual-target search impairs object identification. To do so, we examined the capacity coefficient, which measures the speed at which decisions can be made, and provides a baseline of parallel performance against which to compare. We found that participants could search at or above this baseline, suggesting that dual-target search does not impair object identification abilities. We also found substantial differences in performance when participants were asked to search for simple versus complex images. Somewhat paradoxically, participants were able to reject complex images more rapidly than simple images. We suggest that this reflects the greater number of features that can be used to identify complex images, a finding that has important consequences for understanding object identification in visual search more generally.
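
The capacity coefficient in its standard OR form (Townsend & Nozawa, 1995) compares the integrated hazard function for the dual-target condition to the sum of the single-target hazards, with C(t) = 1 as the unlimited-capacity parallel baseline. The sketch below estimates it from RT samples; the synthetic RTs and evaluation grid are assumptions, and the paper's exact formulation and baseline model may differ:

```python
import numpy as np

def cumulative_hazard(rts, t_grid):
    """Empirical integrated hazard H(t) = -ln(1 - F(t)) from RT samples."""
    rts = np.sort(np.asarray(rts, dtype=float))
    F = np.searchsorted(rts, t_grid, side="right") / len(rts)
    F = np.clip(F, 0.0, 1.0 - 1e-6)        # avoid log(0) at the upper tail
    return -np.log(1.0 - F)

def capacity_or(rt_dual, rt_single_a, rt_single_b, t_grid):
    """OR capacity coefficient C(t) = H_dual(t) / (H_a(t) + H_b(t));
    C(t) = 1 is the unlimited-capacity parallel baseline."""
    h_dual = cumulative_hazard(rt_dual, t_grid)
    h_sum = cumulative_hazard(rt_single_a, t_grid) + cumulative_hazard(rt_single_b, t_grid)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(h_sum > 0, h_dual / h_sum, np.nan)

# Synthetic correct-RT samples (ms) for single- and dual-target search.
rng = np.random.default_rng(5)
rt_a, rt_b = rng.gamma(6, 90, 300), rng.gamma(6, 95, 300)
rt_dual = rng.gamma(6, 100, 300)

t_grid = np.linspace(300, 1200, 10)
print(np.round(capacity_or(rt_dual, rt_a, rt_b, t_grid), 2))
```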


Subject(s)
Attention/physiology , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Adult , Humans , Young Adult
16.
J Exp Psychol Hum Percept Perform ; 41(4): 977-94, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25915073

ABSTRACT

In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%-34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled.


Subject(s)
Eye Movements/physiology , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Adult , Humans , Young Adult
17.
J Neurosci ; 35(13): 5180-6, 2015 Apr 01.
Article in English | MEDLINE | ID: mdl-25834044

ABSTRACT

It remains unclear how single neurons in the human brain represent whole-object visual stimuli. While recordings in both human and nonhuman primates have shown distributed representations of objects (many neurons encoding multiple objects), recordings of single neurons in the human medial temporal lobe, taken as subjects discriminated objects during multiple presentations, have shown gnostic representations (single neurons encoding one object). Because some studies suggest that repeated viewing may enhance neural selectivity for objects, we had human subjects discriminate objects in a single, more naturalistic viewing session. We found that, across 432 well isolated neurons recorded in the hippocampus and amygdala, the average fraction of objects encoded was 26%. We also found that more neurons encoded several objects versus only one object in the hippocampus (28 vs 18%, p < 0.001) and in the amygdala (30 vs 19%, p < 0.001). Thus, during realistic viewing experiences, typical neurons in the human medial temporal lobe code for a considerable range of objects, across multiple semantic categories.


Subject(s)
Amygdala/cytology , Amygdala/physiology , Hippocampus/cytology , Hippocampus/physiology , Neurons/physiology , Visual Perception/physiology , Action Potentials/physiology , Adult , Female , Humans , Male , Middle Aged , Models, Neurological , Photic Stimulation , Young Adult
18.
Atten Percept Psychophys ; 77(1): 128-49, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25214306

ABSTRACT

When people look for things in the environment, they use target templates (mental representations of the objects they are attempting to locate) to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous, unhelpful features to the template. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.


Subject(s)
Decision Making/physiology , Recognition, Psychology/physiology , Eye Movements/physiology , Humans , Perceptual Masking/physiology , Reaction Time , Spatial Memory/physiology
19.
PLoS One ; 9(11): e112644, 2014.
Article in English | MEDLINE | ID: mdl-25390369

ABSTRACT

Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16-17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher who wishes to control the similarity of experimental stimuli according to an objective quantification of "sameness."
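
Prototypicality indexed by proximity to other items in the MDS space is a centrality score: the exemplar with the smallest mean distance to the rest of its category is the most prototypical. A minimal sketch on made-up two-dimensional coordinates (the database itself reports solutions of up to five dimensions):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(6)

# Made-up 2-D MDS coordinates for 16 exemplars of one category.
coords = rng.normal(size=(16, 2))

# Prototypicality as centrality: mean distance to all other exemplars
# (lower = closer to the rest of the category = more prototypical).
dist = squareform(pdist(coords))
mean_dist = dist.sum(axis=1) / (len(coords) - 1)
ranking = np.argsort(mean_dist)
print("most prototypical exemplar index:", ranking[0])
print("least prototypical exemplar index:", ranking[-1])
```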


Subject(s)
Databases, Factual , Memory/physiology , Neuropsychological Tests , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Concept Formation/physiology , Humans
20.
Proc Natl Acad Sci U S A ; 111(26): 9621-6, 2014 Jul 01.
Article in English | MEDLINE | ID: mdl-24979802

ABSTRACT

Neurocomputational models hold that sparse distributed coding is the most efficient way for hippocampal neurons to encode episodic memories rapidly. We investigated the representation of episodic memory in hippocampal neurons of nine epilepsy patients undergoing intracranial monitoring as they discriminated between recently studied words (targets) and new words (foils) on a recognition test. On average, single units and multiunits exhibited higher spike counts in response to targets relative to foils, and the size of this effect correlated with behavioral performance. Further analyses of the spike-count distributions revealed that (i) a small percentage of recorded neurons responded to any one target and (ii) a small percentage of targets elicited a strong response in any one neuron. These findings are consistent with the idea that in the human hippocampus episodic memory is supported by a sparse distributed neural code.
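
The headline result (higher spike counts for targets than for foils, with the size of that difference tracking recognition accuracy) can be illustrated as a per-unit contrast followed by an across-patient correlation. The synthetic numbers below are placeholders, not the recorded data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic per-patient data: mean spike counts to targets and foils for each
# recorded unit, plus recognition accuracy for that patient.
n_patients = 9
old_new_effect, accuracy = [], []
for _ in range(n_patients):
    n_units = rng.integers(20, 60)
    target_counts = rng.poisson(5.0, size=n_units) + rng.poisson(1.0, size=n_units)
    foil_counts = rng.poisson(5.0, size=n_units)
    effect = (target_counts - foil_counts).mean()      # per-patient old/new effect
    old_new_effect.append(effect)
    accuracy.append(np.clip(0.7 + 0.05 * effect + rng.normal(0, 0.05), 0, 1))

# Does the size of the target > foil effect track behavioral performance?
r, p = stats.pearsonr(old_new_effect, accuracy)
print(f"correlation between spike-count effect and accuracy: r = {r:.2f}, p = {p:.3f}")
```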


Subject(s)
Epilepsy/physiopathology , Hippocampus/physiology , Memory, Episodic , Models, Neurological , Humans , Neurophysiological Monitoring , Neuropsychological Tests