Results 1 - 20 of 41
1.
Neurobiol Lang (Camb) ; 4(3): 420-434, 2023.
Article in English | MEDLINE | ID: mdl-37588129

ABSTRACT

The existence of a neural representation for whole words (i.e., a lexicon) is a common feature of many models of speech processing. Prior studies have provided evidence for a visual lexicon containing representations of whole written words in an area of the ventral visual stream known as the visual word form area. Similar experimental support for an auditory lexicon containing representations of spoken words has yet to be shown. Using functional magnetic resonance imaging rapid adaptation techniques, we provide evidence for an auditory lexicon in the auditory word form area in the human left anterior superior temporal gyrus that contains representations highly selective for individual spoken words. Furthermore, we show that familiarization with novel auditory words sharpens the selectivity of their representations in the auditory word form area. These findings reveal strong parallels in how the brain represents written and spoken words, showing convergent processing strategies across modalities in the visual and auditory ventral streams.

2.
J Neurosci ; 43(27): 4984-4996, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37197979

ABSTRACT

It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefiting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain. SIGNIFICANCE STATEMENT: It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks.
This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals "to see" by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.


Subject(s)
Auditory Cortex , Speech Perception , Humans , Female , Speech , Auditory Perception , Brain , Temporal Lobe , Magnetic Resonance Imaging/methods , Acoustic Stimulation/methods
3.
Nat Hum Behav ; 4(11): 1100-1101, 2020 11.
Article in English | MEDLINE | ID: mdl-33046863

Subject(s)
Judgment , Humans
4.
Neuroimage ; 221: 117148, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32659350

ABSTRACT

A number of fMRI studies have provided support for the existence of multiple concept representations in areas of the brain such as the anterior temporal lobe (ATL) and inferior parietal lobule (IPL). However, the interaction among different conceptual representations remains unclear. To better understand the dynamics of how the brain extracts meaning from sensory stimuli, we conducted a human high-density electroencephalography (EEG) study in which we first trained participants to associate pseudowords with various animal and tool concepts. After training, multivariate pattern classification of EEG signals in sensor and source space revealed the representation of animal and tool concepts in the left ATL, and of tool concepts in the left IPL, within 250 ms. Finally, we used Granger causality analyses to show that orthography-selective sensors directly modulated activity in the parietal tool-selective cluster. Together, our results provide evidence for distinct but parallel "perceptual-to-conceptual" feedforward hierarchies in the brain.
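The time-resolved decoding logic behind such analyses can be sketched in a few lines. This is a generic illustration on synthetic data using scikit-learn, not the authors' pipeline: a classifier is fit on the sensor pattern at each time point separately, and decoding accuracy rises above chance only once class information appears in the signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "EEG": 40 trials x 16 sensors x 50 time points, two classes
# (say, animal vs. tool pseudowords). A class-specific pattern is
# injected on half the sensors from time point 20 onward.
n_trials, n_sensors, n_times = 40, 16, 50
labels = np.repeat([0, 1], n_trials // 2)
data = rng.normal(size=(n_trials, n_sensors, n_times))
data[labels == 1, :8, 20:] += 1.5

# Decode the class from the sensor pattern at each time point separately.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    data[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])

early = accuracy[:20].mean()  # before the signal appears: near chance
late = accuracy[20:].mean()   # after onset: well above chance
```

The onset latency of above-chance decoding (here, time point 20 by construction) is what licenses statements like "representation within 250 ms".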


Subject(s)
Association Learning/physiology , Brain Mapping/methods , Concept Formation/physiology , Electroencephalography/methods , Parietal Lobe/physiology , Pattern Recognition, Visual/physiology , Temporal Lobe/physiology , Adult , Female , Humans , Male , Young Adult
5.
J Eye Mov Res ; 13(5)2020 Jun 28.
Article in English | MEDLINE | ID: mdl-33828809

ABSTRACT

Here, we provide an analysis of the microsaccades that occurred during continuous visual search and targeting of small faces that we pasted either into cluttered background photos or into a simple gray background. Subjects continuously used their eyes to target singular 3-degree upright or inverted faces in changing scenes. As soon as the participant's gaze reached the target face, a new face was displayed in a different and random location. Regardless of the experimental context (e.g. background scene, no background scene), or target eccentricity (from 4 to 20 degrees of visual angle), we found that the microsaccade rate dropped to near zero levels within only 12 milliseconds after stimulus onset. There were almost never any microsaccades after stimulus onset and before the first saccade to the face. One subject completed 118 consecutive trials without a single microsaccade. However, in about 20% of the trials, there was a single microsaccade that occurred almost immediately after the preceding saccade's offset. These microsaccades were task oriented because their facial landmark targeting distributions matched those of saccades within both the upright and inverted face conditions. Our findings show that a single feedforward pass through the visual hierarchy for each stimulus is likely all that is needed to effectuate prolonged continuous visual search. In addition, we provide evidence that microsaccades can serve perceptual functions like correcting saccades or effectuating task-oriented goals during continuous visual search.

6.
Front Comput Neurosci ; 14: 586671, 2020.
Article in English | MEDLINE | ID: mdl-33510629

ABSTRACT

Humans quickly and accurately learn new visual concepts from sparse data, sometimes just a single example. The impressive performance of artificial neural networks, which hierarchically pool afferents across scales and positions, suggests that the hierarchical organization of the human visual system is critical to its accuracy. These approaches, however, require orders of magnitude more examples than human learners. We used a benchmark deep learning model to show that the hierarchy can also be leveraged to vastly improve the speed of learning. We specifically show how previously learned but broadly tuned conceptual representations can be used to learn visual concepts from as few as two positive examples; reusing visual representations from earlier in the visual hierarchy, as in prior approaches, requires significantly more examples to perform comparably. These results suggest techniques for learning even more efficiently and provide a biologically plausible way to learn new visual concepts from few examples.
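The core idea, reusing high-level representations to learn a new class from only two positive examples, can be illustrated with a toy nearest-prototype classifier. The "conceptual-layer" features below are purely synthetic stand-ins for a pretrained hierarchy; this is not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "conceptual-layer" features: three known classes, each a random
# prototype in a 64-d feature space; exemplars are prototype + noise.
dim, n_known = 64, 3
known_protos = rng.normal(size=(n_known, dim))

# A novel class is learned from just two positive examples: its
# prototype estimate is simply the mean of the two feature vectors.
true_new = rng.normal(size=dim)
support = true_new + 0.3 * rng.normal(size=(2, dim))
estimated = support.mean(axis=0)

# Classify held-out queries of the novel class by nearest prototype.
all_protos = np.vstack([known_protos, estimated])
queries = true_new + 0.3 * rng.normal(size=(20, dim))
dists = np.linalg.norm(queries[:, None, :] - all_protos[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = float((pred == n_known).mean())
```

With well-separated high-level features, two examples suffice because the class prototype can be estimated with little data; raw low-level features would be far noisier, which is the contrast the abstract draws.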

7.
J Vis ; 19(12): 20, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31644785

ABSTRACT

The human visual system can detect objects in streams of rapidly presented images at presentation rates of 70 Hz and beyond. Yet, target detection is often impaired when multiple targets are presented in quick temporal succession. Here, we provide evidence for the hypothesis that such impairments can arise from interference between "top-down" feedback signals and the initial "bottom-up" feedforward processing of the second target. Although it has recently been shown that feedback signals are important for visual detection, this "crash" in neural processing affected both the detection and categorization of both targets. Moreover, experimentally reducing such interference between the feedforward and feedback portions of the two targets substantially improved participants' performance. The results indicate a key role of top-down re-entrant feedback signals and show how their interference with a successive target's feedforward process determines human behavior. These results are not just relevant for our understanding of how, when, and where capacity limits in the brain's processing abilities can arise, but also have ramifications spanning topics from consciousness to learning and attention.


Subject(s)
Attention , Brain/physiology , Feedback , Visual Cortex/physiology , Visual Perception , Adolescent , Adult , Behavior , Cognition , Electrodes , Electroencephalography , Female , Humans , Learning , Male , Reproducibility of Results , Young Adult
8.
Hum Brain Mapp ; 40(10): 3078-3090, 2019 07.
Article in English | MEDLINE | ID: mdl-30920706

ABSTRACT

The grouping of sensory stimuli into categories is fundamental to cognition. Previous research in the visual and auditory systems supports a two-stage processing hierarchy that underlies perceptual categorization: (a) a "bottom-up" perceptual stage in sensory cortices where neurons show selectivity for stimulus features and (b) a "top-down" second stage in higher level cortical areas that categorizes the stimulus-selective input from the first stage. In order to test the hypothesis that the two-stage model applies to the somatosensory system, 14 human participants were trained to categorize vibrotactile stimuli presented to their right forearm. Then, during an fMRI scan, participants actively categorized the stimuli. Representational similarity analysis revealed stimulus selectivity in areas including the left precentral and postcentral gyri, the supramarginal gyrus, and the posterior middle temporal gyrus. Crucially, we identified a single category-selective region in the left ventral precentral gyrus. Furthermore, an estimation of directed functional connectivity delivered evidence for robust top-down connectivity from the second to first stage. These results support the validity of the two-stage model of perceptual categorization for the somatosensory system, suggesting common computational principles and a unified theory of perceptual categorization across the visual, auditory, and somatosensory systems.
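The representational similarity analysis used to find stimulus- and category-selective regions can be sketched as follows. This is a minimal synthetic-data sketch of the core computation (comparing a region's representational dissimilarity matrix against a model RDM), not the study's full pipeline; all numbers are invented.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Synthetic voxel patterns: 8 stimuli x 100 voxels, falling into two
# hypothetical categories of four; each category adds a shared pattern.
patterns = rng.normal(size=(8, 100))
shared = rng.normal(size=(2, 100))
patterns[:4] += 0.8 * shared[0]
patterns[4:] += 0.8 * shared[1]

# Region RDM: pairwise correlation distance between stimulus patterns
# (condensed form: 8 * 7 / 2 = 28 pairwise entries).
region_rdm = pdist(patterns, metric="correlation")

# Model RDM for category selectivity: 0 within category, 1 between.
category = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
model_rdm = pdist(category[:, None], metric="cityblock")

# RSA statistic: rank correlation between the region and model RDMs.
rho, _ = spearmanr(region_rdm, model_rdm)
```

A high rank correlation between the two RDMs is what identifies a region as category selective; a model RDM built from stimulus features instead would probe stimulus selectivity.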


Subject(s)
Brain/physiology , Models, Neurological , Neural Pathways/physiology , Touch Perception/physiology , Adolescent , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Vibration , Young Adult
9.
Brain Lang ; 191: 1-8, 2019 04.
Article in English | MEDLINE | ID: mdl-30721792

ABSTRACT

Typical readers rely on two brain pathways for word processing in the left hemisphere: temporo-parietal cortex (TPC) and inferior frontal cortex (IFC), thought to subserve phonological decoding, and occipito-temporal cortex (OTC), including the "visual word form area" (VWFA), thought to subserve orthographic processing. How these regions are affected in developmental dyslexia has been a topic of intense research. We employed fMRI rapid adaptation (fMRI-RA) in adults with low reading skills to examine, in independently defined functional regions of interest (ROIs), phonological selectivity to written words in left TPC and IFC, and orthographic selectivity to written words in OTC. Consistent with the phonological deficit hypothesis of dyslexia, we found responsivity but not selectivity to phonology, as accessed by written words, in the posterior superior temporal gyrus (pSTG) of the TPC. On the other hand, we found orthographic selectivity in the VWFA of the OTC. We also found selectivity to orthographic and not phonological processing in the IFG, a finding previously reported for typical readers. Together, our results demonstrate that in adults with poor reading skills, selectivity to phonology is compromised in pSTG, while selectivity to orthography in the VWFA remains unaffected at this level of processing.


Subject(s)
Brain Mapping/methods , Brain/diagnostic imaging , Dyslexia/diagnostic imaging , Magnetic Resonance Imaging/methods , Adult , Female , Humans , Male , Reading , Writing
10.
Front Hum Neurosci ; 12: 374, 2018.
Article in English | MEDLINE | ID: mdl-30333737

ABSTRACT

While several studies have shown human subjects' impressive ability to detect faces in individual images in paced settings (Crouzet et al., 2010), we here report the details of an eye movement dataset in which subjects rapidly and continuously targeted single faces embedded in different scenes at rates approaching six face targets each second (including blinks and eye movement times). In this paper, we describe details of a large publicly available eye movement dataset of this new psychophysical paradigm (Martin et al., 2018). The paradigm produced high-resolution eye-tracking data from an experiment on continuous upright and inverted 3° face detection in both background and no-background conditions. The new "Zapping" paradigm allowed large numbers of trials to be completed in a short amount of time. For example, our three studies encompassed a total of 288,000 trials done in 72 separate experiments, and yet only took approximately 40 hours of recording for the three experimental cohorts. Each subject did 4000 trials split into eight blocks of 500 consecutive trials in one of the four different experimental conditions: {upright, inverted} × {scene, no scene}. For each condition, there are several covariates of interest, including: temporal eye positions sampled at 1250 Hz, saccades, saccade reaction times, microsaccades, pupil dynamics, target luminances, and global contrasts.
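Extracting saccade and microsaccade events from such 1250 Hz position traces typically starts from eye velocity. The sketch below is a simplified, fixed-threshold variant of the Engbert and Kliegl velocity-threshold approach (the published method uses an adaptive, noise-scaled threshold); the trace and the injected 0.5° movement are synthetic.

```python
import numpy as np

def detect_saccades(x, y, fs=1250.0, vel_thresh=30.0, min_samples=4):
    """Flag saccadic events with a fixed velocity threshold (deg/s).

    Simplified variant of the Engbert & Kliegl velocity-threshold
    method; the published algorithm uses an adaptive, noise-scaled
    threshold rather than the fixed one assumed here."""
    vx = np.gradient(x) * fs            # horizontal velocity, deg/s
    vy = np.gradient(y) * fs            # vertical velocity, deg/s
    fast = np.hypot(vx, vy) > vel_thresh
    events, start = [], None
    for i, f in enumerate(fast):        # collect runs of fast samples
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    if start is not None and len(fast) - start >= min_samples:
        events.append((start, len(fast)))
    return events

# Synthetic 1250 Hz fixation trace with one 0.5-degree rightward
# microsaccade injected at sample 200 (10-sample ramp, ~70 deg/s).
rng = np.random.default_rng(2)
x = rng.normal(scale=0.01, size=500)
y = rng.normal(scale=0.01, size=500)
x[200:210] += np.linspace(0.0, 0.5, 10)
x[210:] += 0.5                          # gaze stays at the new position
events = detect_saccades(x, y)
```

The `min_samples` run-length criterion is what keeps single noisy velocity spikes from being counted as microsaccades.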

11.
Sci Rep ; 8(1): 12482, 2018 08 20.
Article in English | MEDLINE | ID: mdl-30127454

ABSTRACT

A number of studies have shown human subjects' impressive ability to detect faces in individual images, with saccade reaction times starting as fast as 100 ms after stimulus onset. Here, we report evidence that humans can rapidly and continuously saccade towards single faces embedded in different scenes at rates approaching 6 faces/scenes each second (including blinks and eye movement times). These observations are impressive, given that humans usually make no more than 2 to 5 saccades per second when searching a single scene with eye movements. Surprisingly, attempts to hide the faces by blending them into a large background scene had little effect on targeting rates, saccade reaction times, or targeting accuracy. Upright faces were found more quickly and more accurately than inverted faces; both with and without a cluttered background scene, and over a large range of eccentricities (4°-16°). The fastest subject in our study made continuous saccades to 500 small 3° upright faces at 4° eccentricities in only 96 seconds. The maximum face targeting rate ever achieved by any subject during any sequence of 7 faces during Experiment 3 for the no scene and upright face condition was 6.5 faces targeted/second. Our data provide evidence that the human visual system includes an ultra-rapid and continuous object localization system for upright faces. Furthermore, these observations indicate that continuous paradigms such as the one we have used can push humans to make remarkably fast reaction times that impose strong constraints and challenges on models of how, where, and when visual processing occurs in the human brain.


Subject(s)
Eye Movements/physiology , Face/physiology , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Adult , Brain/physiology , Female , Humans , Male , Saccades/physiology , Young Adult
12.
Neuron ; 98(2): 405-416.e4, 2018 04 18.
Article in English | MEDLINE | ID: mdl-29673483

ABSTRACT

Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain.


Subject(s)
Acoustic Stimulation/classification , Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Prefrontal Cortex/physiology , Vocalization, Animal/physiology , Adolescent , Adult , Animals , Auditory Cortex/diagnostic imaging , Female , Haplorhini , Humans , Magnetic Resonance Imaging/methods , Male , Prefrontal Cortex/diagnostic imaging , Young Adult
13.
Lang Cogn Neurosci ; 32(3): 286-294, 2017.
Article in English | MEDLINE | ID: mdl-29201934

ABSTRACT

Our recent work has shown that the Visual Word Form Area (VWFA) in left occipitotemporal cortex contains an orthographic lexicon based on neuronal representations highly selective for individual written real words (RW) and that learning novel words selectively increases neural specificity in the VWFA. But how quickly does this change in neural tuning occur, and how much training is required for new words to be codified in the VWFA? Here we present evidence that plasticity in the VWFA from broad to tight tuning can be obtained in a short time span, with no explicit training, and with comparatively few exposures, further strengthening the case for a highly plastic visual lexicon in the VWFA and for localist representations in the visual processing hierarchy.

14.
Neural Netw ; 89: 31-38, 2017 May.
Article in English | MEDLINE | ID: mdl-28324757

ABSTRACT

The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN, namely that it should be possible to interface different CCN models in a plug-and-play fashion, to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning.


Subject(s)
Cognitive Neuroscience/methods , Computer Simulation , Learning , Visual Cortex , Visual Perception , Basal Ganglia/physiology , Cognitive Neuroscience/trends , Computer Simulation/trends , Humans , Learning/physiology , Photic Stimulation/methods , Random Allocation , Visual Cortex/physiology , Visual Perception/physiology
15.
J Neurosci ; 36(39): 10089-96, 2016 09 28.
Article in English | MEDLINE | ID: mdl-27683905

ABSTRACT

The neural substrates of semantic representation have been the subject of much controversy. The study of semantic representations is complicated by difficulty in disentangling perceptual and semantic influences on neural activity, as well as in identifying stimulus-driven, "bottom-up" semantic selectivity unconfounded by top-down task-related modulations. To address these challenges, we trained human subjects to associate pseudowords (TPWs) with various animal and tool categories. To decode semantic representations of these TPWs, we used multivariate pattern classification of fMRI data acquired while subjects performed a semantic oddball detection task. Crucially, the classifier was trained and tested on disjoint sets of TPWs, so that the classifier had to use the semantic information from the training set to correctly classify the test set. Animal and tool TPWs were successfully decoded based on fMRI activity in spatially distinct subregions of the left medial anterior temporal lobe (LATL). In addition, tools (but not animals) were successfully decoded from activity in the left inferior parietal lobule. The tool-selective LATL subregion showed greater functional connectivity with left inferior parietal lobule and ventral premotor cortex, indicating that each LATL subregion exhibits distinct patterns of connectivity. Our findings demonstrate category-selective organization of semantic representations in LATL into spatially distinct subregions, continuing the lateral-medial segregation of activation in posterior temporal cortex previously observed in response to images of animals and tools, respectively. Together, our results provide evidence for segregation of processing hierarchies for different classes of objects and the existence of multiple, category-specific semantic networks in the brain. SIGNIFICANCE STATEMENT: The location and specificity of semantic representations in the brain are still widely debated.
We trained human participants to associate specific pseudowords with various animal and tool categories, and used multivariate pattern classification of fMRI data to decode the semantic representations of the trained pseudowords. We found that: (1) animal and tool information was organized in category-selective subregions of medial left anterior temporal lobe (LATL); (2) tools, but not animals, were encoded in left inferior parietal lobe; and (3) LATL subregions exhibited distinct patterns of functional connectivity with category-related regions across cortex. Our findings suggest that semantic knowledge in LATL is organized in category-related subregions, providing evidence for the existence of multiple, category-specific semantic representations in the brain.


Subject(s)
Models, Neurological , Models, Statistical , Nerve Net/physiology , Pattern Recognition, Visual/physiology , Semantics , Temporal Lobe/physiology , Adult , Brain Mapping/methods , Computer Simulation , Data Interpretation, Statistical , Female , Humans , Male , Multivariate Analysis , Pattern Recognition, Automated/methods , Young Adult
16.
Neuroimage ; 138: 248-256, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27252037

ABSTRACT

Reading has been shown to rely on a dorsal brain circuit involving the temporoparietal cortex (TPC) for grapheme-to-phoneme conversion of novel words (Pugh et al., 2001), and a ventral stream involving left occipitotemporal cortex (OTC) (in particular in the so-called "visual word form area", VWFA) for visual identification of familiar words. In addition, portions of the inferior frontal cortex (IFC) have been posited to be an output of the dorsal reading pathway involved in phonology. While this dorsal versus ventral dichotomy for phonological and orthographic processing of words is widely accepted, it is not known if these brain areas are actually strictly sensitive to orthographic or phonological information. Using an fMRI rapid adaptation technique we probed the selectivity of the TPC, OTC, and IFC to orthographic and phonological features during single word reading. We found in two independent experiments using different task conditions in adult normal readers, that the TPC is exclusively sensitive to phonology and the VWFA in the OTC is exclusively sensitive to orthography. The dorsal IFC (BA 44), however, showed orthographic but not phonological selectivity. These results support the theory that reading involves a specific phonological-based temporoparietal region and a specific orthographic-based ventral occipitotemporal region. The dorsal IFC, however, was not sensitive to phonological processing, suggesting a more complex role for this region.


Subject(s)
Nerve Net/physiology , Parietal Lobe/physiology , Pattern Recognition, Visual/physiology , Phonetics , Reading , Temporal Lobe/physiology , Verbal Learning/physiology , Brain Mapping/methods , Child , Female , Humans , Magnetic Resonance Imaging/methods , Male
17.
AIDS Care ; 28(4): 436-40, 2016.
Article in English | MEDLINE | ID: mdl-26573559

ABSTRACT

The increased prevalence of HIV among adults >50 years underscores the importance of improving our understanding of mechanisms causing HIV-associated neurocognitive disorders (HAND). Identifying novel and noninvasive diagnostic predictors of HAND prior to clinical manifestation is critical to ultimately identifying means of preventing progression to symptomatic HAND. Here, using a task-switching paradigm, in which subjects were cued (unpredictably) to perform a face-gender or a word-semantic task on superimposed face and word images, we examined the behavioral and neural profile of impaired cognitive control in older HIV+ adults (N = 14, 9 HIV+). Functional magnetic resonance imaging (fMRI) and behavioral data were acquired while subjects performed the face-gender or word-semantic task. We found that, despite comparable performance on standard neuropsychological tests designed to probe executive deficits, HIV-infected participants were significantly slower than uninfected controls in adapting to changes in task demand, and these behavioral impairments could be quantitatively related to differences in fMRI signal in the dorsal anterior cingulate cortex (ACC). Given the limited sample size of this hypothesis-generating study, these findings should be interpreted with caution, and future studies with larger and better-matched samples are needed. Nevertheless, these findings have a few important implications: first, the prevalence of cognitive impairments in HIV+ older adults might be even higher than previously proposed; second, the ACC (in particular its dorsal region) might be one of the key regions underlying cognitive impairments (in particular executive functions) in HIV; and third, it might be beneficial to adopt paradigms developed and validated in cognitive neuroscience to study HAND, as these techniques might be more sensitive to some aspects of HIV-associated neurocognitive impairments than standard neuropsychological tests.


Subject(s)
Cognition Disorders/etiology , Executive Function/physiology , HIV Infections/complications , Magnetic Resonance Imaging/methods , Adult , Aged , Cognition Disorders/diagnosis , Female , Humans , Male , Neuroimaging , Neuropsychological Tests/statistics & numerical data , Parietal Lobe/physiology , Prefrontal Cortex/physiology
18.
J Neurosci ; 35(42): 14148-59, 2015 Oct 21.
Article in English | MEDLINE | ID: mdl-26490856

ABSTRACT

The ability to recognize objects in clutter is crucial for human vision, yet the underlying neural computations remain poorly understood. Previous single-unit electrophysiology recordings in inferotemporal cortex in monkeys and fMRI studies of object-selective cortex in humans have shown that the responses to pairs of objects can sometimes be well described as a weighted average of the responses to the constituent objects. Yet, from a computational standpoint, it is not clear how the challenge of object recognition in clutter can be solved if downstream areas must disentangle the identity of an unknown number of individual objects from the confounded average neuronal responses. An alternative idea is that recognition is based on a subpopulation of neurons that are robust to clutter, i.e., that do not show response averaging, but rather robust object-selective responses in the presence of clutter. Here we show that simulations using the HMAX model of object recognition in cortex can fit the aforementioned single-unit and fMRI data, showing that the averaging-like responses can be understood as the result of responses of object-selective neurons to suboptimal stimuli. Moreover, the model shows how object recognition can be achieved by a sparse readout of neurons whose selectivity is robust to clutter. Finally, the model provides a novel prediction about human object recognition performance, namely, that target recognition ability should show a U-shaped dependency on the similarity of simultaneously presented clutter objects. This prediction is confirmed experimentally, supporting a simple, unifying model of how the brain performs object recognition in clutter. SIGNIFICANCE STATEMENT: The neural mechanisms underlying object recognition in cluttered scenes (i.e., containing more than one object) remain poorly understood. Studies have suggested that neural responses to multiple objects correspond to an average of the responses to the constituent objects. 
Yet, it is unclear how the identities of an unknown number of objects could be disentangled from a confounded average response. Here, we use a popular computational biological vision model to show that averaging-like responses can result from responses of clutter-tolerant neurons to suboptimal stimuli. The model also provides a novel prediction, that human detection ability should show a U-shaped dependency on target-clutter similarity, which is confirmed experimentally, supporting a simple, unifying account of how the brain performs object recognition in clutter.


Subject(s)
Brain/physiology , Vision, Ocular/physiology , Visual Pathways/physiology , Visual Perception/physiology , Adolescent , Adult , Attention , Brain/blood supply , Computer Simulation , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Models, Biological , Oxygen/blood , Pattern Recognition, Visual , Photic Stimulation , Visual Pathways/blood supply , Young Adult
19.
J Neurosci ; 35(12): 4965-72, 2015 Mar 25.
Article in English | MEDLINE | ID: mdl-25810526

ABSTRACT

The nature of orthographic representations in the human brain is still the subject of much debate. Recent reports have claimed that the visual word form area (VWFA) in left occipitotemporal cortex contains an orthographic lexicon based on neuronal representations highly selective for individual written real words (RWs). This theory predicts that learning novel words should selectively increase neural specificity for these words in the VWFA. We trained subjects to recognize novel pseudowords (PWs) and used fMRI rapid adaptation to compare neural selectivity with RWs, untrained PWs (UTPWs), and trained PWs (TPWs). Before training, PWs elicited broadly tuned responses, whereas responses to RWs indicated tight tuning. After training, TPW responses resembled those of RWs, whereas UTPWs continued to show broad tuning. This change in selectivity was specific to the VWFA. Therefore, word learning appears to selectively increase neuronal specificity for the new words in the VWFA, thereby adding these words to the brain's visual dictionary.


Subject(s)
Learning/physiology , Occipital Lobe/physiology , Recognition, Psychology/physiology , Temporal Lobe/physiology , Vocabulary , Adolescent , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Reading , Visual Perception/physiology
20.
J Neurosci ; 34(48): 16065-75, 2014 Nov 26.
Article in English | MEDLINE | ID: mdl-25429147

ABSTRACT

Visual categorization is an essential perceptual and cognitive process for assigning behavioral significance to incoming stimuli. Categorization depends on sensory processing of stimulus features as well as flexible cognitive processing for classifying stimuli according to the current behavioral context. Neurophysiological studies suggest that the prefrontal cortex (PFC) and the inferior temporal cortex (ITC) are involved in visual shape categorization. However, their precise roles in the perceptual and cognitive aspects of the categorization process are unclear, as the two areas have not been directly compared during changing task contexts. To address this, we examined the impact of task relevance on categorization-related activity in PFC and ITC by recording from both areas as monkeys alternated between shape categorization and passive viewing tasks. As monkeys viewed the same stimuli in both tasks, the impact of task relevance on encoding in each area could be compared. While both areas showed task-dependent modulations of neuronal activity, the patterns of results differed markedly. PFC, but not ITC, neurons showed a modest increase in firing rates when stimuli were task relevant. PFC also showed significantly stronger category selectivity during the task compared with passive viewing, while task-dependent modulations of category selectivity in ITC were weak and occurred with a long latency. Finally, both areas showed an enhancement of stimulus selectivity during the task compared with passive viewing. Together, these results suggest that ITC and PFC show differing degrees of task-dependent flexibility and are preferentially involved in the perceptual and cognitive aspects of the categorization process, respectively.


Subject(s)
Pattern Recognition, Visual/physiology , Prefrontal Cortex/physiology , Psychomotor Performance/physiology , Temporal Lobe/physiology , Visual Pathways/physiology , Animals , Female , Macaca mulatta , Photic Stimulation/methods