Results 1 - 20 of 25
1.
Behav Brain Sci ; 46: e392, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054329

ABSTRACT

An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidence. Failures of particular models drive progress in a vibrant ANN research program of human vision.


Subject(s)
Language , Neural Networks, Computer , Humans
2.
J Cogn Neurosci ; 35(11): 1879-1897, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37590093

ABSTRACT

Humans effortlessly make quick and accurate perceptual decisions about the nature of their immediate visual environment, such as the category of the scene they face. Previous research has revealed a rich set of cortical representations potentially underlying this feat. However, it remains unknown which of these representations are suitably formatted for decision-making. Here, we approached this question empirically and computationally, using neuroimaging and computational modeling. For the empirical part, we collected EEG data and RTs from human participants during a scene categorization task (natural vs. man-made). We then related EEG data to behavior using a multivariate extension of signal detection theory. We observed a correlation between neural data and behavior specifically between ∼100 msec and ∼200 msec after stimulus onset, suggesting that the neural scene representations in this time period are suitably formatted for decision-making. For the computational part, we evaluated a recurrent convolutional neural network (RCNN) as a model of brain and behavior. Unifying our previous observations in an image-computable model, the RCNN predicted well the neural representations, the behavioral scene categorization data, as well as the relationship between them. Our results identify and computationally characterize the neural and behavioral correlates of scene categorization in humans.
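The decision-axis logic of this abstract can be illustrated on synthetic data. The snippet below is a toy reconstruction, not the paper's actual pipeline: the difference-of-means discriminant, noise levels, and simulated RTs are all illustrative assumptions. It projects trial-wise "EEG" patterns onto a linear decision axis and correlates the distance from the decision criterion with reaction times, which is the core idea of a multivariate extension of signal detection theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EEG: trials x channels at one post-stimulus time point.
n_trials, n_channels = 200, 32
labels = rng.integers(0, 2, n_trials)            # 0 = natural, 1 = man-made
signal = np.where(labels[:, None] == 1, 0.8, -0.8)
eeg = signal + rng.normal(size=(n_trials, n_channels))

# Linear decision axis: difference of class means, a simple stand-in
# for the discriminant used in multivariate signal detection theory.
w = eeg[labels == 1].mean(0) - eeg[labels == 0].mean(0)
decision_values = eeg @ w / np.linalg.norm(w)

# Signed distance to the criterion; larger distance = stronger evidence.
criterion = decision_values.mean()
evidence = np.abs(decision_values - criterion)

# Hypothetical RTs, simulated to be faster when neural evidence is stronger.
rts = 600 - 40 * evidence + rng.normal(scale=30, size=n_trials)

r = np.corrcoef(evidence, rts)[0, 1]
print(f"evidence-RT correlation: {r:.2f}")
```

Repeating this projection at every EEG time point would trace out a time course of neural-behavioral correlation, analogous to the ∼100-200 msec window reported above.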


Subject(s)
Brain , Pattern Recognition, Visual , Humans , Photic Stimulation/methods , Brain/diagnostic imaging , Brain Mapping/methods
3.
Nat Rev Neurosci ; 24(7): 431-450, 2023 07.
Article in English | MEDLINE | ID: mdl-37253949

ABSTRACT

Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call 'neuroconnectionism'. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.


Subject(s)
Brain , Neural Networks, Computer , Humans , Brain/physiology
4.
J Neurosci ; 43(10): 1731-1741, 2023 03 08.
Article in English | MEDLINE | ID: mdl-36759190

ABSTRACT

Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. We address this issue by asking which representational features are currently unaccounted for in neural time series data, estimated for multiple areas of the ventral stream via source-reconstructed magnetoencephalography data acquired in human participants (nine females, six males) during object viewing. We focus on the ability of visuo-semantic models, consisting of human-generated labels of object features and categories, to explain variance beyond the explanatory power of DNNs alone. We report a gradual reversal in the relative importance of DNN versus visuo-semantic features as ventral-stream object representations unfold over space and time. Although lower-level visual areas are better explained by DNN features starting early in time (at 66 ms after stimulus onset), higher-level cortical dynamics are best accounted for by visuo-semantic features starting later in time (at 146 ms after stimulus onset). Among the visuo-semantic features, object parts and basic categories drive the advantage over DNNs. These results show that a significant component of the variance unexplained by DNNs in higher-level cortical dynamics is structured and can be explained by readily nameable aspects of the objects. We conclude that current DNNs fail to fully capture dynamic representations in higher-level human visual cortex and suggest a path toward more accurate models of ventral-stream computations.

SIGNIFICANCE STATEMENT: When we view objects such as faces and cars in our visual environment, their neural representations dynamically unfold over time at a millisecond scale. These dynamics reflect the cortical computations that support fast and robust object recognition.
DNNs have emerged as a promising framework for modeling these computations but cannot yet fully account for the neural dynamics. Using magnetoencephalography data acquired in human observers during object viewing, we show that readily nameable aspects of objects, such as 'eye', 'wheel', and 'face', can account for variance in the neural dynamics over and above DNNs. These findings suggest that DNNs and humans may in part rely on different object features for visual recognition and provide guidelines for model improvement.


Subject(s)
Pattern Recognition, Visual , Semantics , Male , Female , Humans , Neural Networks, Computer , Visual Perception , Brain , Brain Mapping/methods , Magnetic Resonance Imaging/methods
5.
Eur J Neurosci ; 56(11): 6022-6038, 2022 12.
Article in English | MEDLINE | ID: mdl-36113866

ABSTRACT

Neural mechanisms of face perception are predominantly studied in well-controlled experimental settings that involve random stimulus sequences and fixed eye positions. Although powerful, the employed paradigms are far from what constitutes natural vision. Here, we demonstrate the feasibility of ecologically more valid experimental paradigms using natural viewing behaviour, by combining a free viewing paradigm on natural scenes, free of photographer bias, with advanced data processing techniques that correct for overlap effects and co-varying non-linear dependencies of multiple eye movement parameters. We validate this approach by replicating classic N170 effects in neural responses, triggered by fixation onsets (fixation event-related potentials [fERPs]). Importantly, besides finding a strong correlation between both experiments, our more natural stimulus paradigm yielded smaller variability between subjects than the classic setup. Moving beyond classic temporal and spatial effect locations, our experiment furthermore revealed previously unknown signatures of face processing: This includes category-specific modulation of the event-related potential (ERP)'s amplitude even before fixation onset, as well as adaptation effects across subsequent fixations depending on their history.


Subject(s)
Facial Recognition , Humans , Facial Recognition/physiology , Electroencephalography/methods , Evoked Potentials/physiology , Eye Movements , Adaptation, Physiological , Photic Stimulation
6.
J Vis ; 22(2): 4, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35129578

ABSTRACT

Line drawings convey meaning with just a few strokes. Despite strong simplifications, humans can recognize objects depicted in such abstracted images without effort. To what degree do deep convolutional neural networks (CNNs) mirror this human ability to generalize to abstracted object images? While CNNs trained on natural images have been shown to exhibit poor classification performance on drawings, other work has demonstrated highly similar latent representations in the networks for abstracted and natural images. Here, we address these seemingly conflicting findings by analyzing the activation patterns of a CNN trained on natural images across a set of photographs, drawings, and sketches of the same objects and comparing them to human behavior. We find a highly similar representational structure across levels of visual abstraction in early and intermediate layers of the network. This similarity, however, does not translate to later stages in the network, resulting in low classification performance for drawings and sketches. We identified that texture bias in CNNs contributes to the dissimilar representational structure in late layers and the poor performance on drawings. Finally, by fine-tuning late network layers with object drawings, we show that performance can be largely restored, demonstrating the general utility of features learned on natural images in early and intermediate layers for the recognition of drawings. In conclusion, generalization to abstracted images, such as drawings, seems to be an emergent property of CNNs trained on natural images, which is, however, suppressed by domain-related biases that arise during later processing stages in the network.


Subject(s)
Neural Networks, Computer , Visual Perception , Concept Formation , Humans , Learning , Recognition, Psychology , Visual Perception/physiology
7.
J Cogn Neurosci ; 33(10): 2044-2064, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34272948

ABSTRACT

Deep neural networks (DNNs) trained on object recognition provide the best current models of high-level visual cortex. What remains unclear is how strongly experimental choices, such as network architecture, training, and fitting to brain data, contribute to the observed similarities. Here, we compare a diverse set of nine DNN architectures on their ability to explain the representational geometry of 62 object images in human inferior temporal cortex (hIT), as measured with fMRI. We compare untrained networks to their task-trained counterparts and assess the effect of cross-validated fitting to hIT, by taking a weighted combination of the principal components of features within each layer and, subsequently, a weighted combination of layers. For each combination of training and fitting, we test all models for their correlation with the hIT representational dissimilarity matrix, using independent images and subjects. Trained models outperform untrained models (accounting for 57% more of the explainable variance), suggesting that structured visual features are important for explaining hIT. Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the ImageNet object-recognition task used to train the networks. The same models can also explain the disparate representations in primary visual cortex (V1), where stronger weights are given to earlier layers. In each region, all architectures achieved equivalently high performance once trained and fitted. The models' shared properties (deep feedforward hierarchies of spatially restricted nonlinear filters) seem more important than their differences, when modeling human visual representations.
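The representational-dissimilarity-matrix comparison described above can be sketched with plain NumPy. Everything in the snippet is synthetic and hypothetical: the feature dimensions, the shared "latent" component standing in for stimulus-driven structure, and the noise levels are assumptions for illustration only, not the paper's data or models.

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the feature patterns of every pair of stimuli (rows)."""
    return 1.0 - np.corrcoef(features)

def upper(mat):
    """Off-diagonal upper triangle, the part compared across RDMs."""
    i, j = np.triu_indices(mat.shape[0], k=1)
    return mat[i, j]

def spearman(a, b):
    """Spearman correlation via rank transform (no ties expected here)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Hypothetical data: 62 stimuli; DNN-layer features and hIT response
# patterns that partially share a latent representational structure.
n_stim = 62
latent = rng.normal(size=(n_stim, 10))
dnn_features = np.hstack([latent, rng.normal(size=(n_stim, 40))])
hit_patterns = np.hstack([latent + 0.5 * rng.normal(size=(n_stim, 10)),
                          rng.normal(size=(n_stim, 20))])

score = spearman(upper(rdm(dnn_features)), upper(rdm(hit_patterns)))
print(f"model-hIT RDM correlation: {score:.2f}")
```

The cross-validated fitting step in the paper would additionally reweight principal components within each layer, and then layers, on held-out images and subjects before this final RDM comparison.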


Subject(s)
Neural Networks, Computer , Visual Cortex , Humans , Magnetic Resonance Imaging , Temporal Lobe/diagnostic imaging , Visual Cortex/diagnostic imaging , Visual Perception
8.
Elife ; 10, 2021 06 28.
Article in English | MEDLINE | ID: mdl-34180395

ABSTRACT

Development and aging of the cerebral cortex show similar topographic organization and are governed by the same genes. It is unclear whether the same is true for subcortical regions, which follow fundamentally different ontogenetic and phylogenetic principles. We tested the hypothesis that genetically governed neurodevelopmental processes can be traced throughout life by assessing to which degree brain regions that develop together continue to change together through life. Analyzing over 6000 longitudinal MRIs of the brain, we used graph theory to identify five clusters of coordinated development, indexed as patterns of correlated volumetric change in brain structures. The clusters tended to follow placement along the cranial axis in embryonic brain development, suggesting continuity from prenatal stages, and correlated with cognition. Across independent longitudinal datasets, we demonstrated that developmental clusters were conserved through life. Twin-based genetic correlations revealed distinct sets of genes governing change in each cluster. Single-nucleotide polymorphism-based analyses of 38,127 cross-sectional MRIs showed a similar pattern of genetic volume-volume correlations. In conclusion, coordination of subcortical change adheres to fundamental principles of lifespan continuity and genetic organization.


Subject(s)
Cerebral Cortex/growth & development , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Female , Humans , Longevity , Magnetic Resonance Imaging , Male , Middle Aged , Young Adult
9.
Proc Natl Acad Sci U S A ; 118(8), 2021 02 23.
Article in English | MEDLINE | ID: mdl-33593900

ABSTRACT

Deep neural networks provide the current best models of visual information processing in the primate brain. Drawing on work from computer vision, the most commonly used networks are pretrained on data from the ImageNet Large Scale Visual Recognition Challenge. This dataset comprises images from 1,000 categories, selected to provide a challenging testbed for automated visual object recognition systems. Moving beyond this common practice, we here introduce ecoset, a collection of >1.5 million images from 565 basic-level categories selected to better capture the distribution of objects relevant to humans. Ecoset categories were chosen to be both frequent in linguistic usage and concrete, thereby mirroring important physical objects in the world. We test the effects of training on this ecologically more valid dataset using multiple instances of two neural network architectures: AlexNet and vNet, a novel architecture designed to mimic the progressive increase in receptive field sizes along the human ventral stream. We show that training on ecoset leads to significant improvements in predicting representations in human higher-level visual cortex and perceptual judgments, surpassing the previous state of the art. Significant and highly consistent benefits are demonstrated for both architectures on two separate functional magnetic resonance imaging (fMRI) datasets and behavioral data, jointly covering responses to 1,292 visual stimuli from a wide variety of object categories. These results suggest that computational visual neuroscience may take better advantage of the deep learning framework by using image sets that reflect the human perceptual and cognitive experience. Ecoset and trained network models are openly available to the research community.


Subject(s)
Deep Learning , Ecology , Models, Neurological , Neural Networks, Computer , Pattern Recognition, Visual , Visual Cortex/physiology , Visual Perception/physiology , Brain Mapping , Humans
10.
Cereb Cortex ; 31(4): 1953-1969, 2021 03 05.
Article in English | MEDLINE | ID: mdl-33236064

ABSTRACT

We examined whether sleep quality and quantity are associated with cortical and memory changes in cognitively healthy participants across the adult lifespan. Associations between self-reported sleep parameters (Pittsburgh Sleep Quality Index, PSQI) and longitudinal cortical change were tested using five samples from the Lifebrain consortium (n = 2205, 4363 MRIs, 18-92 years). In additional analyses, we tested coherence with cell-specific gene expression maps from the Allen Human Brain Atlas, and relations to changes in memory performance. "PSQI # 1 Subjective sleep quality" and "PSQI #5 Sleep disturbances" were related to thinning of the right lateral temporal cortex, with lower quality and more disturbances being associated with faster thinning. The association with "PSQI #5 Sleep disturbances" emerged after 60 years, especially in regions with high expression of genes related to oligodendrocytes and S1 pyramidal neurons. None of the sleep scales were related to a longitudinal change in episodic memory function, suggesting that sleep-related cortical changes were independent of cognitive decline. The relationship to cortical brain change suggests that self-reported sleep parameters are relevant in lifespan studies, but small effect sizes indicate that self-reported sleep is not a good biomarker of general cortical degeneration in healthy older adults.


Subject(s)
Aging/pathology , Cerebral Cortical Thinning/diagnostic imaging , Longevity , Memory Disorders/diagnostic imaging , Self Report , Sleep Wake Disorders/diagnostic imaging , Adolescent , Adult , Aged , Aged, 80 and over , Aging/psychology , Cerebral Cortical Thinning/epidemiology , Cerebral Cortical Thinning/psychology , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/pathology , Cognitive Dysfunction/psychology , Female , Humans , Longevity/physiology , Longitudinal Studies , Magnetic Resonance Imaging/trends , Male , Memory Disorders/epidemiology , Memory Disorders/psychology , Middle Aged , Sleep Quality , Sleep Wake Disorders/epidemiology , Sleep Wake Disorders/psychology , Young Adult
11.
Nat Commun ; 11(1): 5725, 2020 11 12.
Article in English | MEDLINE | ID: mdl-33184286

ABSTRACT

Deep neural networks (DNNs) excel at visual recognition tasks and are increasingly used as a modeling framework for neural computations in the primate brain. Just like individual brains, each DNN has a unique connectivity and representational profile. Here, we investigate individual differences among DNN instances that arise from varying only the random initialization of the network weights. Using tools typically employed in systems neuroscience, we show that this minimal change in initial conditions prior to training leads to substantial differences in intermediate and higher-level network representations despite similar network-level classification performance. We locate the origins of the effects in an under-constrained alignment of category exemplars, rather than misaligned category centroids. These results call into question the common practice of using single networks to derive insights into neural information processing and rather suggest that computational neuroscientists working with DNNs may need to base their inferences on groups of multiple network instances.


Subject(s)
Cognitive Neuroscience/methods , Individuality , Neural Networks, Computer , Animals , Brain
12.
PLoS Comput Biol ; 16(10): e1008215, 2020 10.
Article in English | MEDLINE | ID: mdl-33006992

ABSTRACT

Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. The primate visual system, by contrast, contains abundant recurrent connections. Recurrent signal flow enables recycling of limited computational resources over time, and so might boost the performance of a physically finite brain or model. Here we show: (1) Recurrent convolutional neural network models outperform feedforward convolutional models matched in their number of parameters in large-scale visual recognition tasks on natural images. (2) Setting a confidence threshold, at which recurrent computations terminate and a decision is made, enables flexible trading of speed for accuracy. At a given confidence threshold, the model expends more time and energy on images that are harder to recognise, without requiring additional parameters for deeper computations. (3) The recurrent model's reaction time for an image predicts the human reaction time for the same image better than several parameter-matched and state-of-the-art feedforward models. (4) Across confidence thresholds, the recurrent model emulates the behaviour of feedforward control models in that it achieves the same accuracy at approximately the same computational cost (mean number of floating-point operations). However, the recurrent model can be run longer (higher confidence threshold) and then outperforms parameter-matched feedforward comparison models. These results suggest that recurrent connectivity, a hallmark of biological visual systems, may be essential for understanding the accuracy, flexibility, and dynamics of human visual recognition.
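The confidence-threshold stopping rule described in point (2) can be sketched without a trained network. The snippet below substitutes synthetic evidence accumulation for a real RCNN's recurrent dynamics; the evidence rates, noise scale, and 10-class setup are illustrative assumptions, so it demonstrates the speed-accuracy mechanism rather than reproducing the model.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def recurrent_classify(evidence_rate, threshold, max_steps=50):
    """Toy recurrent readout: each step adds a noisy evidence increment
    for the true class (index 0); computation halts once softmax
    confidence for any class exceeds the threshold."""
    logits = np.zeros(10)
    for step in range(1, max_steps + 1):
        logits[0] += evidence_rate                 # true class accumulates signal
        logits += rng.normal(scale=0.3, size=10)   # recurrent noise each step
        p = softmax(logits)
        if p.max() >= threshold:
            return step, int(np.argmax(p))
    return max_steps, int(np.argmax(softmax(logits)))

# Harder images (lower evidence rate) take more recurrent steps to reach
# the same confidence threshold, trading time for accuracy.
easy_steps = np.mean([recurrent_classify(1.0, 0.9)[0] for _ in range(100)])
hard_steps = np.mean([recurrent_classify(0.3, 0.9)[0] for _ in range(100)])
print(f"mean steps, easy: {easy_steps:.1f}, hard: {hard_steps:.1f}")
```

Raising the threshold plays the role described in point (4): the model runs longer on every image, spending more computation for higher accuracy.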


Subject(s)
Models, Neurological , Neural Networks, Computer , Reaction Time/physiology , Vision, Ocular/physiology , Visual Perception/physiology , Adult , Computational Biology , Female , Humans , Male , Young Adult
13.
Sleep ; 43(5), 2020 05 12.
Article in English | MEDLINE | ID: mdl-31738420

ABSTRACT

OBJECTIVES: Poor sleep is associated with multiple age-related neurodegenerative and neuropsychiatric conditions. The hippocampus plays a special role in sleep and sleep-dependent cognition, and accelerated hippocampal atrophy is typically seen with higher age. Hence, it is critical to establish how the relationship between sleep and hippocampal volume loss unfolds across the adult lifespan. METHODS: Self-reported sleep measures and MRI-derived hippocampal volumes were obtained from 3105 cognitively normal participants (18-90 years) from major European brain studies in the Lifebrain consortium. Hippocampal volume change was estimated from 5116 MRIs from 1299 participants for whom longitudinal MRIs were available, followed up to 11 years with a mean interval of 3.3 years. Cross-sectional analyses were repeated in a sample of 21,390 participants from the UK Biobank. RESULTS: No cross-sectional sleep-hippocampal volume relationships were found. However, worse sleep quality, efficiency, problems, and daytime tiredness were related to greater hippocampal volume loss over time, with high scorers showing 0.22% greater annual loss than low scorers. The relationship between sleep and hippocampal atrophy did not vary across age. Simulations showed that the observed longitudinal effects were too small to be detected as age-interactions in the cross-sectional analyses. CONCLUSIONS: Worse self-reported sleep is associated with higher rates of hippocampal volume decline across the adult lifespan. This suggests that sleep is relevant to understand individual differences in hippocampal atrophy, but limited effect sizes call for cautious interpretation.


Subject(s)
Hippocampus , Longevity , Adult , Atrophy/diagnostic imaging , Atrophy/pathology , Cross-Sectional Studies , Hippocampus/diagnostic imaging , Hippocampus/pathology , Humans , Magnetic Resonance Imaging , Self Report , Sleep
14.
Proc Natl Acad Sci U S A ; 116(43): 21854-21863, 2019 10 22.
Article in English | MEDLINE | ID: mdl-31591217

ABSTRACT

The human visual system is an intricate network of brain regions that enables us to recognize the world around us. Despite its abundant lateral and feedback connections, object processing is commonly viewed and studied as a feedforward process. Here, we measure and model the rapid representational dynamics across multiple stages of the human ventral stream using time-resolved brain imaging and deep learning. We observe substantial representational transformations during the first 300 ms of processing within and across ventral-stream regions. Categorical divisions emerge in sequence, cascading forward and in reverse across regions, and Granger causality analysis suggests bidirectional information flow between regions. Finally, recurrent deep neural network models clearly outperform parameter-matched feedforward models in terms of their ability to capture the multiregion cortical dynamics. Targeted virtual cooling experiments on the recurrent deep network models further substantiate the importance of their lateral and top-down connections. These results establish that recurrent models are required to understand information processing in the human ventral stream.


Subject(s)
Models, Neurological , Visual Perception/physiology , Adult , Deep Learning , Feedback, Sensory , Female , Humans , Magnetoencephalography , Nerve Net , Visual Pathways
15.
Sci Data ; 4: 160126, 2017 01 31.
Article in English | MEDLINE | ID: mdl-28140391

ABSTRACT

We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7-80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimuli categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset presents a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.


Subject(s)
Eye Movements , Adolescent , Adult , Age Factors , Aged , Aged, 80 and over , Attention , Child , Humans , Male , Middle Aged , Visual Perception , Young Adult
16.
Cereb Cortex ; 27(1): 279-293, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28077512

ABSTRACT

Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans.


Subject(s)
Eye Movements/physiology , Species Specificity , Adolescent , Adult , Aged , Aged, 80 and over , Animals , Child , Female , Humans , Macaca mulatta , Male , Middle Aged , Photic Stimulation , Visual Perception/physiology , Young Adult
17.
J Cogn Neurosci ; 29(4): 637-651, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27791433

ABSTRACT

Faces provide a wealth of information, including the identity of the seen person and social cues, such as the direction of gaze. Crucially, different aspects of face processing require distinct forms of information encoding. Another person's attentional focus can be derived based on a view-dependent code. In contrast, identification benefits from invariance across all viewpoints. Different cortical areas have been suggested to subserve these distinct functions. However, little is known about the temporal aspects of differential viewpoint encoding in the human brain. Here, we combine EEG with multivariate data analyses to resolve the dynamics of face processing with high temporal resolution. This revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, starting after around 60 msec of processing. Shortly afterward, peaking around 115 msec after stimulus onset, a different encoding scheme emerged. At this latency, mirror-symmetric viewing angles elicited highly similar cortical responses. Finally, about 280 msec after visual onset, EEG response patterns demonstrated a considerable degree of viewpoint invariance across all viewpoints tested, with the noteworthy exception of the front-facing view. Taken together, our results indicate that the processing of facial viewpoints follows a temporal sequence of encoding schemes, potentially mirroring different levels of computational complexity.


Subject(s)
Electroencephalography/methods , Facial Recognition/physiology , Signal Processing, Computer-Assisted , Space Perception/physiology , Adult , Female , Humans , Male , Time Factors , Young Adult
18.
Neuroimage ; 134: 22-34, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27063060

ABSTRACT

The human visual system is able to distinguish naturally occurring categories with exceptional speed and accuracy. At the same time, it exhibits substantial plasticity, permitting the seamless and fast learning of entirely novel categories. Here we investigate the interplay of these two processes by asking how category selectivity emerges and develops from initial to extended category learning. For this purpose, we combine a rapid event-related MEG adaptation paradigm, an extension of fMRI adaptation to high temporal resolution, a novel spatiotemporal analysis approach to separate adaptation effects from other effect origins, and source localization. The results demonstrate a spatiotemporal shift of cortical activity underlying category selectivity: after initial category acquisition, the onset of category selectivity was observed starting at 275 ms together with stronger activity in prefrontal cortex. Following extensive training over 22 sessions, adding up to more than 16,600 trials, the earliest category effects occurred at a markedly shorter latency of 113 ms and were accompanied by stronger occipitotemporal activity. Our results suggest that the brain balances plasticity and efficiency by relying on different mechanisms to recognize new and re-occurring categories.


Subject(s)
Brain Mapping/methods , Cerebral Cortex/physiology , Learning/physiology , Nerve Net/physiology , Neuronal Plasticity/physiology , Spatio-Temporal Analysis , Visual Perception/physiology , Adult , Female , Humans , Male , Young Adult
19.
J Neurosci ; 35(50): 16398-403, 2015 Dec 16.
Article in English | MEDLINE | ID: mdl-26674865

ABSTRACT

Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: (1) judging the viewpoint symmetry; or (2) judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. Although viewpoint symmetry judgments were significantly disrupted, we observed no effect on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of interhemispheric crosstalk in the formation of viewpoint-invariant face perception. SIGNIFICANCE STATEMENT: Faces are among the most salient objects we encounter during our everyday activities. Moreover, we are remarkably adept at identifying people at a glance, despite the diversity of viewpoints during our social encounters. Here, we investigate the cortical mechanisms underlying this ability by focusing on effects of viewpoint symmetry, i.e., the invariance of neural responses to mirror-symmetric facial viewpoints. 
We did this by temporarily disrupting neural processing in the occipital face area (OFA) using transcranial magnetic stimulation. Our results demonstrate that the OFA causally contributes to judgments of facial viewpoints and suggest that effects of viewpoint symmetry, previously observed using fMRI, arise from interhemispheric integration of visual information even when only one hemisphere receives direct visual stimulation.


Subject(s)
Face , Occipital Lobe/physiology , Recognition, Psychology/physiology , Visual Perception/physiology , Adult , Eye Movements , Female , Functional Laterality/physiology , Humans , Male , Psychomotor Performance/physiology , Rotation , Transcranial Magnetic Stimulation , Visual Fields , Young Adult
20.
J Neurosci ; 32(34): 11763-72, 2012 Aug 22.
Article in English | MEDLINE | ID: mdl-22915118

ABSTRACT

Although the ability to recognize faces and objects from a variety of viewpoints is crucial to our everyday behavior, the underlying cortical mechanisms are not well understood. Recently, neurons in a face-selective region of the monkey temporal cortex were reported to be selective for mirror-symmetric viewing angles of faces as they were rotated in depth (Freiwald and Tsao, 2010). This property has been suggested to constitute a key computational step in achieving full view-invariance. Here, we measured functional magnetic resonance imaging activity in nine observers as they viewed upright or inverted faces presented at five different angles (-60, -30, 0, 30, and 60°). Using multivariate pattern analysis, we show that sensitivity to viewpoint mirror symmetry is widespread in the human visual system. The effect was observed in a large band of higher order visual areas, including the occipital face area, fusiform face area, lateral occipital cortex, mid fusiform, parahippocampal place area, and extending superiorly to encompass dorsal regions V3A/B and the posterior intraparietal sulcus. In contrast, early retinotopic regions V1-hV4 failed to exhibit sensitivity to viewpoint symmetry, as their responses could be largely explained by a computational model of low-level visual similarity. Our findings suggest that selectivity for mirror-symmetric viewing angles may constitute an intermediate-level processing step shared across multiple higher order areas of the ventral and dorsal streams, setting the stage for complete viewpoint-invariant representations at subsequent levels of visual processing.
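The correlation-based logic behind such an MVPA finding can be illustrated with a small sketch on synthetic data. This is not the study's analysis pipeline; all parameters (voxel counts, trial counts, noise level) are illustrative assumptions. The key idea is that if mirror-symmetric viewpoints evoke similar multivoxel patterns, split-half pattern correlations should be higher for symmetric pairs (e.g., -30° vs. +30°) than for non-symmetric pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 200, 40
angles = [-60, -30, 0, 30, 60]

# Synthetic assumption: mirror-symmetric views share a latent pattern,
# so each |angle| gets one template and each trial adds independent noise.
templates = {a: rng.normal(size=n_voxels) for a in [0, 30, 60]}

def simulate(angle):
    base = templates[abs(angle)]
    return base + 0.8 * rng.normal(size=(n_trials, n_voxels))

data = {a: simulate(a) for a in angles}

def pattern_similarity(a, b):
    """Correlate mean patterns from independent halves of the trials."""
    half = n_trials // 2
    pa = data[a][:half].mean(axis=0)
    pb = data[b][half:].mean(axis=0)
    return np.corrcoef(pa, pb)[0, 1]

# Mirror-symmetric viewpoint pairs vs. pairs differing in |angle|.
sym = np.mean([pattern_similarity(-60, 60), pattern_similarity(-30, 30)])
asym = np.mean([pattern_similarity(-60, 30), pattern_similarity(-30, 60),
                pattern_similarity(-60, -30), pattern_similarity(60, 30)])
print(f"mirror-symmetric pairs: r = {sym:.2f}")
print(f"non-symmetric pairs:    r = {asym:.2f}")
```

In real fMRI data the same comparison would be made per region of interest, which is how an area like V1 (similarity explained by low-level image overlap) can be distinguished from higher-order areas showing the symmetry effect.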


Subject(s)
Brain Mapping , Face , Orientation , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Adult , Attention/physiology , Eye Movements , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Oxygen , Photic Stimulation , Principal Component Analysis , Statistics as Topic , Visual Cortex/blood supply , Visual Pathways/blood supply , Young Adult