Results 1 - 7 of 7

1.
Nat Neurosci ; 27(6): 1157-1166, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38684892

ABSTRACT

In natural vision, primates actively move their eyes several times per second via saccades. It remains unclear whether, during this active looking, visual neurons exhibit classical retinotopic properties, anticipate gaze shifts or mirror the stable quality of perception, especially in complex natural scenes. Here, we let 13 monkeys freely view thousands of natural images across 4.6 million fixations, recorded 883 h of neuronal responses in six areas spanning primary visual to anterior inferior temporal cortex and analyzed spatial, temporal and featural selectivity in these responses. Face neurons tracked their receptive field contents, as indicated by category-selective responses. Self-consistency analysis showed that general feature-selective responses also followed eye movements and remained gaze-dependent over seconds of viewing the same image. Computational models of feature-selective responses located retinotopic receptive fields during free viewing. We found limited evidence for feature-selective predictive remapping and no evidence for viewing-history integration. Thus, ventral visual neurons represent the world in a predominantly eye-centered reference frame during natural vision.
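
To make the receptive-field modeling step concrete, the following Python sketch shows one simple way a gaze-centered (retinotopic) receptive field could be estimated from free-viewing data: per-fixation spike counts are regressed onto image features sampled in eye-centered coordinates around each fixation. This is an illustrative toy on synthetic data, not the authors' analysis pipeline; the array names, shapes, and ridge penalty are assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_fix = 5000          # number of fixations (synthetic)
grid = 8              # gaze-centered grid of grid x grid image patches

# Synthetic local feature energy (e.g., contrast) around each fixation,
# expressed in eye-centered coordinates
features = rng.normal(size=(n_fix, grid * grid))

# Ground-truth receptive field: a single hotspot in eye-centered coordinates
true_rf = np.zeros(grid * grid)
true_rf[2 * grid + 5] = 1.0

# Per-fixation spike counts: log-linear drive plus Poisson noise
rate = np.exp(0.5 + 0.8 * (features @ true_rf))
spikes = rng.poisson(rate)

# Ridge-regularized linear estimate of the receptive-field map
lam = 10.0
X = features - features.mean(axis=0)
y = spikes - spikes.mean()
rf_hat = np.linalg.solve(X.T @ X + lam * np.eye(grid * grid), X.T @ y)

# If responses are retinotopic, the recovered map should peak at the hotspot
print("true peak:", np.argmax(true_rf), "estimated peak:", np.argmax(rf_hat))

In principle, the same regression can be repeated with features sampled in screen-centered coordinates; whichever reference frame predicts the responses better is the one the neuron follows.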


Subject(s)
Eye Movements , Macaca mulatta , Neurons , Visual Cortex , Animals , Visual Cortex/physiology , Eye Movements/physiology , Neurons/physiology , Male , Photic Stimulation/methods , Visual Perception/physiology , Fixation, Ocular/physiology , Saccades/physiology , Vision, Ocular/physiology , Female
2.
PLoS Comput Biol ; 18(11): e1010654, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36413523

ABSTRACT

Primates constantly explore their surroundings via saccadic eye movements that bring different parts of an image into high resolution. In addition to exploring new regions in the visual field, primates also make frequent return fixations, revisiting previously foveated locations. We systematically studied a total of 44,328 return fixations out of 217,440 fixations. Return fixations were ubiquitous across different behavioral tasks, in monkeys and humans, both when subjects viewed static images and when they performed natural behaviors. Return fixation locations were consistent across subjects, tended to occur within short temporal offsets, and typically followed a 180-degree turn in saccadic direction. To understand the origin of return fixations, we propose a proof-of-principle, biologically inspired, image-computable neural network model. The model combines five key modules: an image feature extractor, bottom-up saliency cues, task-relevant visual features, finite inhibition-of-return, and saccade size constraints. Even though no free parameters are fine-tuned for each specific task, species, or condition, the model produces fixation sequences resembling the universal properties of return fixations. These results provide initial steps toward a mechanistic understanding of the trade-off between rapid foveal recognition and the need to scrutinize previous fixation locations.
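
As a rough illustration of how these modules could interact, the following Python toy combines a saliency map and a task-relevance map into a priority map, applies a finite (decaying) inhibition-of-return trace, and penalizes large saccades; because the inhibition fades, previously visited locations can win again and produce return fixations. The maps, decay constants, and saccade-size parameter are illustrative assumptions, not the model released with the paper.

import numpy as np

rng = np.random.default_rng(1)
H = W = 32
saliency = rng.random((H, W))      # stand-in for bottom-up saliency
task_map = rng.random((H, W))      # stand-in for task-relevant features

ior = np.zeros((H, W))             # inhibition-of-return trace
ior_gain, ior_decay = 2.0, 0.8     # finite IOR: inhibition fades each fixation
sigma = 8.0                        # preferred saccade size (pixels)

yy, xx = np.mgrid[0:H, 0:W]
fix = (H // 2, W // 2)             # start at the image center
scanpath = [fix]

for _ in range(30):
    # Distance-dependent penalty discourages very long saccades
    dist = np.hypot(yy - fix[0], xx - fix[1])
    size_penalty = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    priority = (saliency + task_map) * size_penalty - ior_gain * ior
    priority[fix] = -np.inf        # never refixate the exact current pixel
    fix = tuple(int(i) for i in np.unravel_index(np.argmax(priority), priority.shape))
    scanpath.append(fix)
    ior *= ior_decay               # fading inhibition allows return fixations
    ior[fix] = 1.0

returns = len(scanpath) - len(set(scanpath))
print(f"{len(scanpath)} fixations, {returns} return fixations")

In the full model described above, the saliency and task maps are computed from the image by a feature extractor rather than drawn at random.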


Subject(s)
Fixation, Ocular , Saccades , Animals , Humans , Visual Fields , Primates , Cues
3.
Proc Natl Acad Sci U S A ; 119(16): e2118705119, 2022 Apr 19.
Article in English | MEDLINE | ID: mdl-35377737

ABSTRACT

The primate inferior temporal cortex contains neurons that respond more strongly to faces than to other objects. Termed "face neurons," these neurons are thought to be selective for faces as a semantic category. However, face neurons also partly respond to clocks, fruits, and single eyes, raising the question of whether they are better described as selective for visual features related to faces but dissociable from them. We used a recently described algorithm, XDream, to evolve stimuli that strongly activated face neurons. XDream leverages a generative neural network that is not limited to realistic objects. Human participants assessed images evolved for face neurons, images evolved for nonface neurons, and natural images depicting faces, cars, fruits, and other objects. Evolved images were consistently judged to be distinct from real faces. Images evolved for face neurons were rated as slightly more similar to faces than images evolved for nonface neurons. Across natural images, face neuron activity correlated with subjective "faceness" ratings, but this relationship did not hold for face neuron-evolved images, which triggered high activity yet were rated low in faceness. Our results suggest that so-called face neurons are better described as tuned to visual features rather than to semantic categories.


Subject(s)
Neurons , Visual Cortex , Algorithms , Face , Humans , Neurons/physiology , Semantics , Visual Cortex/cytology , Visual Cortex/physiology
5.
PLoS Comput Biol ; 16(6): e1007973, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32542056

ABSTRACT

A longstanding question in sensory neuroscience is what types of stimuli drive neurons to fire. The characterization of effective stimuli has traditionally been based on a combination of intuition, insights from previous studies, and luck. A new method termed XDream (EXtending DeepDream with real-time evolution for activation maximization) combined a generative neural network and a genetic algorithm in a closed loop to create strong stimuli for neurons in the macaque visual cortex. Here, we extensively and systematically evaluated the performance of XDream. We used ConvNet units as in silico models of neurons, enabling experiments that would be prohibitive with biological neurons. We evaluated how the method compares to brute-force search and how well it generalizes to different neurons and processing stages, and we explored design and parameter choices. XDream can efficiently find preferred features for visual units without any prior knowledge about them. It extrapolates to different layers, architectures, and developmental regimes, performing better than brute-force search, and often better than exhaustive sampling of more than 1 million images. Furthermore, XDream is robust to the choice of image generator, optimization algorithm, and hyperparameters, suggesting that its performance is locally near-optimal. Lastly, we found no significant advantage to problem-specific parameter tuning. These results establish expectations and provide practical recommendations for using XDream to investigate neural coding in biological preparations. Overall, XDream is an efficient, general, and robust algorithm for uncovering neuronal tuning preferences using a vast and diverse stimulus space. XDream is implemented in Python, released under the MIT License, and works on Linux, Windows, and macOS.
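
For readers unfamiliar with the approach, the following Python sketch mimics the closed loop in miniature: a genetic algorithm searches the latent space of a generator for codes whose images maximally activate an in silico "neuron," and the best evolved activation is compared against brute-force random sampling with the same image budget. The random linear "generator," the small random network standing in for a ConvNet unit, and all hyperparameters are assumptions for illustration; this is not the released XDream code or its API.

import numpy as np

rng = np.random.default_rng(2)
latent_dim, img_dim, hidden = 32, 256, 64

# Fixed random "generator" and a small random network standing in for a
# ConvNet unit (the in silico neuron); both are illustrative placeholders
G = rng.normal(size=(latent_dim, img_dim)) / np.sqrt(latent_dim)
W1 = rng.normal(size=(img_dim, hidden)) / np.sqrt(img_dim)
w2 = rng.normal(size=hidden) / np.sqrt(hidden)

def generate(z):
    return np.tanh(z @ G)                     # latent code(s) -> image(s)

def unit_activation(img):
    return np.maximum(img @ W1, 0.0) @ w2     # in silico neuron response

def evolve(pop_size=40, n_gen=50, elite=10, mut_sigma=0.3):
    """Closed loop: score a population of latent codes, keep the best,
    and fill the next generation with mutated copies of the survivors."""
    pop = rng.normal(size=(pop_size, latent_dim))
    for _ in range(n_gen):
        scores = unit_activation(generate(pop))
        parents = pop[np.argsort(scores)[-elite:]]
        children = parents[rng.integers(elite, size=pop_size - elite)]
        children = children + mut_sigma * rng.normal(size=children.shape)
        pop = np.vstack([parents, children])
    return unit_activation(generate(pop)).max()

# Brute-force baseline: score the same total number of random images
budget = 40 * 50
brute = unit_activation(generate(rng.normal(size=(budget, latent_dim)))).max()
print(f"evolved best: {evolve():.3f}   brute-force best ({budget} images): {brute:.3f}")

Elitism plus Gaussian mutation is only one possible optimizer; the paper reports robustness across several image generators and optimization algorithms.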


Subject(s)
Neurons/physiology , Photic Stimulation , Visual Cortex/physiology , Algorithms , Animals , Neural Networks, Computer , Visual Perception/physiology
6.
Cell ; 177(4): 999-1009.e10, 2019 May 2.
Article in English | MEDLINE | ID: mdl-31051108

ABSTRACT

What specific features should visual neurons encode, given the infinity of real-world images and the limited number of neurons available to represent them? We investigated neuronal selectivity in monkey inferotemporal cortex via the vast hypothesis space of a generative deep neural network, avoiding assumptions about features or semantic categories. A genetic algorithm searched this space for stimuli that maximized neuronal firing. This led to the evolution of rich synthetic images of objects with complex combinations of shapes, colors, and textures, sometimes resembling animals or familiar people, other times revealing novel patterns that did not map to any clear semantic category. These results expand our conception of the dictionary of features encoded in the cortex, and the approach can potentially reveal the internal representations of any system whose input can be captured by a generative model.


Subject(s)
Nerve Net/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Algorithms , Animals , Cerebral Cortex/physiology , Macaca mulatta/physiology , Male , Neurons/metabolism , Neurons/physiology
7.
Mol Ther Methods Clin Dev ; 5: 31-42, 2017 Jun 16.
Article in English | MEDLINE | ID: mdl-28480302

ABSTRACT

Loss-of-function mutations in the Fukutin-related protein (FKRP) gene cause limb-girdle muscular dystrophy type 2I (LGMD2I) and other forms of congenital muscular dystrophy-dystroglycanopathy that are associated with glycosylation defects in the α-dystroglycan (α-DG) protein. We evaluated systemic administration of a single dose of a recombinant adeno-associated virus serotype 9 (AAV9) vector expressing human FKRP in a mouse model of LGMD2I at various stages of disease progression. The results demonstrate rescue of functional glycosylation of α-DG and of muscle function, along with improvements in muscle structure at all disease stages relative to age-matched untreated cohorts. Nevertheless, mice treated at later stages of disease progression showed diminished benefit from the treatment. The results provide a proof of concept for future clinical trials in patients with FKRP-related muscular dystrophy and demonstrate that AAV-mediated gene therapy can potentially benefit patients at all stages of disease progression, although earlier intervention would be strongly preferred.
