Results 1 - 5 of 5
1.
J Neurosci ; 30(6): 2102-14, 2010 Feb 10.
Article in English | MEDLINE | ID: mdl-20147538

ABSTRACT

Area V2 is a major visual processing stage in mammalian visual cortex, but little is currently known about how V2 encodes information during natural vision. To determine how V2 represents natural images, we used a novel nonlinear system identification approach to obtain quantitative estimates of spatial tuning across a large sample of V2 neurons. We compared these tuning estimates with those obtained in area V1, in which the neural code is relatively well understood. We find two subpopulations of neurons in V2. Approximately one-half of the V2 neurons have tuning that is similar to V1. The other half of the V2 neurons are selective for complex features such as those that occur in natural scenes. These neurons are distinguished from V1 neurons mainly by the presence of stronger suppressive tuning. Selectivity in these neurons therefore reflects a balance between excitatory and suppressive tuning for specific features. These results provide a new perspective on how complex shape selectivity arises, emphasizing the role of suppressive tuning in determining stimulus selectivity in higher visual cortex.


Subject(s)
Neurons/physiology , Visual Cortex/physiology , Visual Perception , Animals , Cluster Analysis , Electrophysiology , Macaca mulatta , Male , Models, Neurological
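The abstract above attributes V2 selectivity to a balance between excitatory and suppressive tuning. A minimal sketch of that idea, with entirely hypothetical filters and a rectified excitatory-minus-suppressive response rule (an illustration, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical excitatory and suppressive spatial filters for one neuron.
n_pixels = 64
w_exc = rng.standard_normal(n_pixels)   # excitatory spatial tuning
w_sup = rng.standard_normal(n_pixels)   # suppressive spatial tuning

def response(image, gain_sup=0.5):
    """Rectified excitatory drive, reduced by rectified suppressive drive."""
    exc = max(0.0, float(w_exc @ image))
    sup = max(0.0, float(w_sup @ image))
    return max(0.0, exc - gain_sup * sup)

image = rng.standard_normal(n_pixels)
print(response(image) >= 0.0)  # responses are non-negative by construction
```

Under this toy rule, a stimulus that drives both filters strongly can evoke a weak response, which is the qualitative sense in which suppression shapes selectivity.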
2.
Neuron ; 63(6): 902-15, 2009 Sep 24.
Article in English | MEDLINE | ID: mdl-19778517

ABSTRACT

Recent studies have used fMRI signals from early visual areas to reconstruct simple geometric patterns. Here, we demonstrate a new Bayesian decoder that uses fMRI signals from early and anterior visual areas to reconstruct complex natural images. Our decoder combines three elements: a structural encoding model that characterizes responses in early visual areas, a semantic encoding model that characterizes responses in anterior visual areas, and prior information about the structure and semantic content of natural images. By combining all these elements, the decoder produces reconstructions that accurately reflect both the spatial structure and semantic category of the objects contained in the observed natural image. Our results show that prior information has a substantial effect on the quality of natural image reconstructions. We also demonstrate that much of the variance in the responses of anterior visual areas to complex natural images is explained by the semantic category of the image alone.


Subject(s)
Bayes Theorem , Brain Mapping , Image Processing, Computer-Assisted/methods , Models, Neurological , Visual Cortex/anatomy & histology , Humans , Magnetic Resonance Imaging/methods , Oxygen/blood , Photic Stimulation/methods , Psychophysics , Semantics , Visual Cortex/blood supply , Visual Cortex/physiology , Visual Perception
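The decoder described above combines an encoding model's likelihood with a prior over images. A toy sketch of that Bayesian scoring step (dimensions, weights, and the flat prior are illustrative assumptions, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(1)

# Score each candidate image by the likelihood of the measured fMRI pattern
# under a linear encoding model, plus a log prior; decode by MAP.
n_voxels, n_features, n_candidates = 100, 20, 50

W = rng.standard_normal((n_voxels, n_features))   # encoding-model weights
candidates = rng.standard_normal((n_candidates, n_features))
log_prior = np.log(np.full(n_candidates, 1.0 / n_candidates))  # flat prior

true_idx = 7
measured = W @ candidates[true_idx] + 0.1 * rng.standard_normal(n_voxels)

predicted = candidates @ W.T                      # (n_candidates, n_voxels)
# Gaussian log-likelihood up to a constant: -||measured - predicted||^2 / (2 sigma^2)
sigma2 = 0.01
log_lik = -np.sum((predicted - measured) ** 2, axis=1) / (2 * sigma2)
log_post = log_lik + log_prior

print(int(np.argmax(log_post)))  # with low noise, this recovers true_idx
```

In the paper the prior is informative (structure and semantic content of natural images), which is exactly where `log_prior` would stop being flat.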
3.
Nature ; 452(7185): 352-5, 2008 Mar 20.
Article in English | MEDLINE | ID: mdl-18322462

ABSTRACT

A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation, position and object category from activity in visual cortex. However, these studies typically used relatively simple stimuli (for example, gratings) or images drawn from fixed categories (for example, faces, houses), and decoding was based on previous measurements of brain activity evoked by those same stimuli or categories. To overcome these limitations, here we develop a decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas. These models describe the tuning of individual voxels for space, orientation and spatial frequency, and are estimated directly from responses evoked by natural images. We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. Identification is not a mere consequence of the retinotopic organization of visual areas; simpler receptive-field models that describe only spatial tuning yield much poorer identification performance. Our results suggest that it may soon be possible to reconstruct a picture of a person's visual experience from measurements of brain activity alone.


Subject(s)
Brain Mapping/methods , Brain/physiology , Visual Perception/physiology , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Nature , Photic Stimulation , Photography , Research Design
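The identification procedure described above can be sketched in a few lines: each voxel's receptive-field model predicts its response to every candidate image, and the observed activity pattern is matched to the best-fitting prediction. Everything here (sizes, linear models, correlation matching) is a simplified stand-in for the paper's fitted Gabor-based models:

```python
import numpy as np

rng = np.random.default_rng(2)

n_voxels, n_features, n_images = 200, 30, 120

rf_weights = rng.standard_normal((n_voxels, n_features))  # one RF model per voxel
images = rng.standard_normal((n_images, n_features))      # candidate novel images

shown = 42
observed = rf_weights @ images[shown] + 0.2 * rng.standard_normal(n_voxels)

predicted = images @ rf_weights.T                         # (n_images, n_voxels)

def corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([corr(observed, p) for p in predicted])
identified = int(np.argmax(scores))
print(identified == shown)
```

The abstract's point about spatial-only models yielding poorer identification corresponds here to replacing `rf_weights` with a less expressive model: the predicted patterns become less distinctive across candidates.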
4.
Neural Comput ; 20(6): 1537-64, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18194102

ABSTRACT

We describe the Berkeley wavelet transform (BWT), a two-dimensional triadic wavelet transform. The BWT comprises four pairs of mother wavelets at four orientations. Within each pair, one wavelet has odd symmetry, and the other has even symmetry. By translation and scaling of the whole set (plus a single constant term), the wavelets form a complete, orthonormal basis in two dimensions. The BWT shares many characteristics with the receptive fields of neurons in mammalian primary visual cortex (V1). Like these receptive fields, BWT wavelets are localized in space, tuned in spatial frequency and orientation, and form a set that is approximately scale invariant. The wavelets also have spatial frequency and orientation bandwidths that are comparable with biological values. Although the classical Gabor wavelet model is a more accurate description of the receptive fields of individual V1 neurons, the BWT has some interesting advantages. It is a complete, orthonormal basis and is therefore inexpensive to compute, manipulate, and invert. These properties make the BWT useful in situations where computational power or experimental data are limited, such as estimation of the spatiotemporal receptive fields of neurons.


Subject(s)
Models, Neurological , Neurons/physiology , Orientation , Signal Processing, Computer-Assisted , Visual Perception/physiology , Algorithms , Animals , Photic Stimulation/methods , Visual Cortex/cytology , Visual Fields
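The BWT itself is not reproduced here; this sketch only illustrates the property the abstract emphasizes: with a complete, orthonormal basis, analysis is a matrix product and exact inversion is just the transpose. A random orthonormal basis (via QR) stands in for the actual wavelets:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 81  # e.g. a 9x9 image, flattened

# Stand-in orthonormal basis (rows = flattened basis functions).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
basis = Q.T

image = rng.standard_normal(n)
coeffs = basis @ image           # analysis: project onto the basis
recon = basis.T @ coeffs         # synthesis: the transpose inverts exactly

print(np.allclose(recon, image))  # True: orthonormality makes inversion cheap
```

This is why the abstract calls the BWT "inexpensive to compute, manipulate, and invert": no pseudoinverse or iterative solver is needed, unlike with the non-orthogonal Gabor family.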
5.
Hum Brain Mapp ; 29(2): 142-56, 2008 Feb.
Article in English | MEDLINE | ID: mdl-17394212

ABSTRACT

Functional magnetic resonance imaging (fMRI) suffers from many problems that make signal estimation difficult. These include variation in the hemodynamic response across voxels and low signal-to-noise ratio (SNR). We evaluate several analysis techniques that address these problems for event-related fMRI. (1) Many fMRI analyses assume a canonical hemodynamic response function, but this assumption may lead to inaccurate data models. By adopting the finite impulse response model, we show that voxel-specific hemodynamic response functions can be estimated directly from the data. (2) There is a large amount of low-frequency noise fluctuation (LFF) in blood oxygenation level dependent (BOLD) time-series data. To compensate for this problem, we use polynomials as regressors for LFF. We show that this technique substantially improves SNR and is more accurate than high-pass filtering of the data. (3) Model overfitting is a problem for the finite impulse response model because of the low SNR of the BOLD response. To reduce overfitting, we estimate a hemodynamic response timecourse for each voxel and incorporate the constraint of time-event separability, the constraint that hemodynamic responses across event types are identical up to a scale factor. We show that this technique substantially improves the accuracy of hemodynamic response estimates and can be computed efficiently. For the analysis techniques we present, we evaluate improvement in modeling accuracy via 10-fold cross-validation.


Subject(s)
Artifacts , Brain Mapping , Brain/blood supply , Magnetic Resonance Imaging , Models, Neurological , Models, Theoretical , Brain/physiology , Cerebrovascular Circulation/physiology , Humans , Photic Stimulation , Time
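Two of the techniques above, a finite impulse response (FIR) model of the hemodynamic response and polynomial regressors for low-frequency drift, fit naturally into one least-squares design matrix. A schematic version with made-up sizes and simulated data (not the paper's datasets or exact formulation):

```python
import numpy as np

rng = np.random.default_rng(4)

n_timepoints, fir_length, poly_degree = 300, 15, 3

# Event indicator: one event every 30 timepoints.
onsets = np.zeros(n_timepoints)
onsets[np.arange(10, n_timepoints - fir_length, 30)] = 1.0

# FIR columns: shifted copies of the indicator, one per post-event lag,
# so each column's weight is one timepoint of the estimated HRF.
fir = np.column_stack([np.roll(onsets, lag) for lag in range(fir_length)])

# Polynomial drift regressors on a normalized time axis.
t = np.linspace(-1, 1, n_timepoints)
drift = np.column_stack([t ** d for d in range(poly_degree + 1)])

X = np.hstack([fir, drift])

# Simulated voxel: smooth HRF-like response + slow drift + noise.
true_hrf = np.exp(-0.5 * ((np.arange(fir_length) - 5) / 2.0) ** 2)
y = fir @ true_hrf + 0.5 * t ** 2 + 0.05 * rng.standard_normal(n_timepoints)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
est_hrf = beta[:fir_length]  # voxel-specific HRF, estimated from the data

print(np.max(np.abs(est_hrf - true_hrf)))  # small: drift absorbed by polynomials
```

The time-event separability constraint in the abstract would go one step further: a single HRF timecourse per voxel, scaled per event type, rather than a free FIR per condition.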