1.
ArXiv ; 2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37961740

ABSTRACT

In natural vision, feedback connections support versatile visual inference capabilities, such as making sense of occluded or noisy bottom-up sensory information or mediating purely top-down processes such as imagination. However, the mechanisms by which the feedback pathway learns to flexibly give rise to these capabilities are not clear. We propose that top-down effects emerge through alignment between the feedforward and feedback pathways, each optimizing its own objective. To achieve this co-optimization, we introduce Feedback-Feedforward Alignment (FFA), a learning algorithm that uses the feedback and feedforward pathways as each other's credit-assignment computational graphs, enabling alignment. We demonstrate the effectiveness of FFA in co-optimizing classification and reconstruction tasks on the widely used MNIST and CIFAR10 datasets. Notably, the alignment mechanism in FFA endows feedback connections with emergent visual inference functions, including denoising, resolving occlusions, hallucination, and imagination. Moreover, FFA is more biologically plausible to implement than traditional back-propagation (BP). By repurposing the computational graph of credit assignment into a goal-driven feedback pathway, FFA alleviates the weight transport problem encountered in BP, enhancing the bio-plausibility of the learning algorithm. Our study presents FFA as a promising proof of concept for how feedback connections in the visual cortex support flexible visual functions. This work also contributes to the broader study of the visual inference underlying perceptual phenomena and has implications for developing more biologically inspired learning algorithms.
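
A minimal NumPy sketch of the core FFA idea as described in this abstract, not the authors' implementation: two pathways, each trained on its own objective (classification vs. reconstruction), with each pathway's weights serving as the other's backward credit-assignment path instead of exact weight transposes. Layer sizes, the choice of reconstruction target, and learning rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 784, 256, 10, 1e-2

# Feedforward pathway (classification) and feedback pathway (reconstruction).
Wf1 = rng.normal(0, 0.05, (n_hid, n_in))   # input  -> hidden
Wf2 = rng.normal(0, 0.05, (n_out, n_hid))  # hidden -> output
Wb2 = rng.normal(0, 0.05, (n_hid, n_out))  # output -> hidden (feedback)
Wb1 = rng.normal(0, 0.05, (n_in, n_hid))   # hidden -> input  (feedback)

relu = lambda z: np.maximum(z, 0.0)

def ffa_step(x, y_onehot):
    """One FFA-style update for a single example (x: n_in x 1, y_onehot: n_out x 1)."""
    global Wf1, Wf2, Wb1, Wb2
    # Forward pass: classification.
    a1 = Wf1 @ x
    h = relu(a1)
    logits = Wf2 @ h
    p = np.exp(logits - logits.max()); p /= p.sum()
    # Feedback pass: reconstruct the input from the class probabilities.
    a2 = Wb2 @ p
    hb = relu(a2)
    x_hat = Wb1 @ hb
    # Credit assignment for the feedforward pathway, routed through the feedback
    # weights (Wb2 stands in for Wf2.T, so no weight transport is required).
    d_out = p - y_onehot
    d_h = (Wb2 @ d_out) * (a1 > 0)
    Wf2 -= lr * d_out @ h.T
    Wf1 -= lr * d_h @ x.T
    # Credit assignment for the feedback pathway, routed through the feedforward
    # weights (Wf1 stands in for Wb1.T).
    d_rec = x_hat - x
    d_hb = (Wf1 @ d_rec) * (a2 > 0)
    Wb1 -= lr * d_rec @ hb.T
    Wb2 -= lr * d_hb @ p.T
    return p, x_hat
```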

2.
iScience ; 26(1): 105788, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36594035

ABSTRACT

Among the smallest simian primates, the common marmoset offers promise as an experimentally tractable primate model for neuroscience with translational potential to humans. However, given its exceedingly small brain and body, the gap in perceptual and cognitive abilities between marmosets and humans requires study. Here, we performed a comparison of marmoset behavior to that of three other species in the domain of high-level vision. We first found that marmosets outperformed rats - a marmoset-sized rodent - on a simple recognition task, with marmosets robustly recognizing objects across views. On a more challenging invariant object recognition task used previously in humans, marmosets also achieved high performance. Notably, across hundreds of images, marmosets' image-by-image behavior was highly similar to that of humans - nearly as human-like as macaque behavior. Thus, core aspects of visual perception are conserved across monkeys and humans, and marmosets present salient behavioral advantages over other small model organisms for visual neuroscience.

3.
Nat Neurosci ; 22(6): 974-983, 2019 06.
Article in English | MEDLINE | ID: mdl-31036945

ABSTRACT

Non-recurrent deep convolutional neural networks (CNNs) are currently the best at modeling core object recognition, a behavior that is supported by the densely recurrent primate ventral stream, culminating in the inferior temporal (IT) cortex. If recurrence is critical to this behavior, then primates should outperform feedforward-only deep CNNs for images that require additional recurrent processing beyond the feedforward IT response. Here we first used behavioral methods to discover hundreds of these 'challenge' images. Second, using large-scale electrophysiology, we observed that behaviorally sufficient object identity solutions emerged ~30 ms later in the IT cortex for challenge images compared with primate performance-matched 'control' images. Third, these behaviorally critical late-phase IT response patterns were poorly predicted by feedforward deep CNN activations. Notably, very-deep CNNs and shallower recurrent CNNs better predicted these late IT responses, suggesting that there is a functional equivalence between additional nonlinear transformations and recurrence. Beyond arguing that recurrent circuits are critical for rapid object identification, our results provide strong constraints for future recurrent model development.


Subject(s)
Neural Networks, Computer , Recognition, Psychology/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Animals , Humans , Macaca mulatta
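
A hedged sketch of the kind of analysis this abstract describes: asking how well a model's image-by-image feature activations linearly predict IT population responses in early versus late time bins. The array names, time-bin choices, and ridge/cross-validation settings are my assumptions, not the study's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def neural_predictivity(model_features, neural_responses):
    """Median cross-validated R^2 across IT sites.

    model_features:   (n_images, n_features) activations from a candidate model
    neural_responses: (n_images, n_sites) firing rates in one time bin
    """
    scores = []
    for site in range(neural_responses.shape[1]):
        reg = RidgeCV(alphas=np.logspace(-3, 3, 7))
        r2 = cross_val_score(reg, model_features, neural_responses[:, site],
                             cv=5, scoring="r2")
        scores.append(r2.mean())
    return float(np.median(scores))

# Usage idea: compare predictivity of feedforward CNN features for an early bin
# (e.g., 70-100 ms) vs. a late bin (e.g., 150-200 ms); weaker prediction of the
# late bin, especially for "challenge" images, would mirror the reported result.
# early_score = neural_predictivity(cnn_features, it_rates_early)
# late_score  = neural_predictivity(cnn_features, it_rates_late)
```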
4.
J Am Assoc Lab Anim Sci ; 58(1): 16-20, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30538006

ABSTRACT

The typical daily water intake of common marmosets (Callithrix jacchus) in a research setting has not been well characterized. Because these New World primates are in demand as animal models for neurobehavioral experiments, which can include the potential use of fluid regulation for training, veterinary and research staff need to understand how marmosets keep hydrated under normal circumstances. In the current study, we measured the water consumption of older (age, 5 to 12 y; n = 11) and younger (age, 1 to 2 y; n = 11) marmosets every 3 h during the 12-h light phase in 2 different months (January and July). The overall daily water intake (mean ± 1 SD) was 61.3 ± 20.4 mL/kg (range, 36.3 to 99.0 mL/kg); water intake by an individual marmoset or cohoused pair was fairly consistent from day to day. Water intake did not change across the four 3-h periods measured during the day, and minimal water was consumed overnight when the room lights were off. In addition, daily water intake did not differ between the 2 mo of measurements. Older animals drank significantly more than the younger group, and weight was directly correlated with water intake. Water intake was not affected by body condition score or housing status. The variation in water consumption among marmosets underscores the need for individualization of fluid regulation guidelines.


Subject(s)
Animal Husbandry , Callithrix/physiology , Drinking , Aging , Animals , Body Weight , Circadian Rhythm , Female , Male
5.
Elife ; 7, 2018 11 28.
Article in English | MEDLINE | ID: mdl-30484773

ABSTRACT

Ventral visual stream neural responses are dynamic, even for static image presentations. However, dynamical neural models of visual cortex are lacking, as most progress has been made modeling static, time-averaged responses. Here, we studied population neural dynamics during face detection across three cortical processing stages. Remarkably, ~30 milliseconds after the initially evoked response, we found that neurons in intermediate-level areas decreased their responses to typical configurations of their preferred face parts relative to their responses for atypical configurations, even while neurons in higher areas achieved and maintained a preference for typical configurations. These hierarchical neural dynamics were inconsistent with standard feedforward circuits. Rather, recurrent models computing prediction errors between stages captured the observed temporal signatures. This model of neural dynamics, which simply augments the standard feedforward model of online vision, suggests that neural responses to static images may encode top-down prediction errors in addition to bottom-up feature estimates.


Subject(s)
Neurons/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Visual Perception/physiology , Animals , Brain Mapping , Face/physiology , Humans , Macaca mulatta/physiology , Models, Neurological , Photic Stimulation , Reaction Time/physiology
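
A minimal sketch of the "prediction errors between stages" idea this abstract invokes, in the spirit of predictive-coding dynamics: a higher stage iteratively predicts the lower stage's input, and the lower/intermediate stage's response carries the residual error. The specific update rule, weights, and time constants are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_low, n_high = 100, 20

W_down = rng.normal(0, 0.1, (n_low, n_high))  # higher stage's prediction of the lower stage
W_up = W_down.T                               # feedforward path carrying the residual error

def run_dynamics(stimulus_drive, n_steps=50, tau=0.1):
    """Iterate the two-stage loop and return the error and higher-stage time courses."""
    r_high = np.zeros(n_high)
    error_trace, high_trace = [], []
    for _ in range(n_steps):
        prediction = W_down @ r_high
        error = stimulus_drive - prediction    # what the lower/intermediate stage reports
        r_high += tau * (W_up @ error)         # higher stage refines its estimate
        error_trace.append(error.copy())
        high_trace.append(r_high.copy())
    return np.array(error_trace), np.array(high_trace)

# As the higher stage's estimate improves over iterations, the error signal (and hence
# the intermediate-stage response) shrinks after the initial transient, qualitatively
# matching the reported late-phase drop at intermediate processing stages.
```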
6.
J Neurosci ; 38(33): 7255-7269, 2018 08 15.
Article in English | MEDLINE | ID: mdl-30006365

ABSTRACT

Primates, including humans, can typically recognize objects in visual images at a glance despite naturally occurring identity-preserving image transformations (e.g., changes in viewpoint). A primary neuroscience goal is to uncover neuron-level mechanistic models that quantitatively explain this behavior by predicting primate performance for each and every image. Here, we applied this stringent behavioral prediction test to the leading mechanistic models of primate vision (specifically, deep, convolutional, artificial neural networks; ANNs) by directly comparing their behavioral signatures against those of humans and rhesus macaque monkeys. Using high-throughput data collection systems for human and monkey psychophysics, we collected more than one million behavioral trials from 1472 anonymous humans and five male macaque monkeys for 2400 images over 276 binary object discrimination tasks. Consistent with previous work, we observed that state-of-the-art deep, feedforward convolutional ANNs trained for visual categorization (termed DCNNIC models) accurately predicted primate patterns of object-level confusion. However, when we examined behavioral performance for individual images within each object discrimination task, we found that all tested DCNNIC models were significantly nonpredictive of primate performance and that this prediction failure was neither accounted for by simple image attributes nor rescued by simple model modifications. These results show that current DCNNIC models cannot account for the image-level behavioral patterns of primates and that new ANN models are needed to more precisely capture the neural mechanisms underlying primate object vision. To this end, large-scale, high-resolution primate behavioral benchmarks such as those obtained here could serve as direct guides for discovering such models. SIGNIFICANCE STATEMENT: Recently, specific feedforward deep convolutional artificial neural network (ANN) models have dramatically advanced our quantitative understanding of the neural mechanisms underlying primate core object recognition. In this work, we tested the limits of those ANNs by systematically comparing the behavioral responses of these models with the behavioral responses of humans and monkeys at the resolution of individual images. Using these high-resolution metrics, we found that all tested ANN models significantly diverged from primate behavior. Going forward, these high-resolution, large-scale primate behavioral benchmarks could serve as direct guides for discovering better ANN models of the primate visual system.


Subject(s)
Macaca mulatta/physiology , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Animals , Discrimination, Psychological/physiology , Humans , Male , Models, Neurological , Psychophysics , Species Specificity
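
A hedged sketch of an image-level behavioral comparison of the kind this abstract describes: compute each observer's (model's or primate's) per-image accuracy pooled over binary discrimination tasks, then correlate the image-by-image patterns. The actual study used more elaborate, noise-corrected signature metrics; this only conveys the basic logic.

```python
import numpy as np
from scipy.stats import pearsonr

def per_image_accuracy(trials):
    """trials: iterable of (image_id, correct) tuples -> dict mapping image_id to accuracy."""
    hits, counts = {}, {}
    for image_id, correct in trials:
        hits[image_id] = hits.get(image_id, 0) + int(correct)
        counts[image_id] = counts.get(image_id, 0) + 1
    return {k: hits[k] / counts[k] for k in hits}

def image_level_consistency(trials_a, trials_b):
    """Pearson correlation of per-image accuracies over images seen by both observers."""
    acc_a, acc_b = per_image_accuracy(trials_a), per_image_accuracy(trials_b)
    shared = sorted(set(acc_a) & set(acc_b))
    return pearsonr([acc_a[i] for i in shared], [acc_b[i] for i in shared])[0]
```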
7.
J Neurosci ; 36(50): 12729-12745, 2016 12 14.
Article in English | MEDLINE | ID: mdl-27810930

ABSTRACT

While early cortical visual areas contain fine-scale spatial organization of neuronal properties, such as orientation preference, the spatial organization of higher-level visual areas is less well understood. The fMRI demonstration of face-preferring regions in human ventral cortex and monkey inferior temporal cortex ("face patches") raises the question of how neural selectivity for faces is organized. Here, we targeted hundreds of spatially registered neural recordings to the largest fMRI-identified face-preferring region in monkeys, the middle face patch (MFP), and show that the MFP contains a graded enrichment of face-preferring neurons. At its center, as much as 93% of the sites we sampled responded twice as strongly to faces as to nonface objects. We estimate the maximum neurophysiological size of the MFP to be ∼6 mm in diameter, consistent with its previously reported size under fMRI. Importantly, face selectivity in the MFP varied strongly even between neighboring sites. Additionally, extremely face-selective sites were ∼40 times more likely to be present inside the MFP than outside. These results provide the first direct quantification of the size and neural composition of the MFP, showing that the cortical tissue localized to the fMRI-defined region contains a very high fraction of face-preferring sites near its center and a monotonic decrease in that fraction along any radial spatial axis. SIGNIFICANCE STATEMENT: The underlying organization of neurons that gives rise to the large spatial regions of activity observed with fMRI is not well understood. Neurophysiological studies that have targeted the fMRI-identified face patches in monkeys have provided evidence for both large-scale clustering and a heterogeneous spatial organization. Here we used a novel x-ray imaging system to spatially map the responses of hundreds of sites in and around the middle face patch. We observed that the face-selective signal localized to the middle face patch was characterized by a gradual spatial enrichment. Furthermore, strongly face-selective sites were ∼40 times more likely to be found inside the patch than outside of the patch.


Subject(s)
Face , Recognition, Psychology/physiology , Temporal Lobe/physiology , Animals , Brain Mapping , Electrophysiological Phenomena/physiology , Female , Macaca mulatta , Magnetic Resonance Imaging , Male , Models, Neurological , Neurons/physiology , Photic Stimulation , Visual Cortex/physiology
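
A hedged sketch of how site-level face preference might be quantified from responses to face vs. nonface images; the exact index and criterion used in the study may differ. This simply computes a standard contrast index and flags sites whose mean face response is at least twice the nonface response.

```python
import numpy as np

def face_selectivity_index(face_rates, nonface_rates):
    """Contrast index (F - N) / (F + N), computed from mean evoked firing rates."""
    f, n = np.mean(face_rates), np.mean(nonface_rates)
    return (f - n) / (f + n + 1e-12)

def fraction_face_preferring(sites, ratio=2.0):
    """sites: list of (face_rates, nonface_rates) arrays, one pair per recording site.
    Returns the fraction of sites meeting the 'twice as strong to faces' criterion."""
    flags = [np.mean(f) >= ratio * np.mean(n) for f, n in sites]
    return float(np.mean(flags))
```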
8.
J Neurosci ; 33(38): 15207-19, 2013 Sep 18.
Article in English | MEDLINE | ID: mdl-24048850

ABSTRACT

Maps obtained by functional magnetic resonance imaging (fMRI) are thought to reflect the underlying spatial layout of neural activity. However, previous studies have not been able to directly compare fMRI maps to high-resolution neurophysiological maps, particularly in higher level visual areas. Here, we used a novel stereo microfocal x-ray system to localize thousands of neural recordings across monkey inferior temporal cortex (IT), construct large-scale maps of neuronal object selectivity at subvoxel resolution, and compare those neurophysiology maps with fMRI maps from the same subjects. While neurophysiology maps contained reliable structure at the sub-millimeter scale, fMRI maps of object selectivity contained information at larger scales (>2.5 mm) and were only partly correlated with raw neurophysiology maps collected in the same subjects. However, spatial smoothing of neurophysiology maps more than doubled that correlation, while a variety of alternative transforms led to no significant improvement. Furthermore, raw spiking signals, once spatially smoothed, were as predictive of fMRI maps as local field potential signals. Thus, fMRI of the inferior temporal lobe reflects a spatially low-passed version of neurophysiology signals. These findings strongly validate the widespread use of fMRI for detecting large (>2.5 mm) neuronal domains of object selectivity but show that a complete understanding of even the most pure domains (e.g., faces vs nonface objects) requires investigation at fine scales that can currently only be obtained with invasive neurophysiological methods.


Subject(s)
Action Potentials/physiology , Brain Mapping , Magnetic Resonance Imaging , Neurons/physiology , Temporal Lobe , Animals , Humans , Image Processing, Computer-Assisted , Macaca mulatta , Male , Oxygen/blood , Statistics, Nonparametric , Temporal Lobe/blood supply , Temporal Lobe/cytology , Temporal Lobe/physiology
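
A hedged sketch of the map-comparison logic in this abstract: spatially smooth a fine-scale neurophysiology selectivity map and correlate it with the fMRI map of the same region. The smoothing kernel width, pixel scale, and map registration are assumptions here, not the study's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import pearsonr

def map_correlation(ephys_map, fmri_map, smooth_mm=0.0, mm_per_pixel=0.25):
    """Correlate two co-registered 2D selectivity maps, optionally smoothing the
    neurophysiology map with a Gaussian of sigma = smooth_mm (in millimetres)."""
    if smooth_mm > 0:
        ephys_map = gaussian_filter(ephys_map, sigma=smooth_mm / mm_per_pixel)
    return pearsonr(ephys_map.ravel(), fmri_map.ravel())[0]

# Usage idea: sweep smooth_mm from 0 up to a few millimetres and check whether the
# correlation with the fMRI map roughly doubles, as reported for raw vs. smoothed maps.
```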
9.
J Neurophysiol ; 109(11): 2732-8, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23486204

ABSTRACT

During sleep, changes in brain rhythms and neuromodulator levels in cortex modify the properties of individual neurons and the network as a whole. In principle, network-level interactions during sleep can be studied by observing covariation in spontaneous activity between neurons. Spontaneous activity, however, reflects only a portion of the effective functional connectivity that is activated by external and internal inputs (e.g., sensory stimulation, motor behavior, and mental activity), and it has been shown that neural responses are less correlated during external sensory stimulation than during spontaneous activity. Here, we took advantage of the unique property that the auditory cortex continues to respond to sounds during sleep and used external acoustic stimuli to activate cortical networks in order to study neural interactions during sleep. We found that during slow-wave sleep (SWS), local (neuron-neuron) correlations are not reduced by acoustic stimulation, remaining higher than in wakefulness and rapid eye movement sleep and similar to correlations during spontaneous activity. This high level of correlation during SWS complements previous work reporting elevated global (local field potential-local field potential) correlations during sleep. Contrary to the prediction that slow oscillations in SWS would increase neural correlations during spontaneous activity, we found little change in neural correlations outside of periods of acoustic stimulation. Rather, these findings suggest that functional connections recruited in sound processing are modified during SWS and that slow rhythms, which in general are suppressed by sensory stimulation, are not the sole mechanism leading to elevated network correlations during sleep.


Subject(s)
Auditory Cortex/physiology , Neurons/physiology , Sleep, REM/physiology , Acoustic Stimulation , Animals , Auditory Cortex/cytology , Brain Waves , Callithrix , Female , Male , Nerve Net/cytology , Nerve Net/physiology , Wakefulness
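
A hedged sketch of the pairwise (neuron-neuron) correlation measure this abstract refers to: correlate trial-by-trial (or bin-by-bin) spike counts for every pair of simultaneously recorded neurons and average. The binning and any correction steps are assumptions, not the study's exact procedure.

```python
import numpy as np

def mean_pairwise_correlation(spike_counts):
    """spike_counts: (n_trials_or_bins, n_neurons) matrix of spike counts.
    Returns the mean off-diagonal Pearson correlation across neuron pairs."""
    c = np.corrcoef(spike_counts.T)             # (n_neurons, n_neurons)
    upper = c[np.triu_indices_from(c, k=1)]     # each pair counted once, diagonal excluded
    return float(np.nanmean(upper))

# Usage idea: compute this statistic separately for spike counts taken during acoustic
# stimulation vs. silence, within slow-wave sleep, REM sleep, and wakefulness, and compare.
```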
10.
J Neurosci ; 32(47): 16666-82, 2012 Nov 21.
Article in English | MEDLINE | ID: mdl-23175821

ABSTRACT

Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of "face-selective" cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face-selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full-face images, and face images lacking only this feature led to longer-latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features, consistent with parts-based models, grounding investigation of how the presence of a face is first inferred in the IT face-processing hierarchy.


Subject(s)
Eye , Face , Recognition, Psychology/physiology , Visual Perception/physiology , Algorithms , Animals , Electrodes , Female , Image Processing, Computer-Assisted , Linear Models , Macaca mulatta , Magnetic Resonance Imaging , Male , Neurons , Normal Distribution , Photic Stimulation , Retina/physiology , Visual Fields
11.
J Neurosci ; 31(8): 2965-73, 2011 Feb 23.
Article in English | MEDLINE | ID: mdl-21414918

ABSTRACT

How sounds are processed by the brain during sleep is an important question for understanding how we perceive the sensory environment in this unique behavioral state. While human behavioral data have indicated selective impairments of sound processing during sleep, brain imaging and neurophysiology studies have reported that overall neural activity in auditory cortex during sleep is surprisingly similar to that during wakefulness. This responsiveness to external stimuli leaves open the question of how neural responses during sleep differ, if at all, from wakefulness. Using extracellular neural recordings in the primary auditory cortex of naturally sleeping common marmosets, we show that slow-wave sleep (SWS) alters neural responses in the primate auditory cortex in two specific ways. SWS reduced the sensitivity of auditory cortex such that quiet sounds elicited weak responses in SWS compared with wakefulness, while loud sounds evoked similar responses in SWS and wakefulness. Furthermore, SWS reduced the extent of sound-evoked response suppression. This pattern of alterations was not observed during rapid eye movement sleep and could not be easily explained by the presence of slow rhythms in SWS. The alteration of excitatory and inhibitory responses during SWS suggests limitations in auditory processing and provides novel insights for understanding why certain sounds are processed while others are missed during deep sleep.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Sensory Receptor Cells/physiology , Sleep/physiology , Animals , Callithrix , Female , Male
12.
J Neurosci ; 28(53): 14467-80, 2008 Dec 31.
Article in English | MEDLINE | ID: mdl-19118181

ABSTRACT

Most sensory stimuli do not reach conscious perception during sleep. It has been thought that the thalamus prevents the relay of sensory information to cortex during sleep, but the consequences for cortical responses to sensory signals in this physiological state remain unclear. We recorded from two auditory cortical areas downstream of the thalamus in naturally sleeping marmoset monkeys. Single neurons in primary auditory cortex either increased or decreased their responses during sleep compared with wakefulness. In lateral belt, a secondary auditory cortical area, the response modulation was also bidirectional and showed no clear systematic depressive effect of sleep. When averaged across neurons, sound-evoked activity in these two auditory cortical areas was surprisingly well preserved during sleep. Neural responses to acoustic stimulation were present during both slow-wave and rapid eye movement sleep, were repeatedly observed over multiple sleep cycles, and showed discharge patterns similar to the responses recorded during wakefulness in the same neurons. Our results suggest that the thalamus is not as effective a gate for the flow of sensory information as previously thought. At the cortical stage, a novel pattern of activation/deactivation appears across neurons. Because the neural signal reaches as far as secondary auditory cortex, this leaves open the possibility of altered sensory processing of auditory information during sleep.


Subject(s)
Action Potentials/physiology , Auditory Cortex/physiology , Brain Mapping , Sleep/physiology , Acoustic Stimulation/methods , Animals , Auditory Cortex/cytology , Auditory Pathways , Behavior, Animal , Callithrix , Electroencephalography/methods , Electromyography/methods , Eye Movements/physiology , Functional Laterality , Neurons/physiology , Psychoacoustics , Statistics, Nonparametric , Wakefulness/physiology