1.
Neuron. 2023 Sep 6;111(17):2742-2755.e4.
Article in English | MEDLINE | ID: mdl-37451264

ABSTRACT

Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model's internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.
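As an illustrative sketch (not the authors' published code), the three-layer architecture described above can be written as a small convolutional model that maps a short stimulus-movie history to non-negative firing rates. The layer count follows the abstract, but the filter sizes, channel counts, and softplus nonlinearity below are assumptions.

```python
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    """Toy three-layer CNN mapping a stimulus-movie history to firing rates.
    Filter sizes, channel counts, and the nonlinearity are illustrative
    assumptions, not the published model's parameters."""
    def __init__(self, n_cells, n_frames=40):
        super().__init__()
        self.layer1 = nn.Conv2d(n_frames, 8, kernel_size=15)   # spatiotemporal filters
        self.layer2 = nn.Conv2d(8, 8, kernel_size=11)          # "model interneurons"
        self.readout = nn.Linear(8 * 26 * 26, n_cells)         # one unit per ganglion cell
        self.act = nn.Softplus()

    def forward(self, movie):                      # movie: (batch, n_frames, 50, 50)
        x = self.act(self.layer1(movie))           # -> (batch, 8, 36, 36)
        x = self.act(self.layer2(x))               # -> (batch, 8, 26, 26)
        return self.act(self.readout(x.flatten(1)))  # non-negative firing rates

model = RetinaCNN(n_cells=5)
rates = model(torch.randn(2, 40, 50, 50))          # (2, 5) predicted firing rates
```

In a sketch like this, the middle layer plays the role of the "model interneurons" whose activations could be compared against separately recorded interneurons, as the abstract describes.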


Subject(s)
Models, Neurological; Retina; Retina/physiology; Neurons/physiology; Interneurons/physiology
2.
Proc Natl Acad Sci U S A. 2022 Jan 25;119(4).
Article in English | MEDLINE | ID: mdl-35064086

ABSTRACT

Sensory receptive fields combine features that originate in different neural pathways. Retinal ganglion cell receptive fields compute intensity changes across space and time using a peripheral region known as the surround, a property that improves information transmission about natural scenes. The visual features that construct this fundamental property have not been quantitatively assigned to specific interneurons. Here, we describe a generalizable approach using simultaneous intracellular and multielectrode recording to directly measure and manipulate the sensory feature conveyed by a neural pathway to a downstream neuron. By directly controlling the gain of individual interneurons in the circuit, we show that rather than transmitting different temporal features, inhibitory horizontal cells and linear amacrine cells synchronously create the linear surround at different spatial scales, and that these two components fully account for the surround. By analyzing a large population of ganglion cells, we observe substantial diversity in the relative contributions of amacrine and horizontal cell visual features, a diversity that nonetheless allows individual cells to increase information transmission under the statistics of natural scenes. Established theories of efficient coding have shown that optimal information transmission under natural scenes allows a diverse set of receptive fields. Our results provide a mechanism for this theory, showing how distinct neural pathways synthesize a sensory computation and how this architecture both generates computational diversity and achieves the objective of high information transmission.
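A minimal numerical sketch of the central idea, that the linear surround is the sum of two components at different spatial scales, is given below. The Gaussian widths and pathway weights are illustrative assumptions, not measured values.

```python
import numpy as np

def gaussian(x, sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

x = np.linspace(-500, 500, 1001)        # position (micrometers), illustrative
center = gaussian(x, sigma=50)          # excitatory receptive-field center
horizontal = gaussian(x, sigma=300)     # broad surround component (horizontal cells)
amacrine = gaussian(x, sigma=120)       # narrower surround component (amacrine cells)

# Relative weights of the two surround pathways; varying w_h vs. w_a mimics the
# cell-to-cell diversity described in the abstract.
w_h, w_a = 0.4, 0.3
surround = w_h * horizontal + w_a * amacrine
receptive_field = center - surround     # classic center-minus-surround profile
```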


Subject(s)
Models, Biological; Retina/physiology; Visual Pathways; Algorithms; Amacrine Cells/metabolism; Interneurons/metabolism; Retinal Ganglion Cells/metabolism; Retinal Horizontal Cells/metabolism; Synaptic Transmission
3.
Article in English | MEDLINE | ID: mdl-38013729

ABSTRACT

The visual system processes stimuli over a wide range of spatiotemporal scales, with individual neurons receiving input from tens of thousands of neurons whose dynamics range from milliseconds to tens of seconds. This makes it challenging to create models that both accurately capture visual computations and are mechanistically interpretable. Here we present a model of salamander retinal ganglion cell spiking responses, recorded with a multielectrode array, that captures natural scene responses and slow adaptive dynamics. The model consists of a three-layer convolutional neural network (CNN) modified to include local recurrent synaptic dynamics taken from a linear-nonlinear-kinetic (LNK) model [1]. We presented alternating natural scenes and uniform-field white noise stimuli designed to engage slow contrast adaptation. To overcome difficulties fitting slow and fast dynamics together, we first optimized all fast spatiotemporal parameters, then separately optimized the slow recurrent synaptic parameters. The resulting full model reproduces a wide range of retinal computations and is mechanistically interpretable, having internal units that correspond to retinal interneurons with biophysically modeled synapses. This model allows us to study the contribution of model units to any retinal computation, and to examine how long-term adaptation changes the retinal neural code for natural scenes through selective adaptation of retinal pathways.
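As a rough sketch of the kind of local recurrent synaptic dynamics described above, the snippet below implements a simplified first-order kinetic synapse with resting, active, and inactivated states. The state structure and rate constants are assumptions for illustration and simplify the published LNK formulation; only the slow-recovery idea that gives rise to adaptation-like behavior is kept.

```python
import numpy as np

def kinetic_synapse(u, dt=0.01, k_a=50.0, k_i=2.0, k_r=0.1):
    """Simplified kinetic synapse. A non-negative drive u(t) moves occupancy
    from a resting to an active state; the active state decays to an
    inactivated state, which recovers slowly (k_r), producing slow
    contrast-adaptation-like dynamics. Rate constants are illustrative."""
    R, A, I = 1.0, 0.0, 0.0                  # state occupancies, summing to 1
    out = np.zeros_like(u)
    for t, drive in enumerate(u):
        dR = -k_a * drive * R + k_r * I
        dA = k_a * drive * R - k_i * A
        dI = k_i * A - k_r * I
        R, A, I = R + dt * dR, A + dt * dA, I + dt * dI
        out[t] = A                           # synaptic output tracks the active state
    return out

drive = np.abs(np.random.randn(2000)) * 0.2  # rectified input drive (arbitrary units)
response = kinetic_synapse(drive)
```

In a full model of the kind the abstract describes, a block like this would sit recurrently inside the CNN's hidden layers; its slow parameters could then be fitted in a second stage, after the fast spatiotemporal filters.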

4.
Adv Neural Inf Process Syst. 2019 Dec;32:8537-8547.
Article in English | MEDLINE | ID: mdl-35283616

ABSTRACT

Recently, deep feedforward neural networks have achieved considerable success in modeling biological sensory processing, in terms of reproducing the input-output map of sensory neurons. However, such models raise profound questions about the very nature of explanation in neuroscience. Are we simply replacing one complex system (a biological circuit) with another (a deep network), without understanding either? Moreover, beyond neural representations, are the deep network's computational mechanisms for generating neural responses the same as those in the brain? Without a systematic approach to extracting and understanding computational mechanisms from deep neural network models, it can be difficult both to assess the utility of deep learning approaches in neuroscience and to extract experimentally testable hypotheses from deep networks. We develop such a systematic approach by combining dimensionality reduction and modern attribution methods for determining the relative importance of interneurons for specific visual computations. We apply this approach to deep network models of the retina, revealing a conceptual understanding of how the retina acts as a predictive feature extractor that signals deviations from expectations for diverse spatiotemporal stimuli. For each stimulus, our extracted computational mechanisms are consistent with prior scientific literature, and in one case yield a new mechanistic hypothesis. Overall, this work not only yields insights into the computational mechanisms underlying the striking predictive capabilities of the retina, but also places the framework of deep networks as neuroscientific models on firmer theoretical foundations by providing a new roadmap to go beyond comparing neural representations to extracting and understanding computational mechanisms.
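One simple way to realize the attribution-plus-dimensionality-reduction idea is a gradient-times-activation attribution over a hidden layer, followed by a low-rank PCA across stimuli. The sketch below shows that pattern under stated assumptions; the paper's exact attribution and reduction methods may differ, and the toy model here stands in for a trained retina CNN.

```python
import torch
import torch.nn as nn

def hidden_unit_attributions(model, hidden_module, movie, cell_index):
    """Gradient-x-activation attribution of one output cell to hidden units."""
    saved = {}
    def hook(_, __, output):
        output.retain_grad()
        saved["act"] = output
    handle = hidden_module.register_forward_hook(hook)
    model(movie)[:, cell_index].sum().backward()    # backprop from one cell's response
    handle.remove()
    act = saved["act"]
    return (act * act.grad).detach()                # per-unit, per-location attribution

# Toy three-layer model standing in for a trained retina CNN (hypothetical sizes).
model = nn.Sequential(
    nn.Conv2d(40, 8, 15), nn.Softplus(),
    nn.Conv2d(8, 8, 11), nn.Softplus(),             # index 2: "interneuron" layer
    nn.Flatten(), nn.Linear(8 * 26 * 26, 5), nn.Softplus())

attr = hidden_unit_attributions(model, model[2], torch.randn(16, 40, 50, 50), 0)
_, _, modes = torch.pca_lowrank(attr.flatten(1), q=3)  # low-dimensional attribution modes
```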

5.
Adv Neural Inf Process Syst. 2016;29:1369-1377.
Article in English | MEDLINE | ID: mdl-28729779

ABSTRACT

A central challenge in sensory neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs: they are less susceptible to overfitting than their LN counterparts when trained on small amounts of data, and generalize better when tested on stimuli drawn from a different distribution (e.g. between natural scenes and white noise). An examination of the learned CNNs reveals several properties. First, a richer set of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying inputs originate from feedforward inhibition, similar to known retinal mechanisms. Third, the injection of latent noise sources in intermediate layers enables our model to capture the sub-Poisson spiking variability observed in retinal ganglion cells. Fourth, augmenting our CNNs with recurrent lateral connections enables them to capture contrast adaptation as an emergent property of accurately describing retinal responses to natural scenes. These methods can be readily generalized to other sensory modalities and stimulus ensembles. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also can yield information about the circuit's internal structure and function.
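For context on the comparison described above, a linear-nonlinear (LN) baseline of the kind the CNNs are measured against can be sketched in a few lines. The random filter and softplus nonlinearity below are illustrative assumptions rather than fitted components.

```python
import numpy as np

def ln_model(movie, spatiotemporal_filter):
    """LN baseline: project each stimulus history onto one spatiotemporal
    filter, then pass the result through a static nonlinearity.
    movie: (time, height, width); filter: (n_frames, height, width)."""
    n_frames = spatiotemporal_filter.shape[0]
    drive = np.array([
        np.sum(movie[t - n_frames:t] * spatiotemporal_filter)
        for t in range(n_frames, movie.shape[0])
    ])
    return np.log1p(np.exp(drive))          # softplus nonlinearity -> firing rate

movie = np.random.randn(200, 50, 50)        # illustrative stimulus movie
filt = np.random.randn(40, 50, 50) * 1e-3   # illustrative (not fitted) filter
rates = ln_model(movie, filt)               # predicted rate for one ganglion cell
```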
