2.
Neuron ; 111(17): 2742-2755.e4, 2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37451264

ABSTRACT

Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model's internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.
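As a rough illustration of the kind of three-layer network described above, the sketch below defines a minimal three-layer CNN that maps a spatiotemporal stimulus clip to firing rates of several model ganglion cells. The layer widths, filter sizes, and temporal-history length are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ThreeLayerRetinaCNN(nn.Module):
    """Minimal sketch of a three-layer CNN retina model (illustrative sizes only)."""

    def __init__(self, n_cells=8, history=40):
        super().__init__()
        # Layers 1 and 2 play the role of model interneurons (bipolar/amacrine-like units)
        self.layer1 = nn.Sequential(nn.Conv2d(history, 8, kernel_size=15), nn.Softplus())
        self.layer2 = nn.Sequential(nn.Conv2d(8, 8, kernel_size=11), nn.Softplus())
        # Layer 3 reads out firing rates of the modeled ganglion cells
        self.readout = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_cells), nn.Softplus())

    def forward(self, movie):
        # movie: (batch, history, height, width) stimulus clip; output: (batch, n_cells) rates
        return self.readout(self.layer2(self.layer1(movie)))

# Example: predict responses of 8 model ganglion cells to a batch of 50x50 stimulus clips
model = ThreeLayerRetinaCNN()
rates = model(torch.randn(4, 40, 50, 50))
```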


Subject(s)
Models, Neurological , Retina , Retina/physiology , Neurons/physiology , Interneurons/physiology
4.
Article in English | MEDLINE | ID: mdl-38013729

ABSTRACT

The visual system processes stimuli over a wide range of spatiotemporal scales, with individual neurons receiving input from tens of thousands of neurons whose dynamics range from milliseconds to tens of seconds. This poses a challenge for creating models that both accurately capture visual computations and are mechanistically interpretable. Here we present a model of salamander retinal ganglion cell spiking responses, recorded with a multielectrode array, that captures both natural scene responses and slow adaptive dynamics. The model consists of a three-layer convolutional neural network (CNN) modified to include local recurrent synaptic dynamics taken from a linear-nonlinear-kinetic (LNK) model [1]. We presented alternating natural scenes and uniform field white noise stimuli designed to engage slow contrast adaptation. To overcome difficulties in fitting slow and fast dynamics together, we first optimized all fast spatiotemporal parameters and then separately optimized the recurrent slow synaptic parameters. The resulting full model reproduces a wide range of retinal computations and is mechanistically interpretable, having internal units that correspond to retinal interneurons with biophysically modeled synapses. This model allows us to study the contribution of model units to any retinal computation and to examine how long-term adaptation changes the retinal neural code for natural scenes through selective adaptation of retinal pathways.
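To make the "local recurrent synaptic dynamics" concrete, here is a minimal sketch of a resource-depletion kinetic synapse of the general kind used in LNK-style models; the state variables and rate constants are illustrative assumptions, not the parameters fitted in this work.

```python
import numpy as np

def kinetic_synapse(drive, dt=0.01, k_release=20.0, k_recover=0.5):
    """Minimal sketch of a depletion/recovery kinetic synapse: presynaptic drive
    releases a limited resource, which recovers slowly, producing slow adaptation.
    Rate constants are illustrative, not fitted values."""
    available = 1.0                                   # fraction of releasable resource
    output = np.zeros_like(drive, dtype=float)
    for t, u in enumerate(drive):
        release = k_release * max(u, 0.0) * available * dt
        release = min(release, available)             # cannot release more than is available
        available += k_recover * (1.0 - available) * dt - release
        output[t] = release
    return output

# Example: a step of strong drive produces a transient response that adapts over seconds
drive = np.concatenate([np.zeros(200), np.ones(800)])
response = kinetic_synapse(drive)
```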

5.
Neuron ; 105(2): 246-259.e8, 2020 Jan 22.
Article in English | MEDLINE | ID: mdl-31786013

ABSTRACT

Though the temporal precision of neural computation has been studied intensively, a data-driven determination of this precision remains a fundamental challenge. Reproducible spike patterns may be obscured on single trials by uncontrolled temporal variability in behavior and cognition and may not be time locked to measurable signatures in behavior or local field potentials (LFP). To overcome these challenges, we describe a general-purpose time warping framework that reveals precise spike-time patterns in an unsupervised manner, even when these patterns are decoupled from behavior or are temporally stretched across single trials. We demonstrate this method across diverse systems: cued reaching in nonhuman primates, motor sequence production in rats, and olfaction in mice. This approach flexibly uncovers diverse dynamical firing patterns, including pulsatile responses to behavioral events, LFP-aligned oscillatory spiking, and even unanticipated patterns, such as 7 Hz oscillations in rat motor cortex that are not time locked to measured behaviors or LFP.
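For intuition, the sketch below implements the simplest member of the time-warping model family described above, shift-only alignment, which alternates between estimating per-trial shifts and re-estimating a template. It is a toy stand-in under that assumption, not the full piecewise-linear warping method.

```python
import numpy as np

def fit_shift_warp(trials, max_shift=20, n_iter=10):
    """Toy sketch of shift-only trial alignment: find the integer shift per trial
    that best matches a running template, then update the template.
    `trials` has shape (n_trials, n_timebins) of smoothed firing rates."""
    template = trials.mean(axis=0)
    shifts = np.zeros(len(trials), dtype=int)
    for _ in range(n_iter):
        for i, trial in enumerate(trials):
            scores = [np.dot(np.roll(trial, -s), template)
                      for s in range(-max_shift, max_shift + 1)]
            shifts[i] = np.argmax(scores) - max_shift
        # Re-estimate the template from the aligned trials
        template = np.mean([np.roll(t, -s) for t, s in zip(trials, shifts)], axis=0)
    return shifts, template
```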


Subject(s)
Action Potentials/physiology , Neurons/physiology , Pattern Recognition, Automated/methods , Amyloid beta-Protein Precursor/genetics , Animals , Gene Knock-In Techniques , Macaca mulatta , Male , Mice , Mice, Transgenic , Microinjections , Motor Cortex/physiology , Peptide Fragments/genetics , Primary Cell Culture , Proteins/genetics , Rats , Time Factors
6.
Adv Neural Inf Process Syst ; 32: 8537-8547, 2019 Dec.
Article in English | MEDLINE | ID: mdl-35283616

ABSTRACT

Recently, deep feedforward neural networks have achieved considerable success in modeling biological sensory processing, in terms of reproducing the input-output map of sensory neurons. However, such models raise profound questions about the very nature of explanation in neuroscience. Are we simply replacing one complex system (a biological circuit) with another (a deep network), without understanding either? Moreover, beyond neural representations, are the deep network's computational mechanisms for generating neural responses the same as those in the brain? Without a systematic approach to extracting and understanding computational mechanisms from deep neural network models, it can be difficult both to assess the degree of utility of deep learning approaches in neuroscience and to extract experimentally testable hypotheses from deep networks. We develop such a systematic approach by combining dimensionality reduction and modern attribution methods for determining the relative importance of interneurons for specific visual computations. We apply this approach to deep network models of the retina, revealing a conceptual understanding of how the retina acts as a predictive feature extractor that signals deviations from expectations for diverse spatiotemporal stimuli. For each stimulus, our extracted computational mechanisms are consistent with prior scientific literature, and in one case yield a new mechanistic hypothesis. Overall, this work not only yields insights into the computational mechanisms underlying the striking predictive capabilities of the retina, but also places the framework of deep networks as neuroscientific models on firmer theoretical foundations by providing a new roadmap to go beyond comparing neural representations to extracting and understanding computational mechanisms.
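As one concrete example of such an attribution method, the sketch below computes an integrated-gradients style attribution of a model output over intermediate (interneuron-like) activations. The paper's exact combination of dimensionality reduction and attribution may differ, and `output_from_hidden` is a hypothetical helper that maps hidden activations to the model's output.

```python
import torch

def hidden_unit_attribution(output_from_hidden, hidden, steps=50):
    """Sketch of integrated gradients over intermediate activations: attribute a
    model ganglion cell's response to each hidden (interneuron-like) unit by
    integrating gradients along a path from a zero baseline to the actual activations."""
    baseline = torch.zeros_like(hidden)
    accumulated = torch.zeros_like(hidden)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (hidden - baseline)).detach().requires_grad_(True)
        out = output_from_hidden(point)
        grad, = torch.autograd.grad(out.sum(), point)
        accumulated += grad / steps
    return (hidden - baseline) * accumulated   # importance of each hidden unit
```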

7.
Adv Neural Inf Process Syst ; 2019: 15629-15641, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32782422

ABSTRACT

Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with those of the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks, and characterize their nonlinear dynamics. We find that the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics) often appears universal across all architectures.
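For reference, canonical correlations between two sets of network activations can be computed with a short QR/SVD routine; the sketch below is a simplified, unregularized version of the kind of CCA comparison discussed above.

```python
import numpy as np

def mean_canonical_correlation(acts_a, acts_b, n_components=10):
    """Simplified sketch of CCA-based similarity between two (time x units)
    activation matrices: the canonical correlations are the singular values of
    Q_a^T Q_b after orthonormalizing each mean-centered matrix."""
    A = acts_a - acts_a.mean(axis=0)
    B = acts_b - acts_b.mean(axis=0)
    qa, _ = np.linalg.qr(A)
    qb, _ = np.linalg.qr(B)
    corrs = np.linalg.svd(qa.T @ qb, compute_uv=False)
    return corrs[:n_components].mean()
```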

8.
Adv Neural Inf Process Syst ; 32: 15696-15705, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32782423

ABSTRACT

Recurrent neural networks (RNNs) are a widely used tool for modeling sequential data, yet they are often treated as inscrutable black boxes. Given a trained recurrent network, we would like to reverse engineer it: to obtain a quantitative, interpretable description of how it solves a particular task. Even for simple tasks, a detailed understanding of how recurrent networks work, or a prescription for how to develop such an understanding, remains elusive. In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task. Given a trained network, we find fixed points of the recurrent dynamics and linearize the nonlinear system around these fixed points. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. In particular, the topological structure of the fixed points and the corresponding linearized dynamics reveal an approximate line attractor within the RNN, which we can use to quantitatively understand how the RNN solves the sentiment analysis task. Finally, we find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs) trained on multiple datasets, suggesting that our findings are not unique to a particular architecture or dataset. Overall, these results demonstrate that surprisingly universal and human-interpretable computations can arise across a range of recurrent networks.
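The core of this analysis can be sketched in a few lines: numerically minimize the speed ||F(h, x) - h||^2 of the hidden-state update at a fixed input, then take the Jacobian of the update at the optimum to obtain the linearized dynamics. The sketch below assumes a generic `rnn_step(h, x)` function and is not the paper's exact optimization procedure.

```python
import torch

def find_fixed_point(rnn_step, x_const, h_init, lr=0.01, n_steps=2000):
    """Sketch of fixed-point finding for a trained RNN: minimize the speed
    ||rnn_step(h, x) - h||^2 at a constant input, then linearize around the result.
    `rnn_step(h, x)` is assumed to return the next hidden state (hypothetical signature)."""
    h = h_init.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([h], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        speed = ((rnn_step(h, x_const) - h) ** 2).sum()
        speed.backward()
        optimizer.step()
    h_star = h.detach()
    # Jacobian of the state update at the fixed point gives the linearized dynamics,
    # whose eigenvalues and eigenvectors describe local behavior (e.g. a line attractor).
    jacobian = torch.autograd.functional.jacobian(lambda state: rnn_step(state, x_const), h_star)
    return h_star, jacobian
```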

9.
PLoS Comput Biol ; 14(8): e1006291, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30138312

ABSTRACT

A central challenge in sensory neuroscience involves understanding how neural circuits shape computations across cascaded cell layers. Here we attempt to reconstruct the response properties of experimentally unobserved neurons in the interior of a multilayered neural circuit, using cascaded linear-nonlinear (LN-LN) models. We combine non-smooth regularization with proximal consensus algorithms to overcome difficulties in fitting such models that arise from the high dimensionality of their parameter space. We apply this framework to retinal ganglion cell processing, learning LN-LN models of retinal circuitry consisting of thousands of parameters, using 40 minutes of responses to white noise. Our models demonstrate a 53% improvement in predicting ganglion cell spikes over classical linear-nonlinear (LN) models. Internal nonlinear subunits of the model match properties of retinal bipolar cells in both receptive field structure and number. Subunits have consistently high thresholds, suppressing all but a small fraction of inputs, leading to sparse activity patterns in which only one subunit drives ganglion cell spiking at any time. From the model's parameters, we predict that the removal of visual redundancies through stimulus decorrelation across space, a central tenet of efficient coding theory, originates primarily from bipolar cell synapses. Furthermore, the composite nonlinear computation performed by retinal circuitry corresponds to a Boolean OR function applied to bipolar cell feature detectors. Our methods are statistically and computationally efficient, enabling us to rapidly learn hierarchical nonlinear models as well as efficiently compute widely used descriptive statistics such as the spike-triggered average (STA) and covariance (STC) for high-dimensional stimuli. This general computational framework may aid in extracting principles of nonlinear hierarchical sensory processing across diverse modalities from limited data.
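Since the abstract mentions computing the spike-triggered average (STA) and covariance (STC), here is a minimal sketch of those descriptive statistics for a stimulus already unrolled into a (time x stimulus-dimension) design matrix; it is the textbook computation, not the paper's optimized implementation.

```python
import numpy as np

def sta_stc(stimulus, spike_counts):
    """Minimal sketch: spike-triggered average and covariance for a
    (time x stimulus_dims) stimulus matrix and a per-bin spike-count vector."""
    weights = spike_counts / spike_counts.sum()
    sta = weights @ stimulus                          # spike-weighted mean stimulus
    centered = stimulus - sta
    stc = (weights[:, None] * centered).T @ centered  # spike-weighted covariance
    return sta, stc
```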


Subject(s)
Nerve Net/physiology , Retinal Ganglion Cells/physiology , Action Potentials/physiology , Algorithms , Ambystoma/physiology , Animals , Models, Neurological , Models, Theoretical , Nonlinear Dynamics , Photic Stimulation , Retina/physiology
10.
Neuron ; 95(4): 955-970.e4, 2017 Aug 16.
Article in English | MEDLINE | ID: mdl-28757304

ABSTRACT

How environmental and physiological signals interact to influence neural circuits underlying developmentally programmed social interactions such as male territorial aggression is poorly understood. We have tested the influence of sensory cues, social context, and sex hormones on progesterone receptor (PR)-expressing neurons in the ventromedial hypothalamus (VMH) that are critical for male territorial aggression. We find that these neurons can drive aggressive displays in solitary males independent of pheromonal input, gonadal hormones, opponents, or social context. By contrast, these neurons cannot elicit aggression in socially housed males that intrude in another male's territory unless their pheromone-sensing is disabled. This modulation of aggression cannot be accounted for by linear integration of environmental and physiological signals. Together, our studies suggest that fundamentally non-linear computations enable social context to exert a dominant influence on developmentally hard-wired hypothalamus-mediated male territorial aggression.


Subject(s)
Aggression/physiology , Hypothalamus/cytology , Hypothalamus/physiology , Neurons/physiology , Social Behavior , Action Potentials/drug effects , Action Potentials/genetics , Adenoviridae/genetics , Animals , Antipsychotic Agents/pharmacology , Clozapine/analogs & derivatives , Clozapine/pharmacology , Cyclic Nucleotide-Gated Cation Channels/genetics , Cyclic Nucleotide-Gated Cation Channels/metabolism , Female , In Vitro Techniques , Luminescent Proteins/genetics , Luminescent Proteins/metabolism , Male , Mice, Inbred C57BL , Mice, Transgenic , Neurons/drug effects , Patch-Clamp Techniques , Receptors, Progesterone/genetics , Receptors, Progesterone/metabolism , Sex Factors , TRPC Cation Channels/genetics , TRPC Cation Channels/metabolism
11.
Neuron ; 94(2): 375-387.e7, 2017 Apr 19.
Article in English | MEDLINE | ID: mdl-28392071

ABSTRACT

Medial entorhinal grid cells display strikingly symmetric spatial firing patterns. The clarity of these patterns motivated the use of specific activity pattern shapes to classify entorhinal cell types. While this approach successfully revealed cells that encode boundaries, head direction, and running speed, it left a majority of cells unclassified, and its pre-defined nature may have missed unconventional, yet important coding properties. Here, we apply an unbiased statistical approach to search for cells that encode navigationally relevant variables. This approach successfully classifies the majority of entorhinal cells and reveals unsuspected entorhinal coding principles. First, we find a high degree of mixed selectivity and heterogeneity in superficial entorhinal neurons. Second, we discover a dynamic and remarkably adaptive code for space that enables entorhinal cells to rapidly encode navigational information accurately at high running speeds. Combined, these observations advance our current understanding of the mechanistic origins and functional implications of the entorhinal code for navigation. VIDEO ABSTRACT.
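One way to implement this kind of unbiased, model-based classification is to fit Poisson encoding models on different combinations of binned navigational variables and keep the combination with the best cross-validated fit. The sketch below uses scikit-learn's PoissonRegressor as a stand-in; it is an assumption about the general approach, not the paper's exact estimator, and the design-matrix format is hypothetical.

```python
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import cross_val_score

def best_variable_combination(design_matrices, spike_counts):
    """Sketch: `design_matrices` maps a variable-set name (e.g. 'position',
    'position+head_direction') to a (time x features) design matrix of binned
    navigational variables (hypothetical input format). Returns the combination
    with the highest cross-validated score, plus all scores."""
    scores = {}
    for name, X in design_matrices.items():
        model = PoissonRegressor(alpha=1.0, max_iter=300)
        scores[name] = cross_val_score(model, X, spike_counts, cv=5).mean()
    best = max(scores, key=scores.get)
    return best, scores
```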


Subject(s)
Action Potentials/physiology , Entorhinal Cortex/physiology , Neurons/physiology , Space Perception/physiology , Theta Rhythm/physiology , Animals , Female , Head , Male , Mice, Inbred C57BL , Models, Neurological , Motor Activity/physiology
12.
Adv Neural Inf Process Syst ; 29: 1369-1377, 2016.
Article in English | MEDLINE | ID: mdl-28729779

ABSTRACT

A central challenge in sensory neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs: they are less susceptible to overfitting than their LN counterparts when trained on small amounts of data, and generalize better when tested on stimuli drawn from a different distribution (e.g. between natural scenes and white noise). An examination of the learned CNNs reveals several properties. First, a richer set of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying inputs originate from feedforward inhibition, similar to known retinal mechanisms. Third, the injection of latent noise sources in intermediate layers enables our model to capture the sub-Poisson spiking variability observed in retinal ganglion cells. Fourth, augmenting our CNNs with recurrent lateral connections enables them to capture contrast adaptation as an emergent property of accurately describing retinal responses to natural scenes. These methods can be readily generalized to other sensory modalities and stimulus ensembles. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also can yield information about the circuit's internal structure and function.
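One of the architectural ideas mentioned above, injecting latent noise into intermediate layers, can be sketched as a simple module; the noise scale and its placement are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GaussianNoise(nn.Module):
    """Sketch of a latent-noise layer: additive Gaussian noise injected into an
    intermediate CNN layer, active during training only (sigma is illustrative)."""
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            return x + self.sigma * torch.randn_like(x)
        return x

# Example placement between convolutional stages of a retina CNN
noisy_block = nn.Sequential(nn.Conv2d(8, 8, kernel_size=11), GaussianNoise(0.1), nn.Softplus())
```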

13.
Article in English | MEDLINE | ID: mdl-22514531

ABSTRACT

Experimental studies of neuronal cultures have revealed a wide variety of spiking network activity ranging from sparse, asynchronous firing to distinct, network-wide synchronous bursting. However, the functional mechanisms driving these observed firing patterns are not well understood. In this work, we develop an in silico network of cortical neurons based on known features of similar in vitro networks. The activity from these simulations is found to closely mimic experimental data. Furthermore, the strength or degree of network bursting is found to depend on a few parameters: the density of the culture, the type of synaptic connections, and the ratio of excitatory to inhibitory connections. Network bursting gradually becomes more prominent as either the density, the fraction of long range connections, or the fraction of excitatory neurons is increased. Interestingly, biologically prevalent values of parameters result in networks that are at the transition between strong bursting and sparse firing. Using principal components analysis, we show that a large fraction of the variance in firing rates is captured by the first component for bursting networks. These results have implications for understanding how information is encoded at the population level as well as for why certain network parameters are ubiquitous in cortical tissue.
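The PCA measure described above reduces to a short SVD computation; the sketch below assumes a (time x neurons) matrix of binned firing rates.

```python
import numpy as np

def first_pc_variance_fraction(rates):
    """Sketch: fraction of variance in a (time x neurons) firing-rate matrix
    captured by the first principal component. Strongly bursting networks are
    expected to give values near 1, sparsely firing networks much lower."""
    centered = rates - rates.mean(axis=0)
    singular_values = np.linalg.svd(centered, compute_uv=False)
    explained = singular_values ** 2
    return explained[0] / explained.sum()
```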
