1.
Nat Neurosci; 25(6): 783-794, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35668174

ABSTRACT

Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input-output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments, and for the involvement of different neurons in multi-tasking.


Subject(s)
Models, Neurological , Neurons , Neurons/physiology
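
The gain-based mechanism summarised above can be illustrated with a toy low-rank network. The sketch below is not the authors' code: the rank-one connectivity, the two-population split and all parameter values are illustrative assumptions, chosen only to show how per-population gain changes reshape the collective dynamics.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the authors' code): a rank-one
# recurrent network, dx/dt = -x + m (n.r)/N with r = g * tanh(x), split into two
# subpopulations A and B. Only population A carries a positive overlap between
# the connectivity vectors m and n, so the collective dynamics depend on which
# population's gain is turned up: high gain on A sustains activity along the
# rank-one mode, high gain on B lets it decay.
rng = np.random.default_rng(0)
N = 2000
pop = np.repeat([0, 1], N // 2)            # population label (0 = A, 1 = B)
m = rng.normal(size=N)
n = rng.normal(size=N)
n[pop == 0] += 2.0 * m[pop == 0]           # m-n overlap only within population A

def latent_fixed_point(gains, T=3000, dt=0.05):
    """Integrate the network and return the latent variable kappa = n.tanh(x)/N."""
    g = np.array(gains)[pop]               # per-neuron gain set by its population
    x = 0.5 * m                            # start with activity along the mode
    for _ in range(T):
        r = g * np.tanh(x)
        x += dt * (-x + m * (n @ r) / N)
    return n @ np.tanh(x) / N

print(latent_fixed_point([1.2, 0.1]))      # gain on A: a nonzero state persists
print(latent_fixed_point([0.1, 1.2]))      # gain on B: activity decays towards 0
```
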
2.
Neural Comput; 33(6): 1572-1615, 2021 May 13.
Article in English | MEDLINE | ID: mdl-34496384

ABSTRACT

An emerging paradigm proposes that neural computations can be understood at the level of dynamic systems that govern low-dimensional trajectories of collective neural activity. How the connectivity structure of a network determines the emergent dynamical system, however, remains to be clarified. Here we consider a novel class of models, Gaussian-mixture low-rank recurrent networks, in which the rank of the connectivity matrix and the number of statistically defined populations are independent hyperparameters. We show that the resulting collective dynamics form a dynamical system, where the rank sets the dimensionality and the population structure shapes the dynamics. In particular, the collective dynamics can be described in terms of a simplified effective circuit of interacting latent variables. While having a single global population strongly restricts the possible dynamics, we demonstrate that if the number of populations is large enough, a rank R network can approximate any R-dimensional dynamical system.
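
As a concrete illustration of this model class, the sketch below builds a rank-R connectivity matrix whose per-neuron loading vectors are drawn from a Gaussian mixture with one component per population; the sizes, the number of populations and the covariances are arbitrary choices for the example, not parameters taken from the paper.

```python
import numpy as np

# Illustrative construction of a Gaussian-mixture low-rank connectivity matrix:
# rank R and the number of populations P are independent hyperparameters, and
# each population has its own covariance over the 2R loading entries
# (m^1..m^R, n^1..n^R). All values below are arbitrary.
rng = np.random.default_rng(1)
N, R, P = 900, 2, 3                         # neurons, rank, populations
pop = rng.integers(P, size=N)               # assign each neuron to a population

# One (2R x 2R) covariance per population, here sampled at random.
covs = [np.cov(rng.normal(size=(2 * R, 4 * R))) for _ in range(P)]

loadings = np.empty((N, 2 * R))
for p in range(P):
    idx = np.where(pop == p)[0]
    loadings[idx] = rng.multivariate_normal(np.zeros(2 * R), covs[p], size=idx.size)

m, n = loadings[:, :R], loadings[:, R:]     # left and right connectivity vectors
J = m @ n.T / N                             # rank-R connectivity matrix
print(np.linalg.matrix_rank(J))             # -> R
```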

3.
Neural Comput; 31(12): 2324-2347, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31614108

ABSTRACT

The way grid cells represent space in the rodent brain has been a striking discovery, with theoretical implications still unclear. Unlike hippocampal place cells, which are known to encode multiple, environment-dependent spatial maps, grid cells have been widely believed to encode space through a single low-dimensional manifold, in which coactivity relations between different neurons are preserved when the environment is changed. Does it have to be so? Here, we compute, using two alternative mathematical models, the storage capacity of a population of grid-like units, embedded in a continuous attractor neural network, for multiple spatial maps. We show that distinct representations of multiple environments can coexist, as existing models for grid cells have the potential to express several sets of hexagonal grid patterns, challenging the view of a universal grid map. This suggests that a population of grid cells can encode multiple noncongruent metric relationships, a feature that could in principle allow a grid-like code to represent environments with a variety of different geometries and possibly conceptual and cognitive spaces, which may be expected to entail such context-dependent metric relationships.


Subject(s)
Entorhinal Cortex/physiology , Grid Cells/physiology , Nerve Net/physiology , Space Perception/physiology , Animals , Computer Simulation , Neural Networks, Computer
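
A toy version of the multiple-maps scenario can be written down directly; the hexagonal tuning curve, the Hebbian storage rule and all sizes below are illustrative assumptions, not the two mathematical models analysed in the paper.

```python
import numpy as np

# Toy sketch: grid-like units with hexagonal tuning, where each stored "map"
# re-draws every unit's grid phase and the common orientation, so coactivity
# relations differ between maps. A Hebbian connectivity summed over maps stores
# several noncongruent grid representations in a single recurrent network.
rng = np.random.default_rng(2)
N, M = 200, 3                                  # units, stored maps
xs = rng.uniform(0, 1, size=(400, 2))          # sample locations in a unit box

def hex_rate(x, phase, theta, scale=0.3):
    """Sum of three cosines 60 degrees apart: a standard hexagonal grid pattern."""
    angles = theta + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    ks = (4 * np.pi / (np.sqrt(3) * scale)) * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.clip(np.cos((x - phase) @ ks.T).sum(axis=-1), 0.0, None)

W = np.zeros((N, N))
for _ in range(M):                             # each map: new phases, new orientation
    phases = rng.uniform(0, 1, size=(N, 2))
    theta = rng.uniform(0, np.pi / 3)
    rates = np.stack([hex_rate(xs, phases[i], theta) for i in range(N)])  # (N, locations)
    W += rates @ rates.T / xs.shape[0]         # Hebbian storage of this map
np.fill_diagonal(W, 0.0)
print(W.shape)                                 # one connectivity matrix, several maps
```
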
4.
Nat Commun; 8(1): 651, 2017 Sep 21.
Article in English | MEDLINE | ID: mdl-28935857

ABSTRACT

Animals continuously gather sensory cues to move towards favourable environments. Efficient goal-directed navigation requires sensory perception and motor commands to be intertwined in a feedback loop, yet the neural substrate underlying this sensorimotor task in the vertebrate brain remains elusive. Here, we combine virtual-reality behavioural assays, volumetric calcium imaging, optogenetic stimulation and circuit modelling to reveal the neural mechanisms through which a zebrafish performs phototaxis, i.e. actively orients towards a light source. Key to this process is a self-oscillating hindbrain population (HBO) that acts as a pacemaker for ocular saccades and controls the orientation of successive swim-bouts. It further integrates visual stimuli in a state-dependent manner, i.e. its response to visual inputs varies with the motor context, a mechanism that manifests itself in the phase-locked entrainment of the HBO by periodic stimuli. A rate model is developed that reproduces our observations and demonstrates how this sensorimotor processing eventually biases the animal trajectory towards bright regions.

Active locomotion requires closed-loop sensorimotor coordination between perception and action. Here the authors show, using behavioural, imaging and modelling approaches, that gaze orientation during phototaxis behaviour in larval zebrafish is related to oscillatory dynamics of a neuronal population in the hindbrain.


Subject(s)
Phototaxis/radiation effects , Zebrafish/physiology , Animals , Behavior, Animal/radiation effects , Larva/physiology , Larva/radiation effects , Light , Locomotion/radiation effects , Models, Biological , Neurons/physiology , Neurons/radiation effects , Rhombencephalon/physiology , Rhombencephalon/radiation effects
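
The rate model mentioned in the abstract can be caricatured by a half-centre oscillator with slow adaptation. The sketch below is only in the spirit of that description: its equations and parameters are assumptions, not the published model.

```python
import numpy as np

# Hedged sketch: two mutually inhibiting populations with slow adaptation form a
# half-centre oscillator that alternates between left and right, while a
# lateralized visual input biases which side dominates.
def simulate_hbo(visual_bias=0.0, T=4000, dt=1.0, tau=20.0, tau_a=400.0,
                 w_inh=2.0, w_adapt=2.0, drive=1.0):
    r = np.array([0.6, 0.4])                       # left/right population rates
    a = np.zeros(2)                                # slow adaptation variables
    inp = np.array([+visual_bias, -visual_bias])   # lateralized light input
    relu = lambda v: np.clip(v, 0.0, None)         # threshold-linear transfer
    trace = np.empty((T, 2))
    for t in range(T):
        net = drive + inp - w_inh * r[::-1] - w_adapt * a
        r = r + dt / tau * (-r + relu(net))
        a = a + dt / tau_a * (-a + r)
        trace[t] = r
    return trace

trace = simulate_hbo(visual_bias=0.1)
lit_side_wins = trace[:, 0] > trace[:, 1]
print(lit_side_wins.mean())   # fraction of time the lit side dominates (typically > 0.5)
```
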
5.
Front Syst Neurosci; 10: 14, 2016.
Article in English | MEDLINE | ID: mdl-26941620

ABSTRACT

Awake animals unceasingly perceive sensory inputs with great variability of nature and intensity, and understanding how the nervous system manages this continuous flow of diverse information to get a coherent representation of the environment is arguably a central question in systems neuroscience. Rheotaxis, the ability shared by most aquatic species to orient toward a current and swim to hold position, is an innate and robust multi-sensory behavior that is known to involve the lateral line and visual systems. To facilitate the neuroethological study of rheotaxic behavior in larval zebrafish, we developed an assay for freely swimming larvae that allows for high experimental throughput, large statistics and a fine description of the behavior. We show that there exists a clear transition from exploration to counterflow swimming, and by changing the sensory modalities accessible to the fish (visual only, lateral line only or both) and comparing the swim patterns at different ages, we were able to detect and characterize two different mechanisms for position holding, one mediated by the lateral line and one mediated by the visual system. We also found that when both sensory modalities are accessible, the visual system overshadows the lateral line, suggesting that at the larval stage the sensory inputs are not merged to finely tune the behavior but that redundant information pathways may be used as functional fallbacks.

6.
J Comput Neurosci; 40(2): 157-175, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26852335

ABSTRACT

We study the memory performance of a class of modular attractor neural networks, where modules are potentially fully-connected networks connected to each other via diluted long-range connections. On this anatomical architecture we store memory patterns of activity using a Willshaw-type learning rule. P patterns are split into categories, such that patterns of the same category activate the same set of modules. We first compute the maximal storage capacity of these networks. We then investigate their error-correction properties through an exhaustive exploration of parameter space, and identify regions where the networks behave as an associative memory device. The crucial parameters that control the retrieval abilities of the network are (1) the ratio between the number of synaptic contacts of long- and short-range origins, (2) the number of categories in which a module is activated, and (3) the amount of local inhibition. We discuss the relationship between our model and networks of cortical patches that have been observed in different cortical areas.


Subject(s)
Cerebral Cortex/cytology , Memory/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Animals , Computer Simulation , Humans , Neural Networks, Computer , Nonlinear Dynamics
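
For readers unfamiliar with the storage scheme, here is a minimal sketch of a Willshaw-type (clipped Hebbian) rule and one retrieval step on a single fully connected module; the modular architecture and category structure of the paper are deliberately left out, and all sizes are illustrative.

```python
import numpy as np

# Minimal Willshaw-type storage and retrieval on one module (sizes illustrative).
rng = np.random.default_rng(3)
N, P, f = 1000, 40, 0.05                     # neurons, patterns, coding level
patterns = (rng.random((P, N)) < f).astype(int)

# Clipped Hebbian rule: a binary synapse is switched on if pre- and postsynaptic
# neurons are coactive in at least one stored pattern.
W = (patterns.T @ patterns > 0).astype(int)
np.fill_diagonal(W, 0)

def retrieve(cue, theta):
    """One synchronous update: threshold the synaptic input driven by the cue."""
    return (W @ cue >= theta).astype(int)

# Cue with roughly half of a stored pattern's active units and check recovery.
target = patterns[0]
cue = target * (rng.random(N) < 0.5)
out = retrieve(cue, theta=0.8 * cue.sum())
print((out == target).mean())                # fraction of correctly recovered bits
```
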
7.
PLoS Comput Biol; 10(8): e1003727, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25101662

ABSTRACT

In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f, in the large N and sparse coding limits (f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), and (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.


Subject(s)
Memory/physiology , Models, Neurological , Nerve Net/physiology , Synapses/physiology , Animals , Cerebral Cortex/physiology , Computational Biology , Hippocampus/physiology , Neurons/physiology , Rats
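
As a point of reference for the quantities discussed here, the sketch below evaluates the textbook asymptotic estimate for the classic Willshaw model (capacity is maximal when roughly half of the synapses are potentiated, giving P of order ln(2)/f^2). It is not the paper's finite-size calculation, and the coding-level scaling f = log2(N)/N is an assumption chosen so that the information per synapse approaches the 0.69 bits-per-synapse limit.

```python
import numpy as np

# Back-of-the-envelope Willshaw estimate (standard textbook result, not the
# paper's finite-size theory): with coding level f, a synapse is potentiated
# with probability 1 - (1 - f^2)^P, and capacity is maximal near potentiation
# probability 1/2, i.e. P ~ ln(2) / f^2.
def willshaw_estimate(N):
    f = np.log2(N) / N                                 # sparse-coding scaling
    P_max = np.log(2) / f**2                           # patterns at half-potentiation
    H2 = -f * np.log2(f) - (1 - f) * np.log2(1 - f)    # entropy per neuron per pattern
    bits_per_synapse = P_max * N * H2 / N**2
    return P_max, bits_per_synapse

for N in (1_000, 10_000, 100_000):
    P_max, bps = willshaw_estimate(N)
    print(f"N={N:>7d}  P_max ~ {P_max:12.0f}  bits/synapse ~ {bps:.2f}")
# bits/synapse creeps towards ln(2) ~ 0.69 only slowly with N, consistent with
# the strong finite-size effects emphasised in the abstract.
```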