Results 1 - 20 of 26
1.
Neuron ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39013467

ABSTRACT

Every day, hundreds of thousands of people undergo general anesthesia. One hypothesis is that anesthesia disrupts dynamic stability, the ability of the brain to balance excitability with the need to be stable and controllable. To test this hypothesis, we developed a method for quantifying changes in population-level dynamic stability in complex systems: delayed linear analysis for stability estimation (DeLASE). Propofol was used to transition animals between the awake state and anesthetized unconsciousness. DeLASE was applied to macaque cortical local field potentials (LFPs). We found that neural dynamics were more unstable during unconsciousness than in the awake state. Cortical trajectories mirrored predictions from destabilized linear systems. We mimicked the effect of propofol in simulated neural networks by increasing inhibitory tone, which destabilized the networks, as observed in the neural data. Our results suggest that anesthesia disrupts the dynamic stability that is required for consciousness.
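The core idea behind a DeLASE-style analysis can be illustrated with a minimal sketch: delay-embed a multivariate time series, fit a linear map between successive delay vectors by least squares, and read stability off the spectral radius of that map. The code below is an illustrative toy on simulated data, not the published implementation; all function names, sizes, and parameters are our own assumptions.

```python
import numpy as np

def delay_embed(x, n_delays):
    # stack n_delays consecutive frames of x (shape (T, d)) into rows
    T = x.shape[0]
    return np.hstack([x[i : T - n_delays + 1 + i] for i in range(n_delays)])

def estimate_stability(x, n_delays=5):
    """Least-squares linear fit on delay coordinates; return the spectral
    radius of the fitted map (< 1 suggests stable discrete-time dynamics)."""
    H = delay_embed(x, n_delays)
    A, *_ = np.linalg.lstsq(H[:-1], H[1:], rcond=None)  # H[t+1] ~ H[t] @ A
    return float(np.max(np.abs(np.linalg.eigvals(A))))

# simulated "neural data": a noisy, stably rotating 2D linear system
rng = np.random.default_rng(0)
th = 0.2
A_true = 0.95 * np.array([[np.cos(th), -np.sin(th)],
                          [np.sin(th),  np.cos(th)]])
x = np.zeros((500, 2))
x[0] = [1.0, 0.0]
for t in range(499):
    x[t + 1] = A_true @ x[t] + 0.01 * rng.standard_normal(2)

print(estimate_stability(x))  # close to 0.95: stable, as constructed
```

On this toy system the estimate recovers the true eigenvalue magnitude; on neural data, a radius approaching or exceeding 1 would indicate the kind of destabilization the study reports.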

2.
Annu Rev Neurosci ; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38684081

ABSTRACT

The activity patterns of grid cells form distinctively regular triangular lattices over the explored spatial environment and are largely invariant to visual stimuli, animal movement, and environment geometry. These neurons present numerous fascinating challenges to the curious (neuro)scientist: What circuit mechanisms create spatially periodic activity patterns from the monotonic input-output responses of single neurons? How and why does the brain encode a local, nonperiodic variable (the allocentric position of the animal) with a periodic, nonlocal code? And are grid cells truly specialized for spatial computations, or do they play a broader role in general cognition? We review efforts to uncover the mechanisms and functional properties of grid cells, highlight recent progress in the experimental validation of mechanistic grid cell models, and discuss the coding properties and functional advantages of the grid code as suggested by continuous attractor network models of grid cells.
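The periodic, nonlocal character of the code can be seen in a toy 1D example (our own illustrative construction, with arbitrary module periods): each module reports only the phase of position within its period, so positions separated by the least common multiple of the periods receive identical codes, while some nearby positions sit far apart in code space.

```python
import numpy as np

periods = np.array([3.0, 4.0, 5.0])   # illustrative grid module periods

def grid_code(x):
    # each module encodes only the phase of position x within its period
    ph = 2 * np.pi * x / periods
    return np.concatenate([np.cos(ph), np.sin(ph)])

# nearby positions can be distant in code space...
d_near = np.linalg.norm(grid_code(10.0) - grid_code(11.5))
# ...while positions lcm(3, 4, 5) = 60 apart share the same code exactly
d_far = np.linalg.norm(grid_code(10.0) - grid_code(70.0))
print(d_near, d_far)  # d_near is large, d_far is (numerically) zero
```

This is the sense in which the code is periodic and nonlocal: position is recoverable only by combining phases across modules, not from any single module.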

3.
Neural Comput ; 35(11): 1850-1869, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37725708

ABSTRACT

Recurrent neural networks (RNNs) are often used to model circuits in the brain and can solve a variety of difficult computational problems requiring memory, error correction, or selection (Hopfield, 1982; Maass et al., 2002; Maass, 2011). However, fully connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (about 0.1%). Motivated by the neocortex, where neural connectivity is constrained by physical distance along cortical sheets and other synaptic wiring costs, we introduce locality masked RNNs (LM-RNNs) that use task-agnostic predetermined graphs with sparsity as low as 4%. We study LM-RNNs in a multitask learning setting relevant to cognitive systems neuroscience with a commonly used set of tasks, 20-Cog-tasks (Yang et al., 2019). We show through reductio ad absurdum that 20-Cog-tasks can be solved by a small pool of separated autapses that we can mechanistically analyze and understand. Thus, these tasks fall short of the goal of inducing complex recurrent dynamics and modular structure in RNNs. We next contribute a new cognitive multitask battery, Mod-Cog, consisting of up to 132 tasks that expands by about seven-fold the number of tasks and task complexity of 20-Cog-tasks. Importantly, while autapses can solve the simple 20-Cog-tasks, the expanded task set requires richer neural architectures and continuous attractor dynamics. On these tasks, we show that LM-RNNs with an optimal sparsity result in faster training and better data efficiency than fully connected networks.
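The locality mask itself is simple to construct. The sketch below is our own minimal version with hypothetical sizes (the paper's graphs, tasks, and training setup differ): a task-agnostic, predetermined mask restricts recurrent connections to nearby units on a 1D ring, giving a few-percent connection density before any training.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200        # hidden units, arranged on a 1D ring ("cortical sheet")
radius = 4     # connection radius: only nearby units may connect

# task-agnostic, predetermined locality mask
idx = np.arange(N)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, N - dist)             # wrap-around ring distance
mask = (dist <= radius).astype(float)

# masked recurrent weights; the mask is fixed and never trained
W = mask * rng.standard_normal((N, N)) / np.sqrt(2 * radius + 1)

def step(h, x_in, W_in):
    # one masked recurrent update
    return np.tanh(W @ h + W_in @ x_in)

W_in = rng.standard_normal((N, 3))
h = step(np.zeros(N), np.ones(3), W_in)
print(f"connection density: {mask.mean():.3f}")  # 9/200 = 0.045 here
```

With these toy sizes the density is about 4.5%, the same regime as the few-percent sparsity the abstract describes; only the nonzero entries of `W` would be updated during training.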

4.
Nat Rev Neurosci ; 23(12): 744-766, 2022 12.
Article in English | MEDLINE | ID: mdl-36329249

ABSTRACT

In this Review, we describe the singular success of attractor neural network models in describing how the brain maintains persistent activity states for working memory, corrects errors and integrates noisy cues. We consider the mechanisms by which simple and forgetful units can organize to collectively generate dynamics on the long timescales required for such computations. We discuss the myriad potential uses of attractor dynamics for computation in the brain, and showcase notable examples of brain systems in which inherently low-dimensional continuous-attractor dynamics have been concretely and rigorously identified. Thus, it is now possible to conclusively state that the brain constructs and uses such systems for computation. Finally, we highlight recent theoretical advances in understanding how the fundamental trade-offs between robustness and capacity and between structure and flexibility can be overcome by reusing and recombining the same set of modular attractors for multiple functions, so they together produce representations that are structurally constrained and robust but exhibit high capacity and are flexible.


Subject(s)
Brain , Neurons , Humans , Neural Networks, Computer , Memory, Short-Term , Models, Neurological
5.
Nature ; 608(7923): 586-592, 2022 08.
Article in English | MEDLINE | ID: mdl-35859170

ABSTRACT

The ability to associate temporally segregated information and assign positive or negative valence to environmental cues is paramount for survival. Studies have shown that different projections from the basolateral amygdala (BLA) are potentiated following reward or punishment learning1-7. However, we do not yet understand how valence-specific information is routed to the BLA neurons with the appropriate downstream projections, nor do we understand how to reconcile the sub-second timescales of synaptic plasticity8-11 with the longer timescales separating the predictive cues from their outcomes. Here we demonstrate that neurotensin (NT)-expressing neurons in the paraventricular nucleus of the thalamus (PVT) projecting to the BLA (PVT-BLA:NT) mediate valence assignment by exerting NT concentration-dependent modulation in BLA during associative learning. We found that optogenetic activation of the PVT-BLA:NT projection promotes reward learning, whereas PVT-BLA projection-specific knockout of the NT gene (Nts) augments punishment learning. Using genetically encoded calcium and NT sensors, we further revealed that both calcium dynamics within the PVT-BLA:NT projection and NT concentrations in the BLA are enhanced after reward learning and reduced after punishment learning. Finally, we showed that CRISPR-mediated knockout of the Nts gene in the PVT-BLA pathway blunts BLA neural dynamics and attenuates the preference for active behavioural strategies to reward and punishment predictive cues. In sum, we have identified NT as a neuropeptide that signals valence in the BLA, and showed that NT is a critical neuromodulator that orchestrates positive and negative valence assignment in amygdala neurons by extending valence-specific plasticity to behaviourally relevant timescales.


Subject(s)
Basolateral Nuclear Complex , Learning , Neural Pathways , Neurotensin , Punishment , Reward , Basolateral Nuclear Complex/cytology , Basolateral Nuclear Complex/physiology , Calcium/metabolism , Cues , Neuronal Plasticity , Neurotensin/metabolism , Optogenetics , Thalamic Nuclei/cytology , Thalamic Nuclei/physiology
6.
Nature ; 603(7902): 667-671, 2022 03.
Article in English | MEDLINE | ID: mdl-35296862

ABSTRACT

Most social species self-organize into dominance hierarchies1,2, which decreases aggression and conserves energy3,4, but it is not clear how individuals know their social rank. We have only begun to learn how the brain represents social rank5-9 and guides behaviour on the basis of this representation. The medial prefrontal cortex (mPFC) is involved in social dominance in rodents7,8 and humans10,11. Yet, precisely how the mPFC encodes relative social rank and which circuits mediate this computation is not known. We developed a social competition assay in which mice compete for rewards, as well as a computer vision tool (AlphaTracker) to track multiple, unmarked animals. A hidden Markov model combined with generalized linear models was able to decode social competition behaviour from mPFC ensemble activity. Population dynamics in the mPFC predicted social rank and competitive success. Finally, we demonstrate that mPFC cells that project to the lateral hypothalamus promote dominance behaviour during reward competition. Thus, we reveal a cortico-hypothalamic circuit by which the mPFC exerts top-down modulation of social dominance.


Subject(s)
Hypothalamus , Prefrontal Cortex , Animals , Hypothalamic Area, Lateral , Mice , Reward , Social Behavior
7.
Elife ; 10: 2021 05 24.
Article in English | MEDLINE | ID: mdl-34028354

ABSTRACT

What factors constrain the arrangement of the multiple fields of a place cell? By modeling place cells as perceptrons that act on multiscale periodic grid-cell inputs, we analytically enumerate a place cell's repertoire - how many field arrangements it can realize without external cues while its grid inputs are unique - and derive its capacity - the spatial range over which it can achieve any field arrangement. We show that the repertoire is very large and relatively noise-robust. However, the repertoire is a vanishing fraction of all arrangements, while capacity scales only as the sum of the grid periods so field arrangements are constrained over larger distances. Thus, grid-driven place field arrangements define a large response scaffold that is strongly constrained by its structured inputs. Finally, we show that altering grid-place weights to generate an arbitrary new place field strongly affects existing arrangements, which could explain the volatility of the place code.
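The perceptron-on-grid-inputs setup can be sketched in a few lines. This is an illustrative toy, not the paper's analytical enumeration: the track length, module periods, bump-like tuning, and learning rate are all our own choices. A perceptron receives periodic grid-like inputs and learns weights that realize a chosen arrangement of place fields.

```python
import numpy as np

rng = np.random.default_rng(2)
xs = np.arange(120)            # discretized positions on a 1D track
periods = [31, 43, 59]         # grid module periods; their sum exceeds the track

# grid inputs: phase-shifted, bump-like periodic responses in each module
G = np.array([np.maximum(0.0, np.cos(2 * np.pi * (xs - p) / lam))
              for lam in periods for p in range(0, lam, max(1, lam // 6))])

target = np.zeros(len(xs))     # one desired field arrangement
target[[40, 95]] = 1.0         # two place fields at arbitrary positions

# perceptron learning of grid-to-place weights
w, b = np.zeros(G.shape[0]), 0.0
for _ in range(300):
    for t in rng.permutation(len(xs)):
        y = float(w @ G[:, t] + b > 0)
        w += 0.1 * (target[t] - y) * G[:, t]
        b += 0.1 * (target[t] - y)

pred = (G.T @ w + b > 0).astype(float)
print(np.mean(pred == target))  # near 1.0 when this arrangement is realizable
```

Sweeping over random target arrangements and longer tracks in a setup like this is one way to probe the repertoire and capacity questions the abstract poses.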


Subject(s)
Cues , Hippocampus/physiology , Models, Neurological , Place Cells/physiology , Space Perception , Animals , Computer Simulation , Hippocampus/cytology , Humans , Neural Networks, Computer , Neuronal Plasticity , Numerical Analysis, Computer-Assisted
8.
Proc Natl Acad Sci U S A ; 117(41): 25505-25516, 2020 10 13.
Article in English | MEDLINE | ID: mdl-33008882

ABSTRACT

An elemental computation in the brain is to identify the best in a set of options and report its value. It is required for inference, decision-making, optimization, action selection, consensus, and foraging. Neural computing is considered powerful because of its parallelism; however, it is unclear whether neurons can perform this max-finding operation in a way that improves upon the prohibitively slow optimal serial max-finding computation (which takes [Formula: see text] time for N noisy candidate options) by a factor of N, the benchmark for parallel computation. Biologically plausible architectures for this task are winner-take-all (WTA) networks, where individual neurons inhibit each other so only those with the largest input remain active. We show that conventional WTA networks fail the parallelism benchmark and, worse, in the presence of noise, altogether fail to produce a winner when N is large. We introduce the nWTA network, in which neurons are equipped with a second nonlinearity that prevents weakly active neurons from contributing inhibition. Without parameter fine-tuning or rescaling as N varies, the nWTA network achieves the parallelism benchmark. The network reproduces experimentally observed phenomena like Hick's law without needing an additional readout stage or adaptive N-dependent thresholds. Our work bridges scales by linking cellular nonlinearities to circuit-level decision-making, establishes that distributed computation saturating the parallelism benchmark is possible in networks of noisy, finite-memory neurons, and shows that Hick's law may be a symptom of near-optimal parallel decision-making with noisy input.
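The key nWTA ingredient can be sketched in a toy discrete-time simulation (parameters and dynamics of our own choosing, not the paper's model): units race via self-excitation and shared inhibition, but only units above an activation threshold contribute to that inhibition, so weakly active units drop out rather than dragging the competition.

```python
import numpy as np

def nwta(b, beta=1.0, theta=0.3, dt=0.1, steps=3000):
    """Toy nWTA race: self-excitation plus global inhibition, where only
    units above the threshold `theta` contribute inhibition."""
    x = np.zeros(len(b))
    for _ in range(steps):
        inhib = np.sum(np.maximum(x - theta, 0.0))  # weak units drop out
        x += dt * (-x + np.maximum(b + x - beta * inhib, 0.0))
    return x

b = np.array([0.50, 0.52, 0.49, 0.48])  # noisy candidate values
x = nwta(b)
print(np.argmax(x))  # unit 1, the largest input, remains the sole winner
```

In this toy run the below-mean units fall below threshold one by one, leaving a single active winner; in a conventional WTA, every unit would contribute inhibition throughout, which is the regime the paper shows fails at large N.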


Subject(s)
Decision Making/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Nerve Net/physiology , Nonlinear Dynamics
9.
Nat Neurosci ; 23(10): 1286-1296, 2020 10.
Article in English | MEDLINE | ID: mdl-32895567

ABSTRACT

Understanding the mechanisms of neural computation and learning will require knowledge of the underlying circuitry. Because it is difficult to directly measure the wiring diagrams of neural circuits, there has long been an interest in estimating them algorithmically from multicell activity recordings. We show that even sophisticated methods, applied to unlimited data from every cell in the circuit, are biased toward inferring connections between unconnected but highly correlated neurons. This failure to 'explain away' connections occurs when there is a mismatch between the true network dynamics and the model used for inference, which is inevitable when modeling the real world. Thus, causal inference suffers when variables are highly correlated, and activity-based estimates of connectivity should be treated with special caution in strongly connected networks. Finally, performing inference on the activity of circuits pushed far out of equilibrium by a simple low-dimensional suppressive drive might ameliorate inference bias.
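The underlying confound is easy to reproduce in a toy example (ours, far simpler than the estimators studied in the paper): two cells that share a common drive but have no connection to each other are strongly correlated, so any naive activity-based connectivity estimate reports a link.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 20000
common = rng.standard_normal(T)              # shared drive to both cells
a = common + 0.5 * rng.standard_normal(T)    # cell A: driven, not connected to B
b = common + 0.5 * rng.standard_normal(T)    # cell B: driven, not connected to A

# a naive activity-based connectivity estimate: pairwise correlation
r = np.corrcoef(a, b)[0, 1]
print(r)  # high (about 0.8) even though A and B share no connection
```

The paper's point is stronger than this toy: even model-based estimators that try to explain away such correlations remain biased whenever the inference model mismatches the true dynamics.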


Subject(s)
Action Potentials , Brain/anatomy & histology , Brain/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Animals , Data Analysis , Humans , Neural Pathways/anatomy & histology , Neural Pathways/physiology
10.
Nat Neurosci ; 22(4): 609-617, 2019 04.
Article in English | MEDLINE | ID: mdl-30911183

ABSTRACT

Continuous-attractor network models of grid formation posit that recurrent connectivity between grid cells controls their patterns of co-activation. Grid cells from a common module exhibit stable offsets in their periodic spatial tuning curves across environments, and this may reflect recurrent connectivity or correlated sensory inputs. Here we explore whether cell-cell relationships predicted by attractor models persist during sleep states in which spatially informative sensory inputs are absent. We recorded ensembles of grid cells in superficial layers of medial entorhinal cortex during active exploratory behaviors and overnight sleep. Per grid cell pair and collectively, and across waking, rapid eye movement sleep and non-rapid eye movement sleep, we found preserved patterns of spike-time correlations that reflected the spatial tuning offsets between these grid cells during active exploration. The preservation of cell-cell relationships across waking and sleep states was not explained by theta oscillations or activity in hippocampal subregion CA1. These results indicate that recurrent connectivity within the grid cell network drives grid cell activity across behavioral states.


Subject(s)
Entorhinal Cortex/physiology , Grid Cells/physiology , Sleep , Spatial Processing/physiology , Action Potentials , Animals , CA1 Region, Hippocampal/physiology , Exploratory Behavior , Male , Models, Neurological , Motor Activity , Rats, Long-Evans
11.
Cell ; 175(3): 736-750.e30, 2018 10 18.
Article in English | MEDLINE | ID: mdl-30270041

ABSTRACT

How the topography of neural circuits relates to their function remains unclear. Although topographic maps exist for sensory and motor variables, they are rarely observed for cognitive variables. Using calcium imaging during virtual navigation, we investigated the relationship between the anatomical organization and functional properties of grid cells, which represent a cognitive code for location during navigation. We found a substantial degree of grid cell micro-organization in mouse medial entorhinal cortex: grid cells and modules all clustered anatomically. Within a module, the layout of grid cells was a noisy two-dimensional lattice in which the anatomical distribution of grid cells largely matched their spatial tuning phases. This micro-arrangement of phases demonstrates the existence of a topographical map encoding a cognitive variable in rodents. It contributes to a foundation for evaluating circuit models of the grid cell network and is consistent with continuous attractor models as the mechanism of grid formation.


Subject(s)
Entorhinal Cortex/cytology , Grid Cells/cytology , Animals , Entorhinal Cortex/physiology , Grid Cells/physiology , Male , Mice , Mice, Inbred C57BL , Nerve Net
12.
Elife ; 7: 2018 07 09.
Article in English | MEDLINE | ID: mdl-29985132

ABSTRACT

A goal of systems neuroscience is to discover the circuit mechanisms underlying brain function. Despite experimental advances that enable circuit-wide neural recording, the problem remains open in part because solving the 'inverse problem' of inferring circuitry and mechanism by merely observing activity is hard. In the grid cell system, we show through modeling that a technique based on global circuit perturbation and examination of a novel theoretical object called the distribution of relative phase shifts (DRPS) could reveal the mechanisms of a cortical circuit in unprecedented detail using extremely sparse neural recordings. We establish feasibility, showing that the method can discriminate between recurrent and feedforward mechanisms, and among various recurrent mechanisms, using recordings from a handful of cells. The proposed strategy demonstrates that sparse recording coupled with simple perturbation can reveal more about circuit mechanism than full knowledge of network activity or the synaptic connectivity matrix.


Subject(s)
Grid Cells/physiology , Nerve Net/physiology , Computer Simulation , Decision Trees , Models, Neurological , Neural Inhibition/physiology , Nonlinear Dynamics , Uncertainty
13.
Elife ; 6: 2017 09 07.
Article in English | MEDLINE | ID: mdl-28879851

ABSTRACT

It is widely believed that persistent neural activity underlies short-term memory. Yet, as we show, the degradation of information stored directly in such networks behaves differently from human short-term memory performance. We build a more general framework where memory is viewed as a problem of passing information through noisy channels whose degradation characteristics resemble those of persistent activity networks. If the brain first encoded the information appropriately before passing the information into such networks, the information can be stored substantially more faithfully. Within this framework, we derive a fundamental lower-bound on recall precision, which declines with storage duration and number of stored items. We show that human performance, though inconsistent with models involving direct (uncoded) storage in persistent activity networks, can be well-fit by the theoretical bound. This finding is consistent with the view that if the brain stores information in patterns of persistent activity, it might use codes that minimize the effects of noise, motivating the search for such codes in the brain.


Subject(s)
Brain/physiology , Memory, Short-Term , Neurons/physiology , Adult , Female , Humans , Male , Models, Neurological , Young Adult
14.
Neuron ; 89(5): 1086-99, 2016 Mar 02.
Article in English | MEDLINE | ID: mdl-26898777

ABSTRACT

Grid cells, defined by their striking periodic spatial responses in open 2D arenas, appear to respond differently on 1D tracks: the multiple response fields are not periodically arranged, peak amplitudes vary across fields, and the mean spacing between fields is larger than in 2D environments. We ask whether such 1D responses are consistent with the system's 2D dynamics. Combining analytical and numerical methods, we show that the 1D responses of grid cells with stable 1D fields are consistent with a linear slice through a 2D triangular lattice. Further, the 1D responses of comodular cells are well described by parallel slices, and the offsets in the starting points of the 1D slices can predict the measured 2D relative spatial phase between the cells. From these results, we conclude that the 2D dynamics of these cells is preserved in 1D, suggesting a common computation during both types of navigation behavior.
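The slice picture is easy to visualize numerically. The sketch below uses an idealized cosine-sum grid pattern and arbitrary slice parameters of our own choosing (not the paper's fitting procedure): sampling a 2D triangular-lattice rate map along an oblique 1D track yields fields that are aperiodic, with variable spacing, exactly the 1D signature described above.

```python
import numpy as np

def lattice_rate(p, period=1.0):
    """Idealized grid-cell rate at 2D point p: sum of three plane waves at
    60-degree offsets, which forms a triangular lattice."""
    k = 4 * np.pi / (np.sqrt(3) * period)
    angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    ks = k * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return float(np.sum(np.cos(ks @ p)))

# sample the 2D pattern along an oblique 1D track (a linear slice)
ts = np.linspace(0.0, 10.0, 2000)
direction = np.array([np.cos(0.31), np.sin(0.31)])  # arbitrary slice angle
offset = np.array([0.13, 0.47])                     # arbitrary starting point
rates = np.array([lattice_rate(offset + t * direction) for t in ts])

# 1D fields (local maxima) are aperiodic, with variable spacing
is_peak = (rates[1:-1] > rates[:-2]) & (rates[1:-1] > rates[2:])
peaks = ts[1:-1][is_peak]
print(np.diff(peaks))  # irregular intervals, unlike the regular 2D lattice
```

Parallel slices (same `direction`, shifted `offset`) model comodular cells, and the offset between slice starting points plays the role of the cells' 2D relative spatial phase.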


Subject(s)
Membrane Potentials/physiology , Models, Neurological , Neurons/physiology , Space Perception/physiology , Animals , Fourier Analysis , Humans , Mathematics , Population Dynamics
15.
Curr Biol ; 25(13): 1771-6, 2015 Jun 29.
Article in English | MEDLINE | ID: mdl-26073138

ABSTRACT

Accurate wayfinding is essential to the survival of many animal species and requires the ability to maintain spatial orientation during locomotion. One of the ways that humans and other animals stay spatially oriented is through path integration, which operates by integrating self-motion cues over time, providing information about total displacement from a starting point. The neural substrate of path integration in mammals may exist in grid cells, which are found in dorsomedial entorhinal cortex and presubiculum and parasubiculum in rats. Grid cells have also been found in mice, bats, and monkeys, and signatures of grid cell activity have been observed in humans. We demonstrate that distance estimation by humans during path integration is sensitive to geometric deformations of a familiar environment and show that patterns of path integration error are predicted qualitatively by a model in which locations in the environment are represented in the brain as phases of arrays of grid cells with unique periods and decoded by the inverse mapping from phases to locations. The periods of these grid networks are assumed to expand and contract in response to expansions and contractions of a familiar environment. Biases in distance estimation occur when the periods of the encoding and decoding grids differ. Our findings explicate the way in which grid cells could function in human path integration.
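The encode-with-rescaled-grids, decode-with-familiar-grids model can be sketched directly. This is a 1D toy with arbitrary periods and a brute-force decoder of our own construction (the paper's model and experiments are richer): when the environment, and with it the grid periods, expands by a factor s but the decoder still assumes the familiar periods, decoded displacement is compressed by 1/s, producing exactly the kind of systematic distance-estimation bias described above.

```python
import numpy as np

periods = np.array([3.0, 4.0, 5.0])    # familiar-environment grid periods

def encode(x, lam):
    return np.mod(x, lam) / lam        # per-module phase, as a fraction of period

def decode(phases, lam, x_max=60.0, step=0.01):
    # brute-force inversion of the phases -> location mapping
    xs = np.arange(0.0, x_max, step)
    frac = np.mod(xs[:, None], lam) / lam
    err = np.abs(frac - phases)
    err = np.minimum(err, 1.0 - err)   # circular distance in phase units
    return xs[np.argmin(err.sum(axis=1))]

x_true = 10.0
scale = 1.1                            # environment expands; grids expand with it
phases = encode(x_true, scale * periods)
x_hat = decode(phases, periods)        # decoder still assumes familiar periods
print(x_hat)  # close to x_true / scale: distance is underestimated
```

Matching encoding and decoding periods recovers `x_true` exactly; the bias appears only when the two sets of periods differ, as the abstract describes.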


Subject(s)
Entorhinal Cortex/physiology , Models, Neurological , Orientation/physiology , Spatial Navigation/physiology , Spatial Processing/physiology , Entorhinal Cortex/cytology , Feedback, Sensory , Female , Humans , Locomotion/physiology , Male , Photic Stimulation
16.
Neuron ; 83(2): 481-495, 2014 Jul 16.
Article in English | MEDLINE | ID: mdl-25033187

ABSTRACT

Grid cell responses develop gradually after eye opening, but little is known about the rules that govern this process. We present a biologically plausible model for the formation of a grid cell network. An asymmetric spike time-dependent plasticity rule acts upon an initially unstructured network of spiking neurons that receive inputs encoding animal velocity and location. Neurons develop an organized recurrent architecture based on the similarity of their inputs, interacting through inhibitory interneurons. The mature network can convert velocity inputs into estimates of animal location, showing that spatially periodic responses and the capacity of path integration can arise through synaptic plasticity, acting on inputs that display neither. The model provides numerous predictions about the necessity of spatial exploration for grid cell development, network topography, the maturation of velocity tuning and neural correlations, the abrupt transition to stable patterned responses, and possible mechanisms to set grid period across grid modules.


Subject(s)
Action Potentials/physiology , Exploratory Behavior/physiology , Models, Neurological , Neuronal Plasticity/physiology , Neurons/physiology , Animals , Hippocampus/physiology , Interneurons/physiology , Spatial Behavior/physiology
17.
Nat Neurosci ; 16(8): 1077-84, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23852111

ABSTRACT

We examined simultaneously recorded spikes from multiple rat grid cells, to explain mechanisms underlying their activity. Among grid cells with similar spatial periods, the population activity was confined to lie close to a two-dimensional (2D) manifold: grid cells differed only along two dimensions of their responses and otherwise were nearly identical. Relationships between cell pairs were conserved despite extensive deformations of single-neuron responses. Results from novel environments suggest such structure is not inherited from hippocampal or external sensory inputs. Across conditions, cell-cell relationships are better conserved than responses of single cells. Finally, the system is continually subject to perturbations that, were the 2D manifold not attractive, would drive the system to inhabit a different region of state space than observed. These findings have strong implications for theories of grid-cell activity and substantiate the general hypothesis that the brain computes using low-dimensional continuous attractors.


Subject(s)
Entorhinal Cortex/cytology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Space Perception/physiology , Spatial Behavior/physiology , Action Potentials , Algorithms , Animals , Computer Simulation , Entorhinal Cortex/physiology , Exploratory Behavior/physiology , Neural Networks, Computer , Patch-Clamp Techniques , Pattern Recognition, Visual/physiology , Rats
18.
Proc Natl Acad Sci U S A ; 109(43): 17645-50, 2012 Oct 23.
Article in English | MEDLINE | ID: mdl-23047704

ABSTRACT

Neural noise limits the fidelity of representations in the brain. This limitation has been extensively analyzed for sensory coding. However, in short-term memory and integrator networks, where noise accumulates and can play an even more prominent role, much less is known about how neural noise interacts with neural and network parameters to determine the accuracy of the computation. Here we analytically derive how the stored memory in continuous attractor networks of probabilistically spiking neurons will degrade over time through diffusion. By combining statistical and dynamical approaches, we establish a fundamental limit on the network's ability to maintain a persistent state: The noise-induced drift of the memory state over time within the network is strictly lower-bounded by the accuracy of estimation of the network's instantaneous memory state by an ideal external observer. This result takes the form of an information-diffusion inequality. We derive some unexpected consequences: Despite the persistence time of short-term memory networks, it does not pay to accumulate spikes for longer than the cellular time-constant to read out their contents. For certain neural transfer functions, the conditions for optimal sensory coding coincide with those for optimal storage, implying that short-term memory may be co-localized with sensory representation.


Subject(s)
Neurons/physiology , Action Potentials , Poisson Distribution , Probability , Stochastic Processes
19.
Neuron ; 66(3): 331-4, 2010 May 13.
Article in English | MEDLINE | ID: mdl-20471346

ABSTRACT

In this issue of Neuron, Remme and colleagues examine the biophysics of synchronization between oscillating dendrites and soma. Their findings suggest that oscillators will quickly phase-lock when weakly coupled. These findings are at odds with assumptions of an influential model of grid cell response generation and have implications for grid cell response mechanisms.

20.
Neuron ; 65(4): 563-76, 2010 Feb 25.
Article in English | MEDLINE | ID: mdl-20188660

ABSTRACT

Sequential neural activity patterns are as ubiquitous as the outputs they drive, which include motor gestures and sequential cognitive processes. Neural sequences are long compared with the activation durations of participating neurons, and sequence coding is sparse. Numerous studies demonstrate that spike-time-dependent plasticity (STDP), the primary known mechanism for temporal order learning in neurons, cannot organize networks to generate long sequences, raising the question of how such networks are formed. We show that heterosynaptic competition within single neurons, when combined with STDP, organizes networks to generate long unary activity sequences even without sequential training inputs. The network produces a diversity of sequences whose lengths follow a power-law distribution with exponent -1, independent of cellular time constants. We show evidence for a similar distribution of sequence lengths in the recorded premotor song activity of songbirds. These results suggest that neural sequences may be shaped by synaptic constraints and network circuitry rather than by cellular time constants.


Subject(s)
Nerve Net/physiology , Neuronal Plasticity/physiology , Neurons/physiology , Synapses/physiology , Action Potentials/physiology , Animals , Electrophysiology , Finches , High Vocal Center/physiology , Learning/physiology , Membrane Potentials/physiology , Models, Neurological , Neural Conduction/physiology