Results 1 - 20 of 90
1.
Elife ; 12: 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712831

ABSTRACT

Representational drift refers to the dynamic nature of neural representations in the brain despite seemingly stable behavior. Although drift has been observed in many different brain regions, the mechanisms underlying it are not known. Since intrinsic neural excitability is suggested to play a key role in regulating memory allocation, fluctuations of excitability could bias the reactivation of previously stored memory ensembles and therefore act as a driver of drift. Here, we propose a rate-based plastic recurrent neural network with slow fluctuations of intrinsic excitability. We first show that subsequent reactivations of a neural ensemble can lead to drift of this ensemble. The model predicts that drift is induced by the co-activation of previously active neurons with highly excitable neurons, which leads to remodeling of the recurrent weights. Consistent with previous experimental work, the drifting ensemble is informative about its temporal history. Crucially, we show that the gradual nature of the drift is necessary for decoding temporal information from the activity of the ensemble. Finally, we show that the memory is preserved and can be decoded by an output neuron with plastic synapses to the main region.
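
A minimal sketch of the kind of mechanism this abstract describes, under assumed simplifications (binary-threshold rates, a fixed ensemble size of 20, learning and noise parameters chosen for illustration, not taken from the paper): slowly fluctuating excitability biases each reactivation toward currently excitable neurons, and Hebbian remodeling wires them in, so the active ensemble gradually drifts away from the imprinted one.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100                          # neurons in the recurrent region
W = np.zeros((N, N))             # plastic recurrent weights, soft-bounded in [0, 1]
eps = rng.normal(0, 0.5, N)      # intrinsic excitability, fluctuating slowly

ens = np.zeros(N)
ens[:20] = 1.0                   # initially imprinted ensemble
ens0 = ens.copy()

for reactivation in range(100):
    # slow, Ornstein-Uhlenbeck-like drift of excitability between reactivations
    eps += -0.2 * eps + 0.3 * rng.normal(0, 1, N)
    # recall driven by recurrent input, biased by current excitability
    drive = W @ ens + eps
    r = (drive >= np.sort(drive)[-20]).astype(float)  # 20 most driven neurons fire
    # Hebbian remodeling wires newly co-active neurons into the ensemble
    W += 0.1 * np.outer(r, r) * (1.0 - W)
    np.fill_diagonal(W, 0.0)
    ens = r

print("overlap with original ensemble:", (ens @ ens0) / ens0.sum())
```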


Subject(s)
Models, Neurological , Neuronal Plasticity , Neurons , Neurons/physiology , Neuronal Plasticity/physiology , Memory/physiology , Brain/physiology , Nerve Net/physiology , Animals , Humans , Action Potentials/physiology
2.
Sci Rep ; 14(1): 10536, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719897

ABSTRACT

Precisely timed and reliably emitted spikes are hypothesized to serve multiple functions, including improving the accuracy and reproducibility of encoding stimuli, memories, or behaviours across trials. When these spikes occur as a repeating sequence, they can be used to encode and decode a potential time series. Here, we show both analytically and in simulations that the error incurred in approximating a time series with precisely timed and reliably emitted spikes decreases linearly with the number of neurons or spikes used in the decoding. This was verified numerically with synthetically generated patterns of spikes. Further, we found that if spikes were imprecise in their timing or unreliable in their emission, the decoding error would decrease only sub-linearly. However, if the spike precision or spike reliability increased with network size, the error incurred in decoding a time series with sequences of spikes would maintain a linear decrease with network size: the spike precision had to increase linearly with network size, while the probability of spike failure had to decrease with the square root of the network size. Finally, we identified a candidate circuit to test this scaling relationship: the repeating sequences of spikes with sub-millisecond precision in area HVC (proper name) of the zebra finch. This scaling relationship can be tested using both neural data and song-spectrogram-based recordings while taking advantage of the natural fluctuation in HVC network size due to neurogenesis.
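
A small numerical sketch of the decoding setup the abstract analyses, with assumed ingredients (Gaussian readout kernels of 20 ms width, one spike per neuron, a least-squares decoder). It only illustrates how the approximation error shrinks with spike count and worsens with jitter and emission failure; it does not reproduce the paper's analytical scaling results.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)              # 1 s time series at 1 ms resolution
target = np.sin(2 * np.pi * 3 * t)       # signal to be decoded from spikes

def decoding_error(n_spikes, jitter=0.0, p_fail=0.0):
    """RMS error of the best linear readout of `target` from kernels locked
    to n_spikes spike times, with optional timing jitter and emission failure."""
    times = np.linspace(0, 1, n_spikes) + rng.normal(0, jitter, n_spikes)
    times = times[rng.random(n_spikes) >= p_fail]        # unreliable emission
    basis = np.exp(-0.5 * ((t[:, None] - times[None, :]) / 0.02) ** 2)
    w, *_ = np.linalg.lstsq(basis, target, rcond=None)   # optimal decoder
    return np.sqrt(np.mean((basis @ w - target) ** 2))

for n in (8, 16, 32, 64):
    print(n, round(decoding_error(n), 4),
          round(decoding_error(n, jitter=5e-3, p_fail=0.1), 4))
```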


Subject(s)
Action Potentials , Models, Neurological , Neurons , Animals , Action Potentials/physiology , Neurons/physiology , Vocalization, Animal/physiology , Reproducibility of Results
3.
Nat Commun ; 15(1): 4084, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38744847

ABSTRACT

Animals can quickly adapt learned movements to external perturbations, and their existing motor repertoire likely influences their ease of adaptation. Long-term learning causes lasting changes in neural connectivity, which shapes the activity patterns that can be produced during adaptation. Here, we examined how a neural population's existing activity patterns, acquired through de novo learning, affect subsequent adaptation by modeling motor cortical neural population dynamics with recurrent neural networks. We trained networks on different motor repertoires comprising varying numbers of movements, which they acquired following various learning experiences. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural 'structure': organization in the available population activity patterns. This structure facilitated adaptation, but only when the changes imposed by the perturbation were congruent with the organization of the inputs and the structure in neural activity acquired during de novo learning. These results highlight trade-offs in skill acquisition and demonstrate how different learning experiences can shape the geometrical properties of neural population activity and subsequent adaptation.


Subject(s)
Adaptation, Physiological , Learning , Models, Neurological , Motor Cortex , Learning/physiology , Adaptation, Physiological/physiology , Motor Cortex/physiology , Animals , Neural Networks, Computer , Neurons/physiology , Movement/physiology , Nerve Net/physiology
4.
PLoS Comput Biol ; 20(5): e1012110, 2024 May.
Article in English | MEDLINE | ID: mdl-38743789

ABSTRACT

Filopodia are thin synaptic protrusions that have long been known to play an important role in early development. Recently, they have been found to be more abundant in the adult cortex than previously thought, and more plastic than spines (button-shaped mature synapses). Inspired by these findings, we introduce a new model of synaptic plasticity that jointly describes learning of filopodia and spines. The model assumes that filopodia exhibit strongly competitive learning dynamics, similar to additive spike-timing-dependent plasticity (STDP). At the same time, it proposes that, if filopodia undergo sufficient potentiation, they consolidate into spines. Spines follow weakly competitive learning, classically associated with multiplicative, soft-bounded models of STDP. This makes spines more stable and sensitive to the fine structure of input correlations. We show that our learning rule has a selectivity comparable to additive STDP and captures input correlations as well as multiplicative models of STDP. We also show how it can protect previously formed memories and perform synaptic consolidation. Overall, our results can be seen as a phenomenological description of how filopodia and spines could cooperate to overcome the individual difficulties faced by strong and weak competition mechanisms.
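
A schematic of the two learning regimes the abstract describes, with an assumed consolidation threshold and STDP constants chosen purely for illustration: additive, hard-bounded updates while the contact is a filopodium, and multiplicative, soft-bounded updates once it has consolidated into a spine.

```python
import numpy as np

W_CONSOLIDATE = 0.5       # assumed potentiation level at which a filopodium becomes a spine
A_PLUS, A_MINUS = 0.01, 0.012
W_MAX = 1.0

def update(w, is_spine, dt_pre_post):
    """One pair-based STDP update. dt_pre_post = t_post - t_pre (s);
    positive timing differences potentiate, negative ones depress."""
    potentiate = dt_pre_post > 0
    trace = np.exp(-abs(dt_pre_post) / 0.02)       # 20 ms STDP window
    if not is_spine:
        # filopodium: additive (strongly competitive) STDP with hard bounds
        w += (A_PLUS if potentiate else -A_MINUS) * trace
        w = min(max(w, 0.0), W_MAX)
        if w >= W_CONSOLIDATE:
            is_spine = True                        # consolidation into a spine
    else:
        # spine: multiplicative, soft-bounded (weakly competitive) STDP
        if potentiate:
            w += A_PLUS * (W_MAX - w) * trace
        else:
            w -= A_MINUS * w * trace
    return w, is_spine

# repeated pre-before-post pairings potentiate and eventually consolidate
w, is_spine = 0.1, False
for _ in range(60):
    w, is_spine = update(w, is_spine, 0.005)
print(round(w, 3), "spine" if is_spine else "filopodium")
```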


Subject(s)
Dendritic Spines , Learning , Models, Neurological , Neuronal Plasticity , Pseudopodia , Pseudopodia/physiology , Neuronal Plasticity/physiology , Dendritic Spines/physiology , Learning/physiology , Animals , Humans , Computational Biology , Synapses/physiology , Neurons/physiology , Action Potentials/physiology
5.
Elife ; 12: 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38656279

ABSTRACT

The central tendency bias, or contraction bias, is a phenomenon where the judgment of the magnitude of items held in working memory appears to be biased toward the average of past observations. It is assumed to be an optimal strategy by the brain and is commonly thought of as an expression of the brain's ability to learn the statistical structure of sensory input. On the other hand, recency biases such as serial dependence are also commonly observed and are thought to reflect the content of working memory. Recent results from an auditory delayed comparison task in rats suggest that the two biases may be more related than previously thought: when the posterior parietal cortex (PPC) was silenced, both short-term and contraction biases were reduced. By proposing a model of the circuit that may be involved in generating the behavior, we show that a volatile working memory content, susceptible to shifting toward past sensory experience and thereby producing short-term sensory history biases, naturally leads to contraction bias. The errors, occurring at the level of individual trials, are sampled from the full distribution of the stimuli and are not due to a gradual shift of the memory toward the sensory distribution's mean. Our results are consistent with a broad set of behavioral findings and provide predictions of performance across different stimulus distributions, timings, and delay intervals, as well as of neuronal dynamics in putative working memory areas. Finally, we validate our model by performing a set of human psychophysics experiments on an auditory parametric working memory task.


During cognitive tasks, our brain needs to temporarily hold and manipulate the information it is processing to decide how best to respond. This ability, known as working memory, is influenced by how the brain represents and processes the sensory world around us, which can lead to biases such as 'central tendency'.

Consider an experiment where you are presented with a metal bar and asked to recall how long it was after a few seconds. Typically, our memories, averaged over many trials of repeating this memory recall test, appear to skew towards an average length, leading to the tendency to misremember the bar as being shorter or longer than it actually was. This central tendency occurs in most species, and is thought to be the result of the brain learning which sensory input is the most likely to occur out of the range of possibilities. Working memory is also influenced by short-term history or recency bias, where a recent past experience influences a current memory. Studies have shown that 'turning off' a region of the rat brain called the posterior parietal cortex removes the effects of both recency bias and central tendency on working memory.

Here, Boboeva et al. reveal that these two biases, which were thought to be controlled by separate mechanisms, may in fact be related. Building on the inactivation study, the team modelled a circuit of neurons that can give rise to the results observed in the rat experiments, as well as behavioural results in humans and primates. The computational model contained two modules: one representing a putative working memory, and another representing the posterior parietal cortex, which relays sensory information about past experiences. Boboeva et al. found that sensory inputs relayed from the posterior parietal cortex module led to recency biases in working memory. As a result, central tendency naturally emerged without needing to add assumptions to the model about which sensory input is the most likely to occur. The computational model was also able to replicate all known previous experimental findings, and made some predictions that were tested and confirmed by psychophysics tests on human participants.

The findings of Boboeva et al. provide a new potential mechanism for how central tendency emerges in working memory. The model suggests that, to achieve central tendency, prior knowledge of how a sensory stimulus is distributed in an environment is not required: it naturally emerges from a volatile working memory that is susceptible to errors. This is the first mechanistic model to unify these two sources of bias in working memory. In the future, this could help advance our understanding of certain psychiatric conditions in which working memory and sensory learning are impaired.


Subject(s)
Memory, Short-Term , Memory, Short-Term/physiology , Animals , Humans , Rats , Models, Neurological , Parietal Lobe/physiology
6.
J Neurosci ; 44(21): 2024 May 22.
Article in English | MEDLINE | ID: mdl-38561228

ABSTRACT

Memories are thought to be stored in neural ensembles known as engrams that are specifically reactivated during memory recall. Recent studies have found that memory engrams of two events that happened close in time tend to overlap in the hippocampus and the amygdala, and these overlaps have been shown to support memory linking. It has been hypothesized that engram overlaps arise from the mechanisms that regulate memory allocation itself, involving neural excitability, but the exact process remains unclear. Indeed, most theoretical studies focus on synaptic plasticity and little is known about the role of intrinsic plasticity, which could be mediated by neural excitability and serve as a complementary mechanism for forming memory engrams. Here, we developed a rate-based recurrent neural network that includes both synaptic plasticity and neural excitability. We obtained structural and functional overlap of memory engrams for contexts that are presented close in time, consistent with experimental and computational studies. We then investigated the role of excitability in memory allocation at the network level and unveiled competitive mechanisms driven by inhibition. This work suggests mechanisms underlying the role of intrinsic excitability in memory allocation and linking, and yields predictions regarding the formation and the overlap of memory engrams.


Subject(s)
Memory , Neuronal Plasticity , Humans , Memory/physiology , Neuronal Plasticity/physiology , Models, Neurological , Neurons/physiology , Nerve Net/physiology , Animals , Neural Networks, Computer , Hippocampus/physiology
7.
Elife ; 13: 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38334473

ABSTRACT

Generating synthetic locomotory and neural data is a useful yet cumbersome step commonly required to study theoretical models of the brain's role in spatial navigation. This process can be time-consuming and, without a common framework, makes it difficult to reproduce or compare studies that each generate test data in different ways. In response, we present RatInABox, an open-source Python toolkit designed to model realistic rodent locomotion and generate synthetic neural data from spatially modulated cell types. This software provides users with (i) the ability to construct one- or two-dimensional environments with configurable barriers and visual cues, (ii) a physically realistic random motion model fitted to experimental data, (iii) rapid online calculation of neural data for many of the known self-location- or velocity-selective cell types in the hippocampal formation (including place cells, grid cells, boundary vector cells, and head direction cells), and (iv) a framework for constructing custom cell types, multi-layer network models, and data- or policy-controlled motion trajectories. The motion and neural models are spatially and temporally continuous as well as topographically sensitive to boundary conditions and walls. We demonstrate that out-of-the-box parameter settings replicate many aspects of rodent foraging behaviour, such as velocity statistics and the tendency of rodents to over-explore walls. Numerous tutorial scripts are provided, including examples where RatInABox is used for decoding position from neural data or for solving a navigational reinforcement learning task. We hope this tool will significantly streamline computational research into the brain's role in navigation.
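
A minimal session following the quickstart pattern in the RatInABox documentation. Environment, Agent, PlaceCells and their update/plot methods are the toolkit's own names, but exact defaults and parameter keys may differ across releases, so treat this as a sketch against the documented API rather than a pinned example.

```python
from ratinabox.Environment import Environment
from ratinabox.Agent import Agent
from ratinabox.Neurons import PlaceCells

env = Environment()                       # default 1 m x 1 m two-dimensional box
agent = Agent(env)                        # random motion model fitted to rat data
place_cells = PlaceCells(agent, params={"n": 10})

for _ in range(int(60 / agent.dt)):       # simulate 60 s of foraging
    agent.update()                        # advance the smooth random trajectory
    place_cells.update()                  # firing rates at the new position

agent.plot_trajectory()                   # built-in plotting helpers
place_cells.plot_rate_timeseries()
```

Custom cell types, walls, and data- or policy-controlled trajectories follow the same construct-then-update pattern via the toolkit's other classes.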


The brain is a complex system made up of over 100 billion neurons that interact to give rise to all sorts of behaviours. To understand how neural interactions enable distinct behaviours, neuroscientists often build computational models that can reproduce some of the interactions and behaviours observed in the brain. Unfortunately, good computational models can be hard to build, and it can be wasteful for different groups of scientists to each write their own software to model a similar system. Instead, it is more effective for scientists to share their code so that different models can be quickly built from an identical set of core elements. These toolkits should be well made, free and easy to use.

One of the largest fields within neuroscience and machine learning concerns navigation: how does an organism, or an artificial agent, know where they are and how to get where they are going next? Scientists have identified many different types of neurons in the brain that are important for navigation. For example, 'place cells' fire whenever the animal is at a specific location, and 'head direction cells' fire when the animal's head is pointed in a particular direction. These and other neurons interact to support navigational behaviours.

Despite the importance of navigation, no single computational toolkit existed to model these behaviours and neural circuits. To fill this gap, George et al. developed RatInABox, a toolkit that contains the building blocks needed to study the brain's role in navigation. One module, called the 'Environment', contains code for making arenas of arbitrary shapes. A second module contains code describing how organisms or 'Agents' move around the arena and interact with walls, objects, and other agents. A final module, called 'Neurons', contains code that reproduces the response patterns of well-known cell types involved in navigation. This module also has code for more generic, trainable neurons that can be used to model how machines and organisms learn. Environments, Agents and Neurons can be combined and modified in many ways, allowing users to rapidly construct complex models and generate artificial datasets. A diversity of tutorials, including how the package can be used for reinforcement learning (the study of how agents learn optimal motions), are provided.

RatInABox will benefit many researchers interested in neuroscience and machine learning. It is particularly well positioned to bridge the gap between these two fields and drive a more brain-inspired approach to machine learning. RatInABox's userbase is fast growing, and it is quickly becoming one of the core computational tools used by scientists to understand the brain and navigation. Additionally, its ease of use and visual clarity mean that it can be used as an accessible teaching tool for learning about spatial representations and navigation.


Subject(s)
Hippocampus , Learning , Hippocampus/physiology , Neurons , Models, Neurological , Locomotion
8.
Nat Commun ; 15(1): 687, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38263408

ABSTRACT

To successfully learn real-life behavioral tasks, animals must pair actions or decisions with the task's complex structure, which can depend on abstract combinations of sensory stimuli and internal logic. The hippocampus is known to develop representations of this complex structure, forming a so-called "cognitive map". However, the precise biophysical mechanisms driving the emergence of task-relevant maps at the population level remain unclear. We propose a model in which plateau-based learning at the single-cell level, combined with reinforcement learning in an agent, leads to latent representational structures codependently evolving with behavior in a task-specific manner. In agreement with recent experimental data, we show that the model successfully develops latent structures essential for task-solving (cue-dependent "splitters") while excluding irrelevant ones. Finally, our model makes testable predictions concerning the co-dependent interactions between split representations and split behavioral policy during their evolution.


Subject(s)
Hippocampus , Learning , Animals , Biophysics , Policy , Reinforcement, Psychology
9.
Nat Neurosci ; 27(3): 561-572, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38243089

ABSTRACT

Episodic memories are encoded by experience-activated neuronal ensembles that remain necessary and sufficient for recall. However, the temporal evolution of memory engrams after initial encoding is unclear. In this study, we employed computational and experimental approaches to examine how the neural composition and selectivity of engrams change with memory consolidation. Our spiking neural network model yielded testable predictions: memories transition from unselective to selective as neurons drop out of and drop into engrams; inhibitory activity during recall is essential for memory selectivity; and inhibitory synaptic plasticity during memory consolidation is critical for engrams to become selective. Using activity-dependent labeling, longitudinal calcium imaging and a combination of optogenetic and chemogenetic manipulations in mouse dentate gyrus, we conducted contextual fear conditioning experiments that supported our model's predictions. Our results reveal that memory engrams are dynamic and that changes in engram composition mediated by inhibitory plasticity are crucial for the emergence of memory selectivity.


Subject(s)
Memory Consolidation , Memory, Episodic , Mice , Animals , Memory Consolidation/physiology , Mental Recall/physiology , Neurons/physiology , Fear/physiology
10.
Neuron ; 112(4): 628-645.e7, 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38070500

ABSTRACT

Attentional modulation of sensory processing is a key feature of cognition; however, its neural circuit basis is poorly understood. A candidate mechanism is the disinhibition of pyramidal cells through vasoactive intestinal peptide (VIP) and somatostatin (SOM)-positive interneurons. However, the interaction of attentional modulation and VIP-SOM disinhibition has never been directly tested. We used all-optical methods to bi-directionally manipulate VIP interneuron activity as mice performed a cross-modal attention-switching task. We measured the activities of VIP, SOM, and parvalbumin (PV)-positive interneurons and pyramidal neurons identified in the same tissue and found that although activity in all cell classes was modulated by both attention and VIP manipulation, their effects were orthogonal. Attention and VIP-SOM disinhibition relied on distinct patterns of changes in activity and reorganization of interactions between inhibitory and excitatory cells. Circuit modeling revealed a precise network architecture consistent with multiplexing strong yet non-interacting modulations in the same neural population.


Subject(s)
Nervous System Physiological Phenomena , Vasoactive Intestinal Peptide , Animals , Mice , Primary Visual Cortex , Sensation , Interneurons , Parvalbumins
11.
Nature ; 625(7994): 338-344, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38123682

ABSTRACT

The medial entorhinal cortex (MEC) hosts many of the brain's circuit elements for spatial navigation and episodic memory, operations that require neural activity to be organized across long durations of experience [1]. Whereas location is known to be encoded by spatially tuned cell types in this brain region [2,3], little is known about how the activity of entorhinal cells is tied together over time at behaviourally relevant time scales, in the second-to-minute regime. Here we show that MEC neuronal activity has the capacity to be organized into ultraslow oscillations, with periods ranging from tens of seconds to minutes. During these oscillations, the activity is further organized into periodic sequences. Oscillatory sequences manifested while mice ran at free pace on a rotating wheel in darkness, with no change in location or running direction and no scheduled rewards. The sequences involved nearly the entire cell population, and transcended epochs of immobility. Similar sequences were not observed in neighbouring parasubiculum or in visual cortex. Ultraslow oscillatory sequences in MEC may have the potential to couple neurons and circuits across extended time scales and serve as a template for new sequence formation during navigation and episodic memory formation.


Subject(s)
Entorhinal Cortex , Neurons , Periodicity , Animals , Mice , Entorhinal Cortex/cytology , Entorhinal Cortex/physiology , Neurons/physiology , Parahippocampal Gyrus/physiology , Running/physiology , Time Factors , Darkness , Visual Cortex/physiology , Neural Pathways , Spatial Navigation/physiology , Memory, Episodic
12.
bioRxiv ; 2023 Nov 11.
Article in English | MEDLINE | ID: mdl-37986793

ABSTRACT

Discrimination and generalization are crucial brain-wide functions for memory and object recognition that utilize pattern separation and completion computations. The circuit mechanisms supporting these operations remain enigmatic. We show that lateral entorhinal cortex glutamatergic (LEC-GLU) and GABAergic (LEC-GABA) projections are essential for object recognition memory. Silencing LEC-GLU during in vivo two-photon imaging increased the population of active CA3 pyramidal cells but decreased activity rates, suggesting a sparse coding function through local inhibition. Silencing LEC-GLU also decreased place cell remapping between different environments, validating that this circuit drives pattern separation and context discrimination. Optogenetic circuit mapping confirmed that LEC-GLU drives dominant feedforward inhibition to prevent CA3 somatic and dendritic spikes. However, conjunctively active LEC-GABA suppresses this local inhibition to disinhibit CA3 pyramidal neuron somata and selectively boost the integrative output of the LEC and CA3 recurrent network. LEC-GABA thus promotes pattern completion and context generalization. Indeed, without this disinhibitory input, CA3 place maps show decreased similarity between contexts. Our findings provide circuit mechanisms whereby long-range glutamatergic and GABAergic cortico-hippocampal inputs bidirectionally modulate pattern separation and completion, providing neuronal representations with a dynamic range for context discrimination and generalization.

13.
Nat Neurosci ; 26(12): 2158-2170, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37919424

ABSTRACT

Neuronal homeostasis prevents hyperactivity and hypoactivity. Age-related hyperactivity suggests homeostasis may be dysregulated in later life. However, plasticity mechanisms preventing age-related hyperactivity and their efficacy in later life are unclear. We identify the adult cortical plasticity response to elevated activity driven by sensory overstimulation, then test how plasticity changes with age. We use in vivo two-photon imaging of calcium-mediated cellular/synaptic activity, electrophysiology and c-Fos-activity tagging to show control of neuronal activity is dysregulated in the visual cortex in late adulthood. Specifically, in young adult cortex, mGluR5-dependent population-wide excitatory synaptic weakening and inhibitory synaptogenesis reduce cortical activity following overstimulation. In later life, these mechanisms are downregulated, so that overstimulation results in synaptic strengthening and elevated activity. We also find overstimulation disrupts cognition in older but not younger animals. We propose that specific plasticity mechanisms fail in later life dysregulating neuronal microcircuit homeostasis and that the age-related response to overstimulation can impact cognitive performance.


Subject(s)
Neurons , Visual Cortex , Animals , Neurons/physiology , Homeostasis/physiology , Visual Cortex/physiology , Neuronal Plasticity/physiology
14.
Res Sq ; 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37790466

ABSTRACT

The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), meaning they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that the firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, coined FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature-specific representations of time are learned, allowing neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX, dopamine acts as an instructive signal which helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. To show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements its principles. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
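
For contrast with FLEX, a minimal sketch of the conventional baseline the abstract critiques: TD(0) on a fixed "complete serial compound" temporal basis, where every post-cue time step is a distinct state with its own learned value. Parameters are illustrative; FLEX itself (learned, feature-specific time bases) is not implemented here.

```python
import numpy as np

T, cue, reward_t = 30, 5, 25          # trial length, cue onset, reward time (steps)
gamma, alpha = 0.98, 0.1
V = np.zeros(T + 1)                   # values of post-cue states; pre-cue V stays 0

for trial in range(2000):
    for t in range(T):
        r = 1.0 if t == reward_t else 0.0
        delta = r + gamma * V[t + 1] - V[t]   # reward prediction error (RPE)
        if t >= cue:                          # only cue-locked states carry weights
            V[t] += alpha * delta

# After learning, the RPE at reward time vanishes and a positive RPE appears
# at cue onset (the prediction jumps from 0 to V[cue]), as in dopamine recordings.
rpe = [(1.0 if t == reward_t else 0.0) + gamma * V[t + 1] - V[t] for t in range(T)]
print(f"RPE at cue: {rpe[cue - 1]:.2f}   RPE at reward: {rpe[reward_t]:.2f}")
```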

15.
PLoS Comput Biol ; 19(8): e1011362, 2023 08.
Article in English | MEDLINE | ID: mdl-37549193

ABSTRACT

The activity of neurons in the visual cortex is often characterized by tuning curves, which are thought to be shaped by Hebbian plasticity during development and sensory experience. This leads to the prediction that neural circuits should be organized such that neurons with similar functional preference are connected with stronger weights. In support of this idea, previous experimental and theoretical work has provided evidence for a model of the visual cortex characterized by such functional subnetworks. A recent experimental study, however, found that the postsynaptic preferred stimulus was defined by the total number of spines activated by a given stimulus, independent of their individual strength. While this result might seem to contradict previous literature, there are many factors that define how a given synaptic input influences postsynaptic selectivity. Here, we designed a computational model in which postsynaptic functional preference is defined by the number of inputs activated by a given stimulus. Using a plasticity rule in which synaptic weights tend to correlate with presynaptic selectivity, independent of the functional similarity between pre- and postsynaptic activity, we find that this model can be used to decode presented stimuli in a manner comparable to maximum likelihood inference.
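
A toy sketch of the abstract's core idea, with all tuning, weight, and threshold values assumed for illustration: weights correlate with presynaptic selectivity rather than with pre/post functional similarity, and the presented stimulus can be decoded from the number of activated inputs, independently of the individual weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_in = 8, 400

pref = rng.integers(0, n_stim, n_in)        # preferred stimulus of each input
selectivity = rng.uniform(0.5, 1.0, n_in)   # how sharply tuned each input is
# weights mirror the modelled rule (correlated with presynaptic selectivity);
# they are deliberately unused by the count-based decode below
w = 0.2 + 0.8 * selectivity

def input_rates(stim):
    """Presynaptic rates: selective inputs fire strongly for their stimulus."""
    rates = 0.1 * np.ones(n_in)
    rates[pref == stim] = selectivity[pref == stim]
    return rates

# decode the presented stimulus from the NUMBER of strongly activated inputs
stim = 3
active = input_rates(stim) > 0.3
decoded = np.argmax([np.sum(active & (pref == s)) for s in range(n_stim)])
print("presented:", stim, "decoded:", decoded)
```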


Subject(s)
Models, Neurological , Visual Cortex , Neurons/physiology , Visual Cortex/physiology , Neuronal Plasticity/physiology , Synapses/physiology
16.
Sci Rep ; 13(1): 12939, 2023 08 09.
Article in English | MEDLINE | ID: mdl-37558704

ABSTRACT

The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions; in other words, the network spends more time in states which encode high-probability stimuli. Starting from the neural assembly, increasingly thought to be the building block for computation in the brain, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
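
The abstract's central computation is inverse-transform sampling. Below is a minimal sketch with an assumed bimodal prior, using an empirical quantile table in place of the paper's biophysical network: once the inverse CDF is learned from a stream of samples, transforming uniform noise through it spontaneously recollects the prior, visiting high-probability stimuli more often.

```python
import numpy as np

rng = np.random.default_rng(0)

# stream of sensory samples drawn from an arbitrary (here bimodal) prior
data = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(3, 1.0, 5000)])

# "learn" the inverse cumulative distribution function: an empirical
# quantile table stands in for the network's unsupervised estimate
qs = np.linspace(0, 1, 201)
inv_cdf = np.quantile(data, qs)

# spontaneous recollection: push uniform noise through the learned inverse CDF
u = rng.random(10000)
samples = np.interp(u, qs, inv_cdf)

print("prior mean/std:  ", data.mean().round(2), data.std().round(2))
print("sampled mean/std:", samples.mean().round(2), samples.std().round(2))
```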


Subject(s)
Algorithms , Neural Networks, Computer , Action Potentials/physiology , Learning , Brain/physiology , Models, Neurological
17.
bioRxiv ; 2023 May 24.
Article in English | MEDLINE | ID: mdl-37293081

ABSTRACT

Animals can quickly adapt learned movements in response to external perturbations. Motor adaptation is likely influenced by an animal's existing movement repertoire, but the nature of this influence is unclear. Long-term learning causes lasting changes in neural connectivity which determine the activity patterns that can be produced. Here, we sought to understand how a neural population's activity repertoire, acquired through long-term learning, affects short-term adaptation by modeling motor cortical neural population dynamics during de novo learning and subsequent adaptation using recurrent neural networks. We trained these networks on different motor repertoires comprising varying numbers of movements. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural 'structure': organization created by the neural population activity patterns corresponding to each movement. This structure facilitated adaptation, but only when small changes in motor output were required, and when the structure of the network inputs, the neural activity space, and the perturbation were congruent. These results highlight trade-offs in skill acquisition and demonstrate how prior experience and external cues during learning can shape the geometrical properties of neural population activity as well as subsequent adaptation.

18.
Cell Rep ; 42(5): 112397, 2023 05 30.
Article in English | MEDLINE | ID: mdl-37074915

ABSTRACT

Excitatory synapses are typically described as single synaptic boutons (SSBs), where one presynaptic bouton contacts a single postsynaptic spine. Using serial section block-face scanning electron microscopy, we found that this textbook definition of the synapse does not fully apply to the CA1 region of the hippocampus. Roughly half of all excitatory synapses in the stratum oriens involved multi-synaptic boutons (MSBs), where a single presynaptic bouton containing multiple active zones contacted many postsynaptic spines (from 2 to 7) on the basal dendrites of different cells. The fraction of MSBs increased during development (from postnatal day 22 [P22] to P100) and decreased with distance from the soma. Curiously, synaptic properties such as active zone (AZ) or postsynaptic density (PSD) size exhibited less within-MSB variation when compared with neighboring SSBs, features that were confirmed by super-resolution light microscopy. Computer simulations suggest that these properties favor synchronous activity in CA1 networks.


Subject(s)
Hippocampus , Presynaptic Terminals , Synapses , Neurons , Dendrites
19.
Sci Rep ; 13(1): 6543, 2023 04 21.
Article in English | MEDLINE | ID: mdl-37085642

ABSTRACT

With Hebbian learning ('who fires together wires together'), well-known problems arise: Hebbian plasticity can cause unstable network dynamics and overwrite stored memories. Because the known homeostatic plasticity mechanisms tend to be too slow to combat unstable dynamics, it has been proposed that plasticity must be highly gated and synaptic strengths limited. While solving the issue of stability, gating and limiting plasticity does not solve the stability-plasticity dilemma. We propose that dendrites enable both stable network dynamics and considerable synaptic changes, as they allow the gating of plasticity in a compartment-specific manner. We investigate how gating plasticity influences network stability in plastic balanced spiking networks of neurons with dendrites. We compare how different ways to gate plasticity (modulating excitability, learning rate, or inhibition) increase stability, and investigate how dendritic versus perisomatic gating allows for different amounts of weight change in stable networks. We suggest that the compartmentalisation of pyramidal cells enables dendritic synaptic changes while maintaining stability, and we show that the coupling between dendrite and soma is critical for the plasticity-stability trade-off. Finally, we show that spatially restricted plasticity additionally improves stability.
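
A toy two-compartment sketch of compartment-specific gating as the abstract describes it, with assumed coupling, gain, and learning-rate values (the paper uses balanced spiking networks, not this rate model): Hebbian changes are confined to the gated dendritic compartment, and the dendro-somatic coupling sets how strongly those changes perturb somatic output.

```python
import numpy as np

rng = np.random.default_rng(0)

w_dend = rng.uniform(0, 1, 50)    # plastic dendritic synapses
w_soma = rng.uniform(0, 1, 50)    # perisomatic synapses, plasticity gated off
g_couple = 0.3                    # dendro-somatic coupling: the trade-off knob
gate = 1.0                        # compartment-specific plasticity gate

for step in range(1000):
    x = rng.random(50)                        # presynaptic rates
    v_dend = w_dend @ x                       # dendritic drive
    v_soma = w_soma @ x + g_couple * v_dend   # soma sees attenuated dendrite
    rate = np.tanh(v_soma)                    # somatic output rate
    # Hebbian update applied only in the gated dendritic compartment,
    # soft-bounded so that weights stay finite
    w_dend += gate * 0.01 * rate * x * (1 - w_dend)

print("dendritic drive:", round(float(v_dend), 2),
      "somatic rate:", round(float(rate), 2))
```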


Subject(s)
Dendrites , Neuronal Plasticity , Dendrites/physiology , Neuronal Plasticity/physiology , Learning , Pyramidal Cells/physiology , Homeostasis , Action Potentials/physiology
20.
Nat Commun ; 14(1): 1597, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36949048

ABSTRACT

Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities - inherited from over 500 million years of evolution - that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.


Subject(s)
Artificial Intelligence , Neurosciences , Animals , Humans