1.
bioRxiv ; 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38562772

ABSTRACT

Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
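
For concreteness, the closed-loop adaptation described above can be caricatured in a few lines. The sketch below is a hypothetical illustration, not the study's actual decoder: a linear map from firing rates to 2D cursor velocity is nudged, trial by trial, toward weights that would have driven the cursor straight at the target. The tuning model, learning rate, and noise level are all assumptions.

```python
import numpy as np

# Hypothetical sketch of closed-loop decoder adaptation (not the study's
# actual algorithm): a linear decoder maps firing rates to 2D cursor
# velocity, and after each trial its weights are nudged toward values that
# would have moved the cursor straight at the target.

rng = np.random.default_rng(0)
n_neurons, n_trials, lr = 30, 200, 0.05

W = rng.normal(scale=0.1, size=(2, n_neurons))    # decoder weights
tuning = rng.normal(size=(n_neurons, 2))          # fixed cosine-like tuning

for trial in range(n_trials):
    target = rng.normal(size=2)
    target /= np.linalg.norm(target)              # unit vector to the target
    rates = tuning @ target + rng.normal(scale=0.5, size=n_neurons)
    velocity = W @ rates                          # decoded cursor velocity
    error = target - velocity                     # task error on this trial
    W += lr * np.outer(error, rates) / n_neurons  # assistive decoder update

print("final task error:", np.linalg.norm(error))
```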

2.
Elife ; 12, 2023 01 19.
Article in English | MEDLINE | ID: mdl-36655738

ABSTRACT

By means of an expansive innervation, the serotonin (5-HT) neurons of the dorsal raphe nucleus (DRN) are positioned to enact coordinated modulation of circuits distributed across the entire brain in order to adaptively regulate behavior. Yet the network computations that emerge from the excitability and connectivity features of the DRN are still poorly understood. To gain insight into these computations, we began by carrying out a detailed electrophysiological characterization of genetically identified mouse 5-HT and somatostatin (SOM) neurons. We next developed a single-neuron modeling framework that combines the realism of Hodgkin-Huxley models with the simplicity and predictive power of generalized integrate-and-fire models. We found that feedforward inhibition of 5-HT neurons by heterogeneous SOM neurons implemented divisive inhibition, while endocannabinoid-mediated modulation of excitatory drive to the DRN increased the gain of 5-HT output. Our most striking finding was that the output of the DRN encodes a mixture of the intensity and temporal derivative of its input, and that the temporal derivative component dominates this mixture precisely when the input is increasing rapidly. This network computation primarily emerged from prominent adaptation mechanisms found in 5-HT neurons, including a previously undescribed dynamic threshold. By applying a bottom-up neural network modeling approach, our results suggest that the DRN is particularly apt to encode input changes over short timescales, reflecting one of the salient emerging computations that dominate its output to regulate behavior.


Subject(s)
Dorsal Raphe Nucleus , Serotonin , Mice , Animals , Dorsal Raphe Nucleus/physiology , Serotonin/physiology , Neurons/physiology , Neural Networks, Computer
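
A minimal sketch of one mechanism highlighted in the abstract, an integrate-and-fire neuron with a dynamic (moving) threshold, is given below. All parameter values and the ramp stimulus are illustrative assumptions, not fitted quantities from the paper.

```python
import numpy as np

# Minimal sketch of an integrate-and-fire neuron with a dynamic threshold,
# one of the adaptation mechanisms the abstract highlights in 5-HT neurons.

dt = 0.1                                   # time step (ms)
t = np.arange(0.0, 2000.0, dt)             # 2 s of simulated time
I = np.where(t > 500.0, 0.5 + 0.002 * (t - 500.0), 0.0)   # ramping input

tau_m, tau_th = 20.0, 200.0                # membrane / threshold time constants
v, theta = 0.0, 1.0                        # voltage and threshold variables
spike_times = []

for i in range(t.size):
    v += dt * (-v + I[i]) / tau_m
    theta += dt * (1.0 - theta) / tau_th   # threshold decays back to baseline
    if v >= theta:
        spike_times.append(t[i])
        v = 0.0                            # voltage reset
        theta += 0.3                       # spike-triggered threshold increase

print(len(spike_times), "spikes; the moving threshold adapts the response")
```
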
4.
Sci Rep ; 11(1): 15910, 2021 08 05.
Article in English | MEDLINE | ID: mdl-34354118

ABSTRACT

The burst coding hypothesis posits that the occurrence of sudden high-frequency patterns of action potentials constitutes a salient syllable of the neural code. Many neurons, however, do not produce clearly demarcated bursts, an observation invoked to rule out the pervasiveness of this coding scheme across brain areas and cell types. Here we ask how detrimental ambiguous spike patterns, those that are neither clearly bursts nor isolated spikes, are for neuronal information transfer. We addressed this question using information theory and computational simulations. By quantifying how information transmission depends on firing statistics, we found that the information transmitted is not strongly influenced by the presence of clearly demarcated modes in the interspike interval distribution, a feature often used to identify the presence of burst coding. Instead, we found that neurons having unimodal interval distributions were still able to ascribe different meanings to bursts and isolated spikes. In this regime, information transmission depends on the dynamical properties of the synapses as well as the length and relative frequency of bursts. Furthermore, we found that common metrics used to quantify burstiness were unable to predict the degree to which bursts could be used to carry information. Our results provide guiding principles for the implementation of coding strategies based on spike-timing patterns, and show that even unimodal firing statistics can be consistent with a bivariate neural code.
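
For concreteness, here is a toy sketch of the event-level parsing that such a bivariate (burst versus isolated-spike) code presupposes. The 10 ms intra-burst threshold is an arbitrary choice for illustration; note that the ISI distribution here is unimodal, yet the parsing is still well defined.

```python
import numpy as np

# Toy sketch: split a spike train into bursts and isolated spikes using a
# single interspike-interval (ISI) threshold, even though the ISI
# distribution is unimodal.

rng = np.random.default_rng(1)
isis = rng.exponential(scale=30.0, size=500)    # unimodal ISI statistics (ms)
spike_times = np.cumsum(isis)

threshold = 10.0                                # ms: maximum intra-burst ISI
events, current = [], [spike_times[0]]
for prev, nxt in zip(spike_times[:-1], spike_times[1:]):
    if nxt - prev <= threshold:
        current.append(nxt)                     # spike extends the burst
    else:
        events.append(current)                  # close event, start a new one
        current = [nxt]
events.append(current)

n_bursts = sum(1 for e in events if len(e) > 1)
print(n_bursts, "bursts,", len(events) - n_bursts, "isolated spikes")
```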

5.
Nat Neurosci ; 24(7): 1010-1019, 2021 07.
Article in English | MEDLINE | ID: mdl-33986551

ABSTRACT

Synaptic plasticity is believed to be a key physiological mechanism for learning. It is well established that it depends on pre- and postsynaptic activity. However, models that rely solely on pre- and postsynaptic activity for synaptic changes have, so far, not been able to account for learning complex tasks that demand credit assignment in hierarchical networks. Here we show that if synaptic plasticity is regulated by high-frequency bursts of spikes, then pyramidal neurons higher in a hierarchical circuit can coordinate the plasticity of lower-level connections. Using simulations and mathematical analyses, we demonstrate that, when paired with short-term synaptic dynamics, regenerative activity in the apical dendrites and synaptic plasticity in feedback pathways, a burst-dependent learning rule can solve challenging tasks that require deep network architectures. Our results demonstrate that well-known properties of dendrites, synapses and synaptic plasticity are sufficient to enable sophisticated learning in hierarchical circuits.


Subject(s)
Deep Learning , Learning/physiology , Models, Neurological , Neuronal Plasticity/physiology , Pyramidal Cells/physiology , Animals , Humans
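
A heavily simplified sketch of a burst-dependent plasticity rule in the spirit of the abstract follows: the sign of each weight update is set by whether the postsynaptic event was a burst, relative to a running estimate of the burst probability. Eligibility traces, short-term synaptic dynamics, and dendritic gating are all omitted; the event rates and constants are assumptions.

```python
import numpy as np

# Simplified burst-dependent plasticity: potentiate when the postsynaptic
# event is a burst, depress when it is a single spike, referenced to a
# running burst-probability estimate.

rng = np.random.default_rng(2)
eta, tau_avg = 0.01, 100.0
w, p_bar = 0.2, 0.5                  # synaptic weight, burst-prob estimate

for step in range(1000):
    if rng.random() < 0.2:           # presynaptic spike this step
        if rng.random() < 0.3:       # postsynaptic event is elicited
            burst = 1.0 if rng.random() < 0.6 else 0.0  # burst or single spike
            w += eta * (burst - p_bar)      # signed, burst-referenced update
            p_bar += (burst - p_bar) / tau_avg

print("final weight:", round(w, 3))
```
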
6.
Curr Opin Neurobiol ; 58: 78-85, 2019 10.
Article in English | MEDLINE | ID: mdl-31419712

ABSTRACT

Dendrites are much more than passive neuronal components. Mounting experimental evidence and decades of computational work have decisively shown that dendrites leverage a host of nonlinear biophysical phenomena and actively participate in sophisticated computations, at the level of the single neuron and at the level of the network. However, a coherent view of their processing power is still lacking and dendrites are largely neglected in neural network models. Here, we describe four classes of dendritic information processing and delineate their implications at the algorithmic level. We propose that beyond the well-known spatiotemporal filtering of their inputs, dendrites are capable of selecting, routing and multiplexing information. By separating dendritic processing from axonal outputs, neuronal networks gain a degree of freedom with implications for perception and learning.


Subject(s)
Dendrites , Models, Neurological , Action Potentials , Learning , Neural Networks, Computer
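
One common formalization of this idea (our illustration, not the review's) is the two-layer abstraction of dendritic processing: inputs are combined nonlinearly within dendritic subunits before being summed at the soma, whereas a point neuron sums all inputs linearly under a single output nonlinearity.

```python
import numpy as np

# Two-layer abstraction of dendritic subunits versus a point neuron.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
x = rng.normal(size=12)                # 12 synaptic inputs
branches = x.reshape(4, 3)             # 4 dendritic branches, 3 synapses each

point_output = sigmoid(x.sum())                              # point neuron
dendritic_output = sigmoid(sigmoid(branches.sum(axis=1)).sum())

print(round(point_output, 3), round(dendritic_output, 3))    # generally differ
```
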
7.
PLoS Comput Biol ; 13(8): e1005691, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28787447

ABSTRACT

Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity, the density of active neurons per unit time, is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics.


Subject(s)
Action Potentials/physiology , Nerve Net/physiology , Neurons/physiology , Animals , Brain/physiopathology , Computational Biology , Models, Neurological , Stochastic Processes
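
The deterministic skeleton of the refractory-density method can be sketched as an age-structured transport equation integrated along its characteristics, as below. The hazard function and grid are assumptions for illustration, and the paper's actual contribution, the finite-size Gaussian noise term, is omitted.

```python
import numpy as np

# Deterministic skeleton of the refractory-density approach: p[i] is the
# probability mass of neurons at refractory age i*dt. Each step, mass
# escapes at an age-dependent hazard rho(a), ages advance along the
# characteristics, and the escaping mass re-enters at age zero as the
# population activity. The paper's SPDE adds noise to this flow.

dt, n_ages, steps = 0.001, 500, 2000          # s, age bins, iterations
ages = np.arange(n_ages) * dt
rho = 50.0 / (1.0 + np.exp(-(ages - 0.02) / 0.005))   # hazard rate (1/s)

p = np.zeros(n_ages)
p[0] = 1.0                                     # all mass starts at age zero
for _ in range(steps):
    escape = rho * p * dt                      # mass firing in this time step
    p = p - escape
    A = escape.sum() / dt                      # population activity (Hz)
    shifted = np.empty_like(p)
    shifted[1:] = p[:-1]                       # advect: every age grows by dt
    shifted[-1] += p[-1]                       # oldest ages pool in last bin
    shifted[0] = A * dt                        # firing mass re-enters at age 0
    p = shifted

print("stationary population activity:", round(A, 1), "Hz")
```
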
8.
Article in English | MEDLINE | ID: mdl-26274199

ABSTRACT

We demonstrate how rhythmic activity can arise in neural networks from feedforward rather than recurrent circuitry and, in so doing, we provide a mechanism capable of explaining the temporal decorrelation of γ-band oscillations. We compare the spiking activity of a delayed recurrent network of inhibitory neurons with that of a feedforward network with the same neural properties and axonal delays. Paradoxically, these very different connectivities can yield very similar spike-train statistics in response to correlated input. This happens when neurons are noisy and axonal delays are short. A Taylor expansion of the feedback network's susceptibility, or frequency-dependent gain function, can then be truncated at first order to a good approximation, thus matching the feedforward net's susceptibility. The feedback network is known to display oscillations; these oscillations imply that the spiking activity of the population is felt by all neurons within the network, leading to direct spike correlations in a given neuron. On the other hand, in the output layer of the feedforward net, the interaction between the external drive and the delayed feedforward projection of this drive by the input layer causes indirect spike correlations: spikes fired by a given output layer neuron are correlated only through the activity of the input layer neurons. High noise and short delays partially bridge the gap between these two types of correlation, yielding similar spike-train statistics for both networks. This similarity is even stronger when the delay is distributed, as confirmed by linear response theory.


Subject(s)
Models, Neurological , Neurons/physiology , Action Potentials/physiology , Feedback , Gamma Rhythm/physiology , Periodicity
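
The first-order matching argument can be written compactly. In the notation below (ours, not the paper's), \chi_0(\omega) is the susceptibility of the uncoupled population, g the inhibitory feedback gain, and \tau the axonal delay; when noise keeps |g\,\chi_0| small, the delayed-feedback susceptibility truncates, at first order, to that of the feedforward net:

\chi_{\mathrm{fb}}(\omega) = \frac{\chi_0(\omega)}{1 + g\,\chi_0(\omega)\,e^{-i\omega\tau}} \approx \chi_0(\omega) - g\,\chi_0(\omega)^2\,e^{-i\omega\tau} = \chi_{\mathrm{ff}}(\omega), \qquad |g\,\chi_0(\omega)| \ll 1 .
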
9.
Article in English | MEDLINE | ID: mdl-24616694

ABSTRACT

The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry, also known as "open-loop feedback", which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise was very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
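
A threshold-linear caricature (ours; the paper analyzes a spiking network) separates two of the three regimes: inhibition arriving as a fixed offset shifts the f-I curve subtractively, while inhibition that grows with the input rescales it divisively. The non-monotonic regime depends on the noise and delay structure of the full model and is not reproduced at this level.

```python
import numpy as np

# Threshold-linear caricature of feedforward gain control: a fixed
# inhibitory offset acts subtractively, input-proportional inhibition
# acts divisively.

I = np.linspace(0.0, 10.0, 101)                  # input drive

def f(drive):
    return np.maximum(drive - 1.0, 0.0)          # threshold-linear f-I curve

control = f(I)
subtractive = f(I - 2.0)                         # fixed inhibitory offset
divisive = f(I * (1.0 - 0.5))                    # inhibition scaling with input

print(control[-1], subtractive[-1], divisive[-1])
```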
