Results 1 - 6 of 6
1.
PLoS Comput Biol; 19(8): e1011342, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37603559

ABSTRACT

Bayesian Active Learning (BAL) is an efficient framework for learning the parameters of a model, in which input stimuli are selected to maximize the mutual information between the observations and the unknown parameters. However, the applicability of BAL to experiments is limited, as it requires performing high-dimensional integrations and optimizations in real time. Current methods are either too time-consuming or only applicable to specific models. Here, we propose an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments. We apply our method to the problem of estimating the parameters of a chemical synapse from the postsynaptic responses to evoked presynaptic action potentials. Using synthetic data and synaptic whole-cell patch-clamp recordings, we show that our method can improve the precision of model-based inferences, thereby paving the way towards more systematic and efficient experimental designs in physiology.
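
The core loop lends itself to a compact sketch. The toy example below (a hypothetical one-parameter Gaussian observation model, not the paper's synapse model) illustrates the sampling-based idea: approximate the posterior by samples and pick the stimulus that maximizes a Monte Carlo estimate of the mutual information between the next observation and the parameters.

```python
# Minimal sketch of sampling-based Bayesian active learning on a toy model
# (illustrative stand-in, not the paper's ESB-BAL implementation).
import numpy as np

rng = np.random.default_rng(0)

def likelihood(y, x, theta, sigma=0.5):
    """Gaussian observation model y ~ N(tanh(theta * x), sigma^2); toy tuning curve."""
    return np.exp(-0.5 * ((y - np.tanh(theta * x)) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def sample_observation(x, theta, sigma=0.5):
    return np.tanh(theta * x) + sigma * rng.normal()

def mutual_information(x, theta_samples, n_y=64):
    """MC estimate of I(y; theta | x) = E[log p(y|theta,x) - log p(y|x)]."""
    mi = 0.0
    for _ in range(n_y):
        theta = rng.choice(theta_samples)                # theta ~ posterior samples
        y = sample_observation(x, theta)                 # y ~ p(y | theta, x)
        cond = likelihood(y, x, theta)                   # p(y | theta, x)
        marg = likelihood(y, x, theta_samples).mean()    # p(y | x), marginalized by sampling
        mi += np.log(cond + 1e-12) - np.log(marg + 1e-12)
    return mi / n_y

# Posterior over theta approximated by samples (here: the prior, before any data).
theta_samples = rng.normal(loc=0.0, scale=1.0, size=500)
candidate_stimuli = np.linspace(-2.0, 2.0, 21)

# Select the most informative stimulus for the next trial.
best_x = max(candidate_stimuli, key=lambda x: mutual_information(x, theta_samples))
print(f"next stimulus: {best_x:.2f}")
```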


Subject(s)
Problem-Based Learning , Research Design , Bayes Theorem , Action Potentials , Patch-Clamp Techniques
2.
PLoS Comput Biol; 18(2): e1009721, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35196324

ABSTRACT

Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, learning as optimisation fails to account for a time-varying environment during the learning process, and the resulting point estimate in parameter space does not account for uncertainty. Here, we frame learning as filtering, i.e., a principled method for including time and parameter uncertainty. We derive the filtering-based learning rule for a spiking neuronal network, the Synaptic Filter, and show its computational and biological relevance. For the computational relevance, we show that filtering improves the weight estimation performance compared to a gradient learning rule with an optimal learning rate. The dynamics of the mean of the Synaptic Filter are consistent with spike-timing-dependent plasticity (STDP), while the dynamics of the variance make novel predictions regarding spike-timing-dependent changes of EPSP variability. Moreover, the Synaptic Filter explains experimentally observed negative correlations between homo- and heterosynaptic plasticity.
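
A linear-Gaussian analogue makes the idea concrete. The sketch below (a scalar Kalman filter tracking a drifting weight; an illustrative stand-in, not the paper's spiking derivation) shows how filtering handles both ingredients at once: the process-noise term models the time-varying environment, and the posterior variance acts as an uncertainty-scaled learning rate.

```python
# Learning-as-filtering in its simplest form: a scalar Kalman filter tracking
# a drifting synaptic weight (toy analogue, not the Synaptic Filter itself).
import numpy as np

rng = np.random.default_rng(1)
q, r = 1e-3, 0.25                  # process (weight drift) and observation noise variances
w_true, mu, var = 1.0, 0.0, 1.0    # hidden weight; posterior mean and variance

for t in range(2000):
    w_true += np.sqrt(q) * rng.normal()           # weight drifts: time-varying environment
    x = rng.normal()                              # presynaptic input
    y = w_true * x + np.sqrt(r) * rng.normal()    # noisy postsynaptic observation

    var += q                                      # predict: uncertainty grows with drift
    gain = var * x / (var * x**2 + r)             # Kalman gain = adaptive learning rate
    mu += gain * (y - mu * x)                     # update mean from the prediction error
    var *= (1.0 - gain * x)                       # update (shrink) the posterior variance

print(f"true weight {w_true:.3f}, estimate {mu:.3f} +/- {np.sqrt(var):.3f}")
```

Unlike a gradient rule with a fixed learning rate, the effective step size here is set by the filter itself and never stops adapting, since the process noise keeps the posterior variance from collapsing to zero.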


Subject(s)
Models, Neurological , Neuronal Plasticity , Action Potentials/physiology , Algorithms , Learning/physiology , Neuronal Plasticity/physiology , Neurons/physiology
3.
PLoS Comput Biol; 16(4): e1007640, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32271761

ABSTRACT

This is a PLOS Computational Biology Education paper. The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Because a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle of optimization potentially implemented by the brain. In practice, researchers often consider the vector of partial derivatives as the gradient. However, the definition of the gradient and the notion of a steepest direction depend on the choice of a metric. Because the choice of the metric involves a large number of degrees of freedom, the predictive power of models that are based on gradient descent must be called into question, unless there are strong constraints on the choice of the metric. Here, we provide a didactic review of the mathematics of gradient descent, illustrate common pitfalls of using gradient descent as a principle of brain function with examples from the literature, and propose ways forward to constrain the metric.
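
To make the point concrete, the following sketch (toy quadratic cost and metrics chosen for illustration, not taken from the paper) runs steepest descent under two different metrics. Both use the same vector of partial derivatives, yet follow different trajectories, because the update is dtheta = -eta * M^-1 * grad L and "steepest" is defined relative to M.

```python
# The steepest-descent direction depends on the metric M (toy illustration).
import numpy as np

def grad(theta):
    # Vector of partial derivatives of the quadratic cost L = 0.5*(theta0^2 + 10*theta1^2).
    return np.array([theta[0], 10.0 * theta[1]])

def steepest_descent(metric, theta0, eta=0.05, steps=200):
    theta = np.array(theta0, dtype=float)
    m_inv = np.linalg.inv(metric)
    path = [theta.copy()]
    for _ in range(steps):
        theta = theta - eta * m_inv @ grad(theta)   # gradient w.r.t. the metric M
        path.append(theta.copy())
    return np.array(path)

euclidean = np.eye(2)               # the implicit metric behind "vanilla" gradient descent
rescaled = np.diag([1.0, 10.0])     # an equally legitimate alternative metric

path_a = steepest_descent(euclidean, [1.0, 1.0])
path_b = steepest_descent(rescaled, [1.0, 1.0])
print("after 10 steps:", path_a[10], "vs", path_b[10])   # same cost, different dynamics
```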


Subject(s)
Biophysics/methods , Brain/diagnostic imaging , Brain/physiology , Computational Biology/methods , Algorithms , Computer Simulation , Humans , Image Processing, Computer-Assisted , Kinetics , Mathematics , Neural Networks, Computer , Neurosciences/methods
4.
Sci Rep; 7(1): 17585, 2017 Dec 11.
Article in English | MEDLINE | ID: mdl-29229925

ABSTRACT

A correction to this article has been published and is linked from the HTML version of this paper; the error has been fixed in the paper.

5.
Sci Rep; 7(1): 8722, 2017 Aug 18.
Article in English | MEDLINE | ID: mdl-28821729

ABSTRACT

The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies minimal constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter which does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum likelihood approach. The NPF holds the promise of avoiding the 'curse of dimensionality', and we demonstrate numerically its capability to outperform weighted particle filters in higher dimensions and when the number of particles is limited.
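
A discrete-time caricature conveys the flavour of such a filter. In the sketch below (a linear toy model with a fixed feedback gain; the paper learns the gain online by maximum likelihood), each particle follows the prior dynamics plus an innovation term pulling it toward the observation, with no importance weights involved.

```python
# Unweighted, sampling-based filtering in the spirit of the NPF (simplified
# discrete-time toy; the fixed gain is an assumption for illustration).
import numpy as np

rng = np.random.default_rng(2)
a, proc_sd, obs_sd, gain = 0.95, 0.3, 0.5, 0.2
n_particles, T = 100, 500

x = 0.0                              # hidden state (e.g., prey position)
z = rng.normal(size=n_particles)     # particles = "neuronal" samples of the posterior
errs = []

for t in range(T):
    x = a * x + proc_sd * rng.normal()                    # hidden dynamics
    y = x + obs_sd * rng.normal()                         # noisy sensory input
    z = a * z + proc_sd * rng.normal(size=n_particles)    # prior dynamics per particle
    z += gain * (y - z)                                   # feedback term; no weights
    errs.append((z.mean() - x) ** 2)

print(f"posterior-mean MSE: {np.mean(errs):.4f}")
```

Because every particle stays equally weighted, there is no weight degeneracy to resample away, which is the property that lets such filters scale more gracefully with dimension.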


Subject(s)
Learning , Neurons/physiology , Nonlinear Dynamics , Perception/physiology , Algorithms , Bayes Theorem , Models, Neurological , Sensation
6.
PLoS One; 10(11): e0142435, 2015.
Article in English | MEDLINE | ID: mdl-26571371

ABSTRACT

Single-neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. However, they are of little use for characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single-neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, and arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. Finally, we show that this model can be used to characterize, and therefore precisely compare, various intracellular in vivo recordings from different animals and experimental conditions.
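
A generative sketch of this model class might look as follows (all parameter values are illustrative, not fitted): the subthreshold potential is drawn from a Gaussian process, here an Ornstein-Uhlenbeck process, and spikes are emitted with an intensity that depends nonlinearly on the potential and on the time since the last spike.

```python
# Toy generative model: GP subthreshold voltage + history-dependent spike
# intensity (illustrative parameters, not the paper's fitted model).
import numpy as np

rng = np.random.default_rng(3)
dt, T = 1e-3, 5.0
n = int(T / dt)

# Subthreshold potential: an OU process, i.e. a GP with exponential
# autocovariance (the model class allows arbitrary autocovariances).
tau, sigma, v_rest = 0.02, 40.0, -65.0
v = np.empty(n)
v[0] = v_rest
for t in range(1, n):
    v[t] = v[t-1] + dt * (v_rest - v[t-1]) / tau + sigma * np.sqrt(dt) * rng.normal()

# Spike emission: intensity depends nonlinearly on V and on spike history.
beta, v_soft, tau_ref = 1.0, -60.0, 0.005
spikes, last_spike = [], -np.inf

for t in range(n):
    refractory = np.exp(-(t * dt - last_spike) / tau_ref)     # history dependence
    lam = np.exp(beta * (v[t] - v_soft)) * (1.0 - refractory) # nonlinear intensity
    if rng.random() < 1.0 - np.exp(-lam * dt):                # thin a Poisson process
        last_spike = t * dt
        spikes.append(last_spike)

print(f"{len(spikes)} spikes in {T:.0f} s, mean rate {len(spikes)/T:.1f} Hz")
```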


Subject(s)
Neurons/physiology , Action Potentials/physiology , Algorithms , Animals , Computer Simulation , Fourier Analysis , Linear Models , Membrane Potentials , Models, Neurological , Neurons/metabolism , Normal Distribution , Probability , Stochastic Processes , Synapses/physiology