Results 1 - 20 of 74
1.
Biosystems ; 222: 104780, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36179938

ABSTRACT

We present a comparison of the intrinsic saturation of firing frequency in four simple neural models: leaky integrate-and-fire model, leaky integrate-and-fire model with reversal potentials, two-point leaky integrate-and-fire model, and a two-point leaky integrate-and-fire model with reversal potentials. "Two-point" means that the equivalent circuit has two nodes (dendritic and somatic) instead of one (somatic only). The results suggest that the reversal potential increases the slope of the "firing rate vs input" curve due to a smaller effective membrane time constant, but does not necessarily induce saturation of the firing rate. The two-point model without the reversal potential does not limit the voltage or the firing rate. In contrast to the previous models, the two-point model with the reversal potential limits the asymptotic voltage and the firing rate, which is the main result of this paper. The case of excitatory inputs is considered first and followed by the case of both excitatory and inhibitory inputs.
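The single-point model described above can be sketched in a few lines. The following is a minimal Euler simulation of the one-point leaky integrate-and-fire neuron with constant input; the parameter values are illustrative and not taken from the paper.

```python
import math

def lif_firing_rate(mu, tau=0.01, theta=0.015, dt=1e-5, t_max=1.0):
    """Firing rate of a one-point leaky integrate-and-fire neuron,
    dV/dt = -V/tau + mu, with spike-and-reset at threshold theta.
    Forward-Euler integration, constant (noise-free) input mu."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-v / tau + mu)
        if v >= theta:
            spikes += 1
            v = 0.0  # instantaneous reset after a spike
    return spikes / t_max

# For suprathreshold constant input (mu * tau > theta) the rate follows
# the closed form 1 / (tau * log(mu*tau / (mu*tau - theta))), which grows
# without bound in mu -- i.e., no intrinsic saturation, consistent with
# the abstract's observation for the plain one-point model.
```

Adding reversal potentials or a second (dendritic) node, as the paper does, changes the effective input term; this sketch covers only the simplest of the four compared models.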


Subject(s)
Models, Neurological , Neurons , Neurons/physiology , Membrane Potentials/physiology , Physical Phenomena , Action Potentials/physiology
2.
Front Comput Neurosci ; 14: 569049, 2020.
Article in English | MEDLINE | ID: mdl-33328945

ABSTRACT

The Fano factor, defined as the variance-to-mean ratio of spike counts in a time window, is often used to measure the variability of neuronal spike trains. However, despite its transparent definition, careless use of the Fano factor can easily lead to distorted or even wrong results. One of the problems is the unclear dependence of the Fano factor on the spiking rate, which is often neglected or handled insufficiently. In this paper we aim to explore this problem in more detail and to study the possible solution, which is to evaluate the Fano factor in the operational time. We use equilibrium renewal and Markov renewal processes as spike train models to describe the method in detail, and we provide an illustration on experimental data.
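The quantity under study has a direct empirical estimator. Below is a minimal sketch of the windowed Fano factor applied to a simulated homogeneous Poisson train; names and parameter values are illustrative.

```python
import random

def fano_factor(spike_times, window, t_max):
    """Variance-to-mean ratio of spike counts in consecutive windows."""
    n_win = int(t_max // window)
    counts = [0] * n_win
    for s in spike_times:
        k = int(s // window)
        if k < n_win:
            counts[k] += 1
    mean = sum(counts) / n_win
    var = sum((c - mean) ** 2 for c in counts) / (n_win - 1)
    return var / mean

# Homogeneous Poisson spike train: Fano factor should be close to 1.
rng = random.Random(0)
rate, t_max = 50.0, 200.0
t, spikes = 0.0, []
while True:
    t += rng.expovariate(rate)
    if t >= t_max:
        break
    spikes.append(t)
ff = fano_factor(spikes, 1.0, t_max)
```

Evaluating the same statistic after rescaling time by the cumulative firing rate (the operational time of the abstract) is what removes the rate dependence; that rescaling step is omitted in this sketch.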

3.
Chaos ; 28(10): 106305, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30384662

ABSTRACT

The rate coding hypothesis is the oldest and still one of the most accepted and investigated scenarios in neuronal activity analyses. However, the actual neuronal firing rate, while informally understood, can be mathematically defined in several different ways. These definitions yield distinct results; even their average values may differ dramatically for the simplest neuronal models. Such an inconsistency, together with the importance of "firing rate," motivates us to revisit the classical concept of the instantaneous firing rate. We confirm that different notions of firing rate can in fact be compatible, at least in terms of their averages, by carefully discerning the time instant at which the neuronal activity is observed. Two general cases are distinguished: either the inspection time is synchronised with a reference time or with the neuronal spiking. The statistical properties of the instantaneous firing rate, including parameter estimation, are analyzed, and compatibility with the intuitively understood concept is demonstrated.
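The distinction between inspecting the process at a reference time versus at the spikes can be seen in a toy renewal simulation: the time-averaged rate equals 1/E[ISI], while the spike-triggered average of the inverse interspike interval equals E[1/ISI], which is strictly larger for non-Poisson trains. This is an illustrative sketch, not the paper's exact estimators.

```python
import random

rng = random.Random(1)
# Gamma-distributed ISIs, shape 4, scale 0.25 -> mean ISI = 1 s, sub-Poisson.
isis = [rng.gammavariate(4, 0.25) for _ in range(20000)]

# "Clock-triggered" rate: total spikes over total time = 1 / mean ISI.
rate_time = len(isis) / sum(isis)
# "Spike-triggered" rate: average of 1/ISI over spikes = E[1/ISI].
rate_spike = sum(1.0 / x for x in isis) / len(isis)
# For Gamma(k, s) ISIs, E[1/ISI] = 1 / ((k - 1) * s), here about 1.33,
# versus rate_time near 1.0 -- two incompatible-looking "firing rates"
# from the same spike train, reconciled by specifying the inspection time.
```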


Subject(s)
Action Potentials , Nerve Net , Neurons/physiology , Axons/physiology , Computer Simulation , Humans , Models, Neurological , Models, Statistical , Normal Distribution , Poisson Distribution , Probability , Stochastic Processes , Synapses , Synaptic Transmission
4.
Chaos ; 28(10): 103119, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30384666

ABSTRACT

The Jacobi process is a stochastic diffusion characterized by a linear drift and a special form of multiplicative noise which keeps the process confined between two boundaries. One example of such a process can be obtained as the diffusion limit of Stein's model of membrane depolarization, which includes both excitatory and inhibitory reversal potentials. The reversal potentials create the two boundaries between which the process is confined. Solving the first-passage-time problem for the Jacobi process, we found closed-form expressions for the mean, variance, and third moment that are easy to implement numerically. The first two moments are used here to determine the role played by the parameters of the neuronal model; namely, the effect of multiplicative noise on the output of the Jacobi neuronal model with input-dependent parameters is examined in detail and compared with the properties of the generic Jacobi diffusion. The dependence of the model parameters on the rate of inhibition turns out to be of primary importance for observing a change in the slope of the response curves. This dependence also affects the variability of the output, as reflected by the coefficient of variation, which often takes values larger than one and is not always a monotonic function of the rate of excitation.
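A drift-plus-multiplicative-noise process of this type is easy to simulate. The following Euler-Maruyama sketch uses the standard Jacobi form in which the noise amplitude vanishes at the two boundaries; the parameterization and values are illustrative, and in the neuronal reading v_i and v_e play the role of the inhibitory and excitatory reversal potentials.

```python
import math
import random

def jacobi_path(mu=0.0, tau=1.0, sigma=0.5, v_i=-1.0, v_e=1.0,
                x0=0.0, dt=1e-3, n_steps=5000, rng=None):
    """Euler-Maruyama sketch of a Jacobi diffusion,
    dX = -((X - mu)/tau) dt + sigma*sqrt((X - v_i)*(v_e - X)) dW.
    The multiplicative noise vanishes at v_i and v_e, confining the
    process; we additionally clip to guard against discretization
    overshoot of the exact dynamics."""
    if rng is None:
        rng = random.Random(0)
    x, path = x0, []
    for _ in range(n_steps):
        amp = sigma * math.sqrt(max((x - v_i) * (v_e - x), 0.0))
        x += dt * (-(x - mu) / tau) + amp * math.sqrt(dt) * rng.gauss(0, 1)
        x = min(max(x, v_i), v_e)
        path.append(x)
    return path
```

The closed-form first-passage moments reported in the abstract make such simulation unnecessary for the paper's analysis; the sketch only illustrates the confinement mechanism.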

5.
Biosystems ; 161: 41-45, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28756162

ABSTRACT

We investigated the estimation accuracy of synaptic conductances by analyzing simulated voltage traces generated by a Hodgkin-Huxley type model. We show that even a single spike substantially deteriorates the estimation. We also demonstrate that two approaches, namely, negative current injection and spike removal, can ameliorate this deterioration.


Subject(s)
Models, Neurological , Neurons/physiology , Synapses/physiology , Action Potentials , Computer Simulation , Humans
6.
Phys Rev E ; 95(2-1): 022310, 2017 Feb.
Article in English | MEDLINE | ID: mdl-28297875

ABSTRACT

It is widely accepted that neuronal firing rates contain a significant amount of information about the stimulus intensity. Nevertheless, theoretical studies on the coding accuracy inferred from the exact spike counting distributions are rare. We present an analysis based on the number of observed spikes assuming the stochastic perfect integrate-and-fire model with a change point, representing the stimulus onset, for which we calculate the corresponding Fisher information to investigate the accuracy of rate coding. We analyze the effect of changing the duration of the time window and the influence of several parameters of the model, in particular the level of the presynaptic spontaneous activity and the level of random fluctuation of the membrane potential, which can be interpreted as noise of the system. The results show that the Fisher information is nonmonotonic with respect to the length of the observation period. This counterintuitive result is caused by the discrete nature of the count of spikes. We observe also that the signal can be enhanced by noise, since the Fisher information is nonmonotonic with respect to the level of spontaneous activity and, in some cases, also with respect to the level of fluctuation of the membrane potential.

7.
Neural Comput ; 28(10): 2162-80, 2016 10.
Article in English | MEDLINE | ID: mdl-27557098

ABSTRACT

The time to the first spike after stimulus onset typically varies with the stimulation intensity. Experimental evidence suggests that neural systems use such response latency to encode information about the stimulus. We investigate the decoding accuracy of the latency code in relation to the level of noise in the form of presynaptic spontaneous activity. Paradoxically, the optimal performance is achieved at a nonzero level of noise and suprathreshold stimulus intensities. We argue that this phenomenon results from the influence of the spontaneous activity on the stabilization of the membrane potential in the absence of stimulation. The reported decoding accuracy improvement represents a novel manifestation of the noise-aided signal enhancement.

8.
Biol Cybern ; 110(2-3): 193-200, 2016 06.
Article in English | MEDLINE | ID: mdl-27246170

ABSTRACT

Statistical properties of spike trains, as well as other neurophysiological data, suggest a number of mathematical models of neurons. These models range from entirely descriptive ones to those deduced from the properties of real neurons. One of them, the diffusion leaky integrate-and-fire neuronal model, based on the Ornstein-Uhlenbeck (OU) stochastic process restricted by an absorbing barrier, can describe a wide range of neuronal activity in terms of its parameters, which are readily associated with known physiological mechanisms. The other model, the Gamma renewal process, is descriptive, and its parameters only reflect the observed experimental data or assumed theoretical properties. These two commonly used models are related here: we show under which conditions the Gamma model is an output of the diffusion OU model. In some cases, the Gamma distribution turns out to be unrealistic to achieve for the employed parameters of the OU process.
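The OU model's interspike intervals are the first-passage times of the diffusion through the absorbing barrier. A minimal Euler-Maruyama sketch of that first-passage time is given below; the parameters are illustrative (a suprathreshold regime, so every trajectory eventually fires), and the comparison with the Gamma model in the paper is analytical, not simulation-based.

```python
import math
import random

def ou_first_passage(mu=2.0, tau=0.01, sigma=0.02, theta=0.015,
                     dt=1e-4, rng=None):
    """First-passage time of an Ornstein-Uhlenbeck membrane potential
    dV = (-V/tau + mu) dt + sigma dW through the firing threshold theta,
    starting from the reset value V = 0 (Euler-Maruyama scheme)."""
    if rng is None:
        rng = random.Random(0)
    v, t = 0.0, 0.0
    while v < theta:
        v += dt * (-v / tau + mu) + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
    return t

rng = random.Random(0)
isis = [ou_first_passage(rng=rng) for _ in range(500)]
mean = sum(isis) / len(isis)
cv = (sum((x - mean) ** 2 for x in isis) / len(isis)) ** 0.5 / mean
```

A sample like `isis` is what one would fit with a Gamma distribution when asking, as the paper does, whether the Gamma description is attainable for given OU parameters.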


Subject(s)
Diffusion , Models, Neurological , Neurons , Cybernetics , Stochastic Processes
9.
Math Biosci Eng ; 13(3): i, 2016 06 01.
Article in English | MEDLINE | ID: mdl-27106178

ABSTRACT

This Special Issue of Mathematical Biosciences and Engineering contains 11 selected papers presented at the Neural Coding 2014 workshop. The workshop was held in the royal city of Versailles in France, October 6-10, 2014. This was the 11th of a series of international workshops on this subject, the first held in Prague (1995), then Versailles (1997), Osaka (1999), Plymouth (2001), Aulla (2003), Marburg (2005), Montevideo (2007), Tainan (2009), Limassol (2010), and again in Prague (2012). Selected papers from Prague were also published as a special issue of Mathematical Biosciences and Engineering, and in this way a tradition was started. Similarly to the previous workshops, this was a single-track multidisciplinary event bringing together experimental and computational neuroscientists. The Neural Coding Workshops are traditionally biennial symposia. They are relatively small in size and interdisciplinary, with major emphasis on the search for common principles in neural coding. The workshop was conceived to bring together scientists from different disciplines for an in-depth discussion of mathematical model-building and computational strategies. Further information on the meeting can be found at the NC2014 website at https://colloque6.inra.fr/neural_coding_2014. The meeting was supported by the French National Institute for Agricultural Research, the world's leading institution in this field.
Understanding how the brain processes information is one of the most challenging subjects in neuroscience. The papers presented in this special issue show a small corner of the huge diversity of this field and illustrate how scientists with different backgrounds approach this vast subject. The diversity of disciplines engaged in these investigations is remarkable: biologists, mathematicians, physicists, psychologists, computer scientists, and statisticians all have original tools and ideas by which to try to elucidate the underlying mechanisms. In this issue, emphasis is put on mathematical modeling of single neurons. A variety of problems in computational neuroscience, accompanied by a rich diversity of mathematical tools and approaches, are presented. We hope it will inspire and challenge the readers in their own research. We would like to thank the authors for their valuable contributions and the referees for their priceless effort of reviewing the manuscripts. Finally, we would like to thank Yang Kuang for supporting us and making this publication possible.


Subject(s)
Models, Theoretical , Neurosciences , Education , Research
10.
Sci Rep ; 6: 23810, 2016 Mar 29.
Article in English | MEDLINE | ID: mdl-27021783

ABSTRACT

Sensory neurons are often reported to adjust their coding accuracy to the stimulus statistics. The observed match is not always perfect and the maximal accuracy does not align with the most frequent stimuli. As an alternative to a physiological explanation we show that the match critically depends on the chosen stimulus measurement scale. More generally, we argue that if we measure the stimulus intensity on the scale which is proportional to the perception intensity, an improved adjustment in the coding accuracy is revealed. The unique feature of stimulus units based on the psychophysical scale is that the coding accuracy can be meaningfully compared for different stimuli intensities, unlike in the standard case of a metric scale.


Subject(s)
Adaptation, Physiological/physiology , Algorithms , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Acoustic Stimulation , Animals , Evoked Potentials/physiology , Humans , Nerve Net/cytology , Physical Stimulation , Psychophysics
11.
Article in English | MEDLINE | ID: mdl-26651674

ABSTRACT

A variability measure of the times of uniform events based on a shot-noise process is proposed and studied. The measure is inspired by the Fano factor, which we generalize by considering the time-weighted influence of the events given by a shot-noise response function. The sequence of events is assumed to be an equilibrium renewal process, and based on this assumption we present formulas describing the behavior of the variability measure. The formulas are derived for a general response function, restricted only by some natural conditions, but the main focus is given to the shot noise with exponential decrease. The proposed measure is analyzed and compared with the Fano factor.

12.
Eur J Pharm Sci ; 78: 171-6, 2015 Oct 12.
Article in English | MEDLINE | ID: mdl-26215461

ABSTRACT

The aim is to determine how well the parameters of the Weibull model of dissolution can be estimated, depending on the times chosen to measure the empirical data. The approach is based on the theory of Fisher information. We show that, in order to obtain the best estimates, the data should be collected at time instants when the tablets actively dissolve, or in their close proximity. This is in sharp contrast with commonly used experimental protocols, in which sampling times are distributed rather uniformly.
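The region where tablets "actively dissolve" is where the Weibull profile changes fastest. The following sketch locates that region numerically for illustrative parameter values; the Fisher-information calculation itself is not reproduced here.

```python
import math

def weibull_dissolved(t, a=1.0, b=2.0):
    """Fraction dissolved at time t under the Weibull dissolution model,
    F(t) = 1 - exp(-(t/a)**b), with scale a and shape b."""
    return 1.0 - math.exp(-((t / a) ** b))

# The dissolution rate dF/dt peaks where the profile is steepest; by the
# abstract's argument, samples near this point are the most informative
# for estimating (a, b). Locate the peak by a simple grid search.
grid = [i * 0.01 for i in range(1, 400)]
rates = [(weibull_dissolved(t + 1e-4) - weibull_dissolved(t - 1e-4)) / 2e-4
         for t in grid]
t_star = grid[rates.index(max(rates))]
# For a = 1, b = 2 the analytic mode of dF/dt is a*((b-1)/b)**(1/b),
# i.e. about 0.707, so t_star should land near there.
```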


Subject(s)
Models, Theoretical , Salicylic Acid/chemistry , Acrylates/chemistry , Mannitol/chemistry , Solubility , Time Factors
13.
Biosystems ; 136: 113-9, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26151393

ABSTRACT

Tinnitus is one of the leading disorders of hearing, with no effective cure, as its pathophysiological mechanisms remain unclear. While sensitivity to sound is well known to be affected, exactly how intensity coding per se is altered remains unclear. To address this issue, we used a salicylate-overdose animal model of tinnitus to measure auditory cortical evoked potentials at various stimulus levels, and analyzed on a single-trial basis the response strength and its variance for the computation of the lower bound of the Fisher information. Based on the Fisher information profiles, we compared the precision, or efficiency, of intensity coding before and after salicylate treatment. We found that after salicylate treatment, intensity coding was unexpectedly improved rather than impaired. Moreover, the improvement varied in a sound-dependent way. The observed changes are likely due to central compensatory mechanisms that are activated during tinnitus and bring out the full capacity of intensity coding, which is expressed only in part under normal conditions.


Subject(s)
Acoustic Stimulation/methods , Evoked Potentials, Auditory , Information Storage and Retrieval/methods , Loudness Perception , Tinnitus/physiopathology , Animals , Rats , Rats, Sprague-Dawley , Salicylates , Tinnitus/chemically induced
14.
Article in English | MEDLINE | ID: mdl-26042025

ABSTRACT

To understand information processing in neuronal circuits, it is important to infer how a sensory stimulus impacts on the synaptic input to a neuron. An increase in neuronal firing during the stimulation results from pure excitation or from a combination of excitation and inhibition. Here, we develop a method for estimating the rates of the excitatory and inhibitory synaptic inputs from a membrane voltage trace of a neuron. The method is based on a modified Ornstein-Uhlenbeck neuronal model, which aims to describe the stimulation effects on the synaptic input. The method is tested using a single-compartment neuron model with a realistic description of synaptic inputs, and it is applied to an intracellular voltage trace recorded from an auditory neuron in vivo. We find that the excitatory and inhibitory inputs increase during stimulation, suggesting that the acoustic stimuli are encoded by a combination of excitation and inhibition.

15.
J Neural Eng ; 12(3): 036012, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25946561

ABSTRACT

OBJECTIVE: One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. APPROACH: In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound when the neuronal population size is limited. MAIN RESULTS: We show that under broad conditions the optimal decoding displays a threshold-like shift in performance as a function of the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio, or observation time yields a dramatic gain in the decoding precision. SIGNIFICANCE: We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.


Subject(s)
Action Potentials/physiology , Algorithms , Evoked Potentials/physiology , Models, Neurological , Neurons/physiology , Perception/physiology , Animals , Computer Simulation , Humans , Reproducibility of Results , Sensitivity and Specificity
16.
Biosystems ; 136: 23-34, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25939679

ABSTRACT

Neuronal response latency is usually vaguely defined as the delay between the stimulus onset and the beginning of the response. It contains important information for the understanding of the temporal code. For this reason, the detection of the response latency has been extensively studied in the last twenty years, yielding different estimation methods. They can be divided into two classes, one of them including methods based on detecting an intensity change in the firing rate profile after the stimulus onset and the other containing methods based on detection of spikes evoked by the stimulation using interspike intervals and spike times. The aim of this paper is to present a review of the main techniques proposed in both classes, highlighting their advantages and shortcomings.


Subject(s)
Action Potentials/physiology , Algorithms , Evoked Potentials/physiology , Models, Neurological , Neurons/physiology , Reaction Time/physiology , Animals , Computer Simulation , Humans , Models, Statistical , Nerve Net/physiology
17.
Biol Cybern ; 109(3): 389-99, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25910437

ABSTRACT

The input of Stein's model of a single neuron is usually described by using a Poisson process, which is assumed to represent the behaviour of spikes pooled from a large number of presynaptic spike trains. However, such a description of the input is not always appropriate as the variability cannot be separated from the intensity. Therefore, we create and study Stein's model with a more general input, a sum of equilibrium renewal processes. The mean and variance of the membrane potential are derived for this model. Using these formulas and numerical simulations, the model is analyzed to study the influence of the input variability on the properties of the membrane potential and the output spike trains. The generalized Stein's model is compared with the original Stein's model with Poissonian input using the relative difference of variances of membrane potential at steady state and the integral square error of output interspike intervals. Both of the criteria show large differences between the models for input with high variability.


Subject(s)
Membrane Potentials/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Animals , Humans , Stochastic Processes
18.
Neural Comput ; 27(5): 1051-7, 2015 May.
Article in English | MEDLINE | ID: mdl-25710092

ABSTRACT

It is often tacitly assumed that the accuracy with which a stimulus can be decoded is entirely determined by the properties of the neuronal system. We challenge this perspective by showing that the identification of pure tone intensities in an auditory nerve fiber depends on both the stochastic response model and the arbitrarily chosen stimulus units. We expose an apparently paradoxical situation in which it is impossible to decide whether loud or quiet tones are encoded more precisely. Our conclusion reaches beyond the topic of auditory neuroscience, however, as we show that the choice of stimulus scale is an integral part of the neural coding problem and not just a matter of convenience.


Subject(s)
Algorithms , Cochlear Nerve/physiology , Loudness Perception/physiology , Models, Neurological , Nerve Fibers/physiology , Acoustic Stimulation/methods , Computer Simulation/statistics & numerical data , Humans , Neural Conduction/physiology , Stochastic Processes
19.
Biol Cybern ; 108(4): 475-93, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24962079

ABSTRACT

Stimulus response latency is the time period between the presentation of a stimulus and the occurrence of a change in the neural firing evoked by the stimulation. The response latency has been explored and estimation methods proposed mostly for excitatory stimuli, which means that the neuron reacts to the stimulus by an increase in the firing rate. We focus on the estimation of the response latency in the case of inhibitory stimuli. Models used in this paper represent two different descriptions of response latency. We consider either the latency to be constant across trials or to be a random variable. In the case of random latency, special attention is given to models with selective interaction. The aim is to propose methods for estimation of the latency or the parameters of its distribution. Parameters are estimated by four different methods: method of moments, maximum-likelihood method, a method comparing an empirical and a theoretical cumulative distribution function and a method based on the Laplace transform of a probability density function. All four methods are applied on simulated data and compared.


Subject(s)
Action Potentials/physiology , Models, Neurological , Neural Inhibition/physiology , Neurons/physiology , Reaction Time/physiology , Afferent Pathways/physiology , Computer Simulation , Humans , Models, Statistical , Physical Stimulation , Time Factors
20.
Math Biosci Eng ; 11(1): 105-23, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24245675

ABSTRACT

The Fano factor is one of the most widely used measures of variability of spike trains. Its standard estimator is the ratio of the sample variance to the sample mean of spike counts observed in a time window, and the quality of the estimator strongly depends on the length of the window. We investigate this dependence under the assumption that the spike train behaves as an equilibrium renewal process. We show which characteristics of the spike train have a large effect on the estimator bias; namely, the effect of the refractory period is analytically evaluated. Next, we derive an approximate asymptotic formula for the mean square error of the estimator, which can also be used to find the minimum of the error when estimating from single spike trains. The accuracy of the Fano factor estimator is compared with the accuracy of the estimator based on the squared coefficient of variation. All the results are illustrated for spike trains with gamma and inverse Gaussian probability distributions of interspike intervals. Finally, we discuss how to select a suitable observation window for Fano factor estimation.
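The window-length dependence analyzed above is easy to reproduce in simulation. The sketch below estimates the Fano factor of a gamma-ISI renewal train (one of the two ISI families the abstract mentions) with a short and a long window; all parameter values are illustrative. For a sub-Poisson renewal process the estimate drifts from near 1 at very short windows toward the asymptotic value CV^2 at long windows.

```python
import random

def fano(counts):
    """Sample variance-to-mean ratio of a list of spike counts."""
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    return v / m

def renewal_counts(isi_gen, t_max, window, rng):
    """Spike counts of a renewal process in consecutive windows."""
    n_win = int(t_max / window)
    counts = [0] * n_win
    t = 0.0
    while True:
        t += isi_gen(rng)
        if t >= t_max:
            return counts
        counts[min(int(t / window), n_win - 1)] += 1

# Gamma ISIs, shape 4, scale 0.25: mean ISI = 1 s, CV^2 = 0.25.
gamma_isi = lambda r: r.gammavariate(4, 0.25)
ff_short = fano(renewal_counts(gamma_isi, 5000.0, 0.2, random.Random(0)))
ff_long = fano(renewal_counts(gamma_isi, 5000.0, 20.0, random.Random(0)))
```

Note that this ordinary renewal start slightly differs from the equilibrium renewal process assumed in the paper; over a long observation period the effect on the estimates is small.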


Subject(s)
Action Potentials/physiology , Neurons/metabolism , Neurons/physiology , Algorithms , Humans , Models, Neurological , Models, Theoretical , Normal Distribution , Poisson Distribution , Probability , Reproducibility of Results