









1.
Article in English | MEDLINE | ID: mdl-24032969

ABSTRACT

Many different kinds of noise are experimentally observed in the brain. Among them, we study a model of noisy chemical synapses and obtain critical avalanches in the spatiotemporal activity of the neural network. Neurons and synapses are modeled by dynamical maps. We discuss the neuronal and synaptic properties relevant to achieving the critical state. We verify that networks of functionally excitable neurons with fast synapses present power-law avalanches, due to rebound spiking dynamics. We also discuss the measurement of neuronal avalanches by subsampling our data, shedding light on the experimental search for self-organized criticality in neural networks.
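As a hedged illustration of the subsampling issue raised in the abstract above (not the paper's actual model or data), the sketch below extracts avalanche sizes from a toy spike raster and from a subsampled version of it, mimicking an electrode array that sees only part of the network:

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_sizes(raster):
    """Avalanche sizes: total spikes in each contiguous run of active time bins."""
    activity = raster.sum(axis=0)          # spikes per time bin, summed over neurons
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a                   # avalanche continues
        elif current > 0:
            sizes.append(current)          # silent bin ends the avalanche
            current = 0
    if current > 0:
        sizes.append(current)              # trailing avalanche, if any
    return np.array(sizes)

# Toy raster: 100 neurons x 1000 time bins with sparse random activity
# (illustrative only; the paper uses map-based neurons, not random spikes).
raster = (rng.random((100, 1000)) < 0.02).astype(int)

full_sizes = avalanche_sizes(raster)
# Subsampling: observe only 10 of the 100 neurons.
sub_sizes = avalanche_sizes(raster[rng.choice(100, 10, replace=False)])
```

Under subsampling, avalanches fragment and shrink, which is one reason experimental size distributions can deviate from the full-network power law.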


Subject(s)
Models, Neurological , Nerve Net/physiology , Synapses/physiology , Nerve Net/cytology , Neurons/cytology
2.
J Neurosci Methods ; 220(2): 116-30, 2013 Nov 15.
Article in English | MEDLINE | ID: mdl-23916623

ABSTRACT

This review gives a short historical account of the excitable-maps approach to modeling neurons and neuronal networks. Some early models, due to Pasemann (1993), Chialvo (1995), and Kinouchi and Tragtenberg (1996), are compared with more recent proposals by Rulkov (2002) and Izhikevich (2003). We also review map-based schemes for electrical and chemical synapses, as well as recent findings such as critical avalanches in map-based neural networks. We conclude with suggestions for further work in this area, such as more efficient maps, compartmental modeling, and closer dynamical comparison with conductance-based models.
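Of the maps surveyed above, Rulkov's two-variable map is compact enough to state directly. The sketch below iterates its widely used chaotic form, x_{n+1} = α/(1 + x_n²) + y_n, y_{n+1} = y_n − μ(x_n + 1 − σ); the parameter values and initial condition are illustrative choices, not the review's:

```python
import numpy as np

def rulkov(alpha=4.1, sigma=0.1, mu=0.001, steps=5000):
    """Iterate the chaotic Rulkov map: a fast variable x (membrane-potential-like)
    driven by a slowly drifting variable y, producing spiking/bursting."""
    x, y = -1.0, -3.0
    xs = np.empty(steps)
    for n in range(steps):
        # Simultaneous update: both right-hand sides use the old (x, y).
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0 - sigma)
        xs[n] = x
    return xs

trace = rulkov()
```

Because the whole neuron is one cheap algebraic update per time step, maps like this scale to large networks far more easily than conductance-based integration.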


Subject(s)
Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Action Potentials , Animals , Humans
3.
Phys Rev E Stat Nonlin Soft Matter Phys ; 75(2 Pt 1): 021911, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17358371

ABSTRACT

We study the transient regime of type-II biophysical neuron models and determine the scaling behavior of relaxation times τ near but below the repetitive-firing critical current: τ ≈ C(I_c − I)^(−Δ). For both the Hodgkin-Huxley and Morris-Lecar models we find that the critical exponent is independent of the numerical integration time step and that both systems belong to the same universality class, with Δ = 1/2. For appropriately chosen parameters, the FitzHugh-Nagumo model presents the same generic transient behavior, but the critical region is significantly smaller. We propose an experiment that may reveal nontrivial critical exponents in the squid axon.
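The scaling law τ ≈ C(I_c − I)^(−Δ) can be checked numerically by fitting a log-log slope. The sketch below recovers Δ = 1/2 from synthetic data; the constants C and I_c here are purely illustrative, not fitted Hodgkin-Huxley values:

```python
import numpy as np

I_c, C, delta = 6.2, 1.0, 0.5            # illustrative critical current, prefactor, exponent
I = I_c - np.logspace(-4, -1, 20)        # currents just below the critical current
tau = C * (I_c - I) ** (-delta)          # synthetic relaxation times obeying the scaling law

# Fit log(tau) vs log(I_c - I); the slope estimates -Delta,
# exactly as one would do with relaxation times measured from simulation.
slope, intercept = np.polyfit(np.log(I_c - I), np.log(tau), 1)
delta_est = -slope                        # recovered exponent, ~0.5
```

In practice the fit window matters: for the FitzHugh-Nagumo model the abstract notes the critical region is small, so currents must be taken very close to I_c for the power law to be visible.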


Subject(s)
Action Potentials/physiology , Biological Clocks/physiology , Differential Threshold/physiology , Models, Neurological , Neuronal Plasticity/physiology , Neurons/physiology , Adaptation, Physiological/physiology , Animals , Axons/physiology , Computer Simulation , Decapodiformes , Time Factors
4.
Phys Rev Lett ; 87(1): 010603, 2001 Jul 02.
Article in English | MEDLINE | ID: mdl-11461455

ABSTRACT

Deterministic walks over a random set of N points in one and two dimensions (d = 1, 2) are considered. Points ("cities") are randomly scattered in R^d following a uniform distribution. A walker ("tourist"), at each time step, goes to the nearest-neighbor city that has not been visited in the past τ steps. Each initial city leads to a different trajectory composed of a transient part and a final p-cycle attractor. Transient times (for d = 1, 2) follow an exponential law with a τ-dependent decay time, but the density of p-cycles can be approximately described by D(p) ∝ p^(−α(τ)). For τ ≫ 1 and τ/N ≪ 1, the exponent is independent of τ. Some analytical results are given for the d = 1 case.
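The tourist-walk rule above is fully specified, so it can be sketched directly. The city count, memory τ, and step budget below are illustrative; the cycle detector exploits the fact that the dynamics is deterministic once the current city and the last τ visited cities are known:

```python
import numpy as np

rng = np.random.default_rng(1)

def tourist_walk(points, start, tau, max_steps=10000):
    """Deterministic tourist walk: from each city, move to the nearest city
    not visited in the previous tau steps (the current city is also excluded)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    path = [start]
    for _ in range(max_steps):
        forbidden = set(path[-(tau + 1):])          # current city + last tau visited
        candidates = [c for c in range(len(points)) if c not in forbidden]
        if not candidates:
            break
        path.append(min(candidates, key=lambda c: d[path[-1], c]))
    return path

def cycle_period(path, tau):
    """Period p of the final attractor: smallest shift that repeats the
    dynamical state, i.e. the window of the last tau+1 visited cities."""
    state = lambda i: tuple(path[i - tau: i + 1])
    last = len(path) - 1
    for p in range(1, last - tau):
        if state(last) == state(last - p):
            return p
    return None

pts = rng.random((50, 2))                 # 50 cities uniform in the unit square
path = tourist_walk(pts, start=0, tau=1)
p = cycle_period(path, tau=1)             # period of the attractor reached from city 0
```

Running this from every start city and histogramming the periods p is the natural way to estimate the cycle density D(p) mentioned in the abstract.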

5.
Article in English | MEDLINE | ID: mdl-11138138

ABSTRACT

We study pruning strategies in simple perceptrons subjected to supervised learning. Our analytical results, obtained through the statistical-mechanics approach to learning theory, are independent of the learning algorithm used in the training process. We calculate the post-training distribution P(J) of synaptic weights, which depends only on the overlap ρ₀ achieved by the learning algorithm before pruning and the fraction κ of relevant weights in the teacher network. From this distribution, we calculate the optimal pruning strategy for deleting small weights. The optimal pruning threshold grows from zero as θ_opt(ρ₀, κ) ≈ [ρ₀ − ρ_c(κ)]^(1/2) above some critical value ρ_c(κ). Thus, the elimination of weak synapses enhances network performance only after a critical learning period. Possible implications for biological pruning phenomena are discussed.
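The pruning operation itself, deleting weights below a threshold θ, is easy to sketch. The toy weight distribution below (a fraction κ of large "relevant" weights plus small residual noise) is an illustrative stand-in, not the paper's analytical P(J):

```python
import numpy as np

rng = np.random.default_rng(2)

def prune(weights, theta):
    """Magnitude pruning: zero out every synaptic weight with |J| < theta."""
    pruned = weights.copy()
    pruned[np.abs(pruned) < theta] = 0.0
    return pruned

# Toy post-training weights mimicking the teacher-student setup:
# a fraction kappa of relevant (large) weights, the rest small noise.
kappa, N = 0.3, 1000
relevant = rng.normal(0.0, 1.0, int(kappa * N))
noise = rng.normal(0.0, 0.05, N - int(kappa * N))
J = np.concatenate([relevant, noise])

J_pruned = prune(J, theta=0.2)
sparsity = np.mean(J_pruned == 0.0)       # fraction of deleted synapses
```

The abstract's point is about when to apply this: below the critical overlap ρ_c(κ) the optimal θ is zero, i.e. pruning too early in learning only hurts.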


Subject(s)
Neural Networks, Computer , Algorithms , Animals , Biophysical Phenomena , Biophysics , Brain/growth & development , Brain/physiology , Humans , Learning , Models, Neurological , Nerve Net/growth & development , Nerve Net/physiology , Synapses/physiology
6.
Article in English | MEDLINE | ID: mdl-11969450

ABSTRACT

A random-neighbor extremal stick-slip model is introduced. In the thermodynamic limit, the distribution of states has a simple analytical form, and the mean avalanche size, as a function of the coupling parameter, is exactly calculable. The system is critical only at a special point J_c in coupling-parameter space. However, the critical region around this point, where approximate scale invariance holds, is very large, suggesting a mechanism for explaining the ubiquity of power laws in nature.
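As a loose illustration only: the paper's exact model is not reproduced here, but a generic random-neighbor extremal update with a coupling parameter J can be sketched as below. Every detail (the reload rule, the coupling form, the avalanche threshold) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def run_extremal(N=500, J=0.5, steps=20000, x_th=0.3):
    """Assumed random-neighbor extremal dynamics: at each step, read the minimal
    site, push a random neighbor down by J times that site's deficit, and reload
    the minimal site to a fresh random value. An avalanche is a run of
    consecutive steps whose global minimum lies below x_th."""
    x = rng.random(N)
    sizes, current = [], 0
    for _ in range(steps):
        i = int(np.argmin(x))
        if x[i] < x_th:
            current += 1                   # avalanche in progress
        elif current > 0:
            sizes.append(current)          # quiescent step closes the avalanche
            current = 0
        j = int(rng.integers(N))           # random-neighbor topology
        x[j] -= J * (1.0 - x[i])           # coupling to one random site
        x[i] = rng.random()                # reload the extremal site
    if current > 0:
        sizes.append(current)              # trailing avalanche, if any
    return sizes

sizes = run_extremal()                     # avalanche durations at coupling J = 0.5
```

Scanning J and plotting the mean of `sizes` would show the divergence near the critical coupling that the abstract computes exactly in the thermodynamic limit.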
