1.
Cell Syst ; 14(3): 177-179, 2023 03 15.
Article in English | MEDLINE | ID: mdl-36924765

ABSTRACT

Modeling systems at multiple interacting scales is probably the most relevant task for pursuing a physically motivated explanation of biological regulation. In a new study, Smart and Zilman develop a convincing, albeit preliminary, model of the interplay between the cell microscale and the macroscopic tissue organization in biological systems.


Subject(s)
Models, Biological
3.
PLoS Comput Biol ; 18(8): e1009393, 2022 08.
Article in English | MEDLINE | ID: mdl-35930590

ABSTRACT

We postulate that three fundamental elements underlie a decision-making process: perception of time passing, information processing on multiple timescales, and reward maximisation. We build a simple reinforcement learning agent upon these principles and train it on a random-dots-like task. Our results, consistent with the experimental data, demonstrate three emerging signatures. (1) Signal neutrality: insensitivity to the signal coherence in the interval preceding the decision. (2) Scalar property: the mean of the response times varies widely across signal coherences, yet the shape of the distributions stays almost unchanged. (3) Collapsing boundaries: the "effective" decision-making boundary changes over time in a manner reminiscent of the theoretical optimum. Removing either the perception of time or the multiple timescales from the model destroys these distinguishing signatures. Our results suggest an alternative explanation for signal neutrality: we propose that it is not part of motor planning but part of the decision-making process itself, emerging from information processing on multiple timescales.
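The multiple-timescales ingredient can be sketched as a bank of leaky integrators with different time constants, each summarizing the recent input history over a different horizon. This is an illustrative reconstruction only, not the paper's actual agent architecture; the time constants and update rule below are assumptions.

```python
import numpy as np

def multi_timescale_traces(signal, taus):
    """Pass a single input stream through leaky integrators with different
    time constants, giving a representation of the recent past at multiple
    timescales. Illustrative sketch; the time constants are arbitrary."""
    taus = np.asarray(taus, dtype=float)
    traces = np.zeros(len(taus))
    history = []
    for s in signal:
        traces += (s - traces) / taus    # first-order low-pass update
        history.append(traces.copy())
    return np.array(history)

# Step input: the fast trace saturates quickly, the slow one lags behind
history = multi_timescale_traces(np.ones(100), taus=[2.0, 10.0, 50.0])
print(history[-1])
```

After 100 steps of a unit step, the fast trace has essentially converged to 1.0 while the slowest is still rising, so the three traces jointly encode how long the signal has been on.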


Subject(s)
Decision Making , Learning , Decision Making/physiology , Reaction Time/physiology , Reinforcement, Psychology , Reward
4.
Entropy (Basel) ; 25(1)2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36673162

ABSTRACT

All-cause mortality is a very coarse-grained, albeit very reliable, index for checking the health implications of lifestyle determinants, systemic threats, and socio-demographic factors. In this work, we adopt a statistical-mechanics approach to the analysis of temporal fluctuations of all-cause mortality, focusing on the correlation structure of this index across the different regions of Italy. The correlation network among the 20 Italian regions was reconstructed using temperature oscillations and traveller flux (as a function of distance and of regional attractiveness, based on GDP), allowing for a separation between infective and non-infective causes of death. The proposed approach allows emerging systemic threats to be monitored in terms of anomalies in the correlation network structure.
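The core step of reconstructing a correlation network from regional time series can be sketched as a thresholded Pearson correlation matrix. The synthetic data and the threshold value below are illustrative assumptions, not the paper's actual data or procedure.

```python
import numpy as np

def correlation_network(series, threshold=0.5):
    """Adjacency matrix from pairwise Pearson correlations of regional
    time series (rows = regions, columns = time points). The threshold
    value is an illustrative assumption."""
    corr = np.corrcoef(series)
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)             # no self-links
    return corr, adj

# Synthetic example: two "regions" share a seasonal driver, two do not
rng = np.random.default_rng(0)
t = np.arange(200)
common = np.sin(2 * np.pi * t / 52)
series = np.vstack([common + 0.1 * rng.standard_normal(200),
                    common + 0.1 * rng.standard_normal(200),
                    rng.standard_normal(200),
                    rng.standard_normal(200)])
corr, adj = correlation_network(series)
print(adj[0, 1], adj[2, 3])   # -> 1 0
```

Anomaly monitoring would then amount to tracking how this adjacency structure changes over successive time windows.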

5.
Sci Rep ; 8(1): 17056, 2018 11 19.
Article in English | MEDLINE | ID: mdl-30451957

ABSTRACT

Inference methods are widely used to recover effective models from observed data. However, few studies have attempted to investigate the dynamics of inferred models in neuroscience, and none, to our knowledge, at the network level. We introduce a principled modification of a widely used generalized linear model (GLM), and learn its structural and dynamic parameters from in-vitro spike data. The spontaneous activity of the new model captures prominent features of the non-stationary and non-linear dynamics displayed by the biological network, where the reference GLM largely fails, and it also reflects fine-grained spatio-temporal dynamical features. Two ingredients were key to this success. The first is a saturating transfer function: beyond its biological plausibility, it limits the neuron's information transfer, improving robustness against endogenous and external noise. The second is a super-Poisson spike-generation mechanism; it accounts for the undersampling of the network, and allows the model neuron to flexibly incorporate the observed activity fluctuations.
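The two key ingredients can be sketched as follows. The logistic saturation, the negative-binomial count model, and all parameter values are assumptions chosen for illustration; the paper's actual functional forms may differ.

```python
import numpy as np

def saturating_rate(drive, r_max=50.0, gain=1.0):
    """Saturating transfer function: the firing rate grows with the input
    drive but is bounded by r_max. The logistic form and parameters are
    illustrative assumptions."""
    return r_max / (1.0 + np.exp(-gain * drive))

def super_poisson_count(rate, dt, dispersion=2.0, rng=None):
    """Spike count in a bin of width dt whose variance exceeds its mean by
    the factor `dispersion` (> 1), via a negative binomial: one way to
    realize a super-Poisson generative mechanism (illustrative stand-in)."""
    rng = rng or np.random.default_rng()
    mean = rate * dt
    p = 1.0 / dispersion                 # var = mean / p = dispersion * mean
    n = mean * p / (1.0 - p)
    return rng.negative_binomial(n, p)
```

A Poisson count would have variance equal to its mean; here the variance is `dispersion` times larger, letting the model neuron absorb the activity fluctuations of the unobserved part of the network.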


Subject(s)
Action Potentials , Neural Networks, Computer , Computer Simulation , Poisson Distribution
6.
PLoS One ; 12(4): e0174918, 2017.
Article in English | MEDLINE | ID: mdl-28369106

ABSTRACT

Two partially interwoven hot topics in the analysis and statistical modeling of neural data are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve inference of synaptic couplings from neural activities, and that it can allow the construction of a compact representation of the multi-dimensional dynamics that easily lends itself to complexity measures. We apply a variant of the 'mean-shift' algorithm to perform state-space clustering, and validate it on a Hopfield network in the glassy phase, in which metastable states are largely uncorrelated with the memories embedded in the synaptic matrix. In this context, we show that the neural states identified as cluster centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings such that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network's state-space landscape. Finally, we show that knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of this reduction, we define and analyze a measure of complexity of the neural time series.
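The mean-shift idea can be sketched with a minimal flat-kernel variant: each point climbs toward the local mean of its neighbourhood, and converged positions identify cluster centroids. This is a generic illustration with synthetic 2-D data, not the paper's exact variant or state-space dimensionality.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=50):
    """Minimal flat-kernel mean shift: each point repeatedly moves to the
    mean of the data points within `bandwidth`; converged positions that
    coincide identify cluster centroids. Illustrative sketch only."""
    shifted = points.astype(float)
    for _ in range(n_iter):
        for i in range(len(shifted)):
            mask = np.linalg.norm(points - shifted[i], axis=1) <= bandwidth
            shifted[i] = points[mask].mean(axis=0)
    centroids = []
    for p in shifted:
        if not any(np.linalg.norm(p - c) < bandwidth / 2 for c in centroids):
            centroids.append(p)
    return np.array(centroids)

# Two well-separated clouds standing in for metastable network states
rng = np.random.default_rng(0)
states = np.vstack([rng.normal(0.0, 0.1, size=(30, 2)),
                    rng.normal(5.0, 0.1, size=(30, 2))])
centroids = mean_shift(states, bandwidth=1.0)
print(len(centroids))   # -> 2
```

In the paper's setting the points would be high-dimensional network states and the recovered centroids the metastable states used to parametrize the synaptic matrix.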


Subject(s)
Brain Waves/physiology , Learning/physiology , Memory/physiology , Models, Neurological , Nerve Net/physiology , Synapses/physiology , Action Potentials/physiology , Algorithms , Animals , Brain Mapping/methods , Humans , Neurons/metabolism
7.
PLoS Comput Biol ; 11(11): e1004547, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26558616

ABSTRACT

Cortical networks, in-vitro as well as in-vivo, can spontaneously generate a variety of collective dynamical events such as network spikes, UP and DOWN states, global oscillations, and avalanches. Though each of these has been variously recognized in previous works as an expression of the excitability of the cortical tissue and the associated nonlinear dynamics, a unified picture of the determinant factors (dynamical and architectural) is desirable but not yet available. Progress has also been partially hindered by the use of a variety of statistical measures to define the network events of interest. We propose here a common probabilistic definition of network events that, applied to the firing activity of cultured neural networks, highlights the co-occurrence of network spikes, power-law distributed avalanches, and exponentially distributed 'quasi-orbits', which constitute a third type of collective behavior. A rate model, including synaptic excitation and inhibition with no imposed topology, synaptic short-term depression, and finite-size noise, accounts for all of these different, coexisting phenomena. We find that their emergence is largely regulated by the proximity to an oscillatory instability of the dynamics, where the nonlinear excitable behavior leads to a self-amplification of activity fluctuations over a wide range of scales in space and time. In this sense, the cultured network dynamics is compatible with an excitation-inhibition balance corresponding to a slightly sub-critical regime. Finally, we propose and test a method to infer the characteristic time of the fatigue process from the observed time course of the network's firing rate. Unlike the model, which possesses a single fatigue mechanism, the cultured network appears to show multiple time scales, signalling the possible coexistence of different fatigue mechanisms.
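The general shape of such a rate model can be sketched as a single excitatory population with short-term synaptic depression and additive noise standing in for finite-size fluctuations. This is a deliberately minimal caricature of the model class described above (the actual model also includes inhibition), and every parameter value here is an illustrative assumption.

```python
import numpy as np

def simulate_rate_model(n_steps=20000, dt=0.001, rng=None):
    """Euler integration of a toy one-population rate model with synaptic
    short-term depression and additive noise standing in for finite-size
    fluctuations. Minimal caricature; all parameters are illustrative."""
    rng = rng or np.random.default_rng(0)
    tau_r, tau_x, u = 0.02, 0.5, 0.5     # rate/resource time constants, usage
    w, sigma = 2.0, 0.5                  # recurrent coupling, noise amplitude
    r, x = 1.0, 1.0                      # firing rate, available resource
    rates = np.empty(n_steps)
    for step in range(n_steps):
        gain = np.logaddexp(0.0, w * x * r)              # softplus transfer
        r += dt / tau_r * (-r + gain) + sigma * np.sqrt(dt) * rng.standard_normal()
        r = max(r, 0.0)                                  # rate is nonnegative
        x += dt * ((1.0 - x) / tau_x - u * x * r)        # depression dynamics
        rates[step] = r
    return rates

rates = simulate_rate_model()
```

The interplay between the fast rate variable and the slow resource variable is what, in suitably tuned versions of such models, produces the excitable collective events the abstract describes.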


Subject(s)
Cerebral Cortex/cytology , Cerebral Cortex/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Action Potentials/physiology , Animals , Animals, Newborn , Cell Culture Techniques/instrumentation , Cell Culture Techniques/methods , Cells, Cultured , Computational Biology , Computer Simulation , Electrodes , Rats
8.
PLoS One ; 10(3): e0118412, 2015.
Article in English | MEDLINE | ID: mdl-25807389

ABSTRACT

Biological networks display a variety of activity patterns reflecting a web of interactions that is complex both in space and time. Yet inference methods have mainly focused on reconstructing the spatial structure from the network's activity, by assuming equilibrium conditions or, more recently, a probabilistic dynamics with a single arbitrary time-step. Here we show that, under this latter assumption, the inference procedure fails to reconstruct the synaptic matrix of a network of integrate-and-fire neurons when the chosen time scale of interaction does not closely match the synaptic delay, or when no single time scale for the interaction can be identified; such failure, moreover, exposes a distinctive bias of the inference method, which can lead to excitatory synapses with interaction time scales longer than the model's time-step being inferred as inhibitory. We therefore introduce a new two-step method that first infers the delay structure of the network through cross-correlation profiles and then reconstructs the synaptic matrix, and we successfully test it on networks with different topologies and in different activity regimes. Although step one accurately recovers the delay structure of the network, thereby dispensing with any a priori guess about the time scales of the interaction, the inference method nonetheless introduces an arbitrary time scale: the time-bin dt used to binarize the spike trains. We therefore study, analytically and numerically, how the choice of dt affects the inference in our network model, finding that the relationship between the inferred couplings and the real synaptic efficacies, albeit quadratic in both cases, depends critically on dt for the excitatory synapses only, while being essentially independent of it for the inhibitory ones.
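The delay-recovery step can be sketched as finding the lag at which the cross-correlation of two binarized spike trains peaks. The data below are synthetic and the implementation is a generic illustration of the idea, not the paper's exact estimator.

```python
import numpy as np

def corr_at(x, y, lag):
    """Raw cross-correlation of two binarized spike trains at a given lag
    (positive lag: y follows x)."""
    if lag < 0:
        return corr_at(y, x, -lag)
    return float(np.dot(x[:len(x) - lag], y[lag:]))

def peak_lag(x, y, max_lag=20):
    """Lag (in bins) at which the cross-correlation peaks: a sketch of
    recovering the delay structure from the data before any couplings
    are inferred."""
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [corr_at(x, y, int(l)) for l in lags]
    return int(lags[int(np.argmax(scores))])

# Synthetic pair: neuron B echoes neuron A with a 3-bin synaptic delay
rng = np.random.default_rng(0)
a = (rng.random(5000) < 0.05).astype(int)
b = np.roll(a, 3)
b[rng.random(5000) < 0.02] ^= 1     # corrupt B with noise
print(peak_lag(a, b))               # -> 3
```

With the delay recovered per pair, the subsequent coupling inference can use the matched interaction time scale instead of a single arbitrary one.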


Subject(s)
Computer Simulation , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Synapses/physiology , Action Potentials/physiology , Neural Inhibition/physiology
9.
J Neurophysiol ; 108(11): 3124-37, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22972954

ABSTRACT

We model the putative neuronal and synaptic mechanisms involved in learning a visual categorization task, taking inspiration from single-cell recordings in inferior temporal cortex (ITC). Our working hypothesis is that learning the categorization task involves both bottom-up (ITC to prefrontal cortex, PFC) and top-down (PFC to ITC) synaptic plasticity, and that the latter enhances the selectivity of the ITC neurons encoding the task-relevant features of the stimuli, thereby improving the signal-to-noise ratio. We test this hypothesis by modeling both areas and their connections with spiking neurons and plastic synapses, with ITC acting as a feature-selective layer and PFC as a category-coding layer. This minimal model gives interesting clues as to the properties and function of the selective feedback signal from PFC to ITC that helps solve the categorization task. In particular, we show that, when the stimuli are very noisy because of a large number of non-relevant features, the feedback structure improves categorization performance and decreases reaction times. It also affects the speed and stability of the learning process and sharpens the tuning curves of ITC neurons. Furthermore, the model predicts a modulation of neural activities during error trials, by which the differential selectivity of ITC neurons to task-relevant and task-irrelevant features diminishes or is even reversed, as well as modulations in the time course of neural activities that appear when, after learning, corrupted versions of the stimuli are presented to the network.


Subject(s)
Learning/physiology , Models, Neurological , Visual Perception/physiology , Feedback , Frontal Lobe/physiology , Humans , Nerve Net/physiology , Neuronal Plasticity , Neurons , Signal-To-Noise Ratio , Synapses , Temporal Lobe/physiology
10.
PLoS Comput Biol ; 5(7): e1000430, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19593372

ABSTRACT

We propose a novel explanation for bistable perception, namely, the collective dynamics of multiple neural populations that are individually meta-stable. Distributed representations of sensory input and of perceptual state build gradually through noise-driven transitions in these populations, until the competition between alternative representations is resolved by a threshold mechanism. The perpetual repetition of this collective race to threshold renders perception bistable. This collective dynamics - which is largely uncoupled from the time-scales that govern individual populations or neurons - explains many hitherto puzzling observations about bistable perception: the wide range of mean alternation rates exhibited by bistable phenomena, the consistent variability of successive dominance periods, and the stabilizing effect of past perceptual states. It also predicts a number of previously unsuspected relationships between observable quantities characterizing bistable perception. We conclude that bistable perception reflects the collective nature of neural decision making rather than properties of individual populations or neurons.


Subject(s)
Models, Neurological , Perception/physiology , Photic Stimulation , Vision, Binocular/physiology , Algorithms , Stochastic Processes , Time Factors
11.
PLoS One ; 3(7): e2534, 2008 Jul 02.
Article in English | MEDLINE | ID: mdl-18596965

ABSTRACT

The spike activity of cells in some cortical areas has been found to be correlated with reaction times and behavioral responses during two-choice decision tasks. These experimental findings have motivated the study of biologically plausible winner-take-all network models, in which strong recurrent excitation and feedback inhibition allow the network to form a categorical choice upon stimulation. In these models, choice formation corresponds to the transition from the spontaneous state of the network to a state where neurons selective for one of the choices fire at a high rate and inhibit the activity of the other neurons. This transition has traditionally been induced by an increase in the external input that destabilizes the spontaneous state of the network and forces its relaxation to a decision state. Here we explore a different mechanism by which the system can undergo such transitions while the spontaneous state remains stable, based on an escape from the spontaneous state induced by finite-size noise. This decision mechanism naturally arises for low stimulus strengths and leads to exponentially distributed decision times when the amount of noise in the system is small. Furthermore, we show using numerical simulations that, in this regime, mean decision times depend exponentially on the amplitude of the noise. The escape mechanism thus provides a dynamical basis for the wide range and variability of decision times observed experimentally.
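The noise-driven escape mechanism can be sketched with a toy first-passage problem: an overdamped particle in a quadratic well "decides" when noise first pushes it over a threshold, and smaller noise makes escapes rarer and slower. The potential, threshold, and all parameter values are illustrative assumptions, not the network model of the paper.

```python
import numpy as np

def escape_times(noise, barrier=1.0, n_trials=200, dt=0.01, rng=None):
    """First-passage times of an overdamped particle in the well V(x)=x^2/2,
    'deciding' when x first crosses `barrier`. A toy stand-in for the
    finite-size-noise escape from the spontaneous state; parameters are
    illustrative."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(n_trials)
    t = np.zeros(n_trials)
    alive = np.ones(n_trials, dtype=bool)
    while alive.any():
        n = int(alive.sum())
        x[alive] += -x[alive] * dt + noise * np.sqrt(dt) * rng.standard_normal(n)
        t[alive] += dt
        alive &= x < barrier
    return t

# Smaller noise -> rarer escapes -> longer, near-exponential decision times
t_small = escape_times(noise=0.6)
t_large = escape_times(noise=0.9)
print(t_small.mean() > t_large.mean())   # -> True
```

The strong growth of the mean first-passage time as the noise amplitude shrinks mirrors the exponential dependence of mean decision times on noise reported in the abstract.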


Subject(s)
Nerve Net , Neurons/physiology , Animals , Humans , Models, Neurological , Neural Pathways/physiology
12.
Phys Rev Lett ; 98(14): 148101, 2007 Apr 06.
Article in English | MEDLINE | ID: mdl-17501315

ABSTRACT

We study the dynamics of a noisy network of spiking neurons with spike-frequency adaptation (SFA), using a mean-field approach, in terms of a two-dimensional Fokker-Planck equation for the membrane potential of the neurons and the calcium concentration gating SFA. The long time scales of SFA allow us to use an adiabatic approximation and to describe the network as an effective nonlinear two-dimensional system. The phase diagram is computed for varying levels of SFA and synaptic coupling. Two different population-bursting regimes emerge, depending on the level of SFA in networks with noisy emission rate, due to the finite number of neurons.


Subject(s)
Action Potentials/physiology , Adaptation, Physiological , Neural Networks, Computer , Neurons/physiology
13.
Math Biosci ; 207(2): 336-51, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17367823

ABSTRACT

The dynamics of a population of integrate-and-fire (IF) neurons with spike-frequency adaptation (SFA) is studied. Using a population density approach and assuming a slow dynamics for the variable driving SFA, an equation for the emission rate of a finite set of uncoupled neurons is derived. The system dynamics is then analyzed in the neighborhood of its stable fixed points by linearizing the emission rate equation. The information transfer properties are then probed by perturbing the system with a sinusoidal input current: despite the low-pass properties of the dynamical variable associated with SFA, the adapting IF neuron behaves as a band-pass device, and a phase-lock condition appears at a frequency related to the characteristic time constants of both the neuronal and the SFA dynamics. When a finite set of neurons is considered, the power spectral density of the pooled firing rates shows a rich pattern of resonances at intermediate frequencies. Theoretical predictions are successfully compared to numerical simulations.


Subject(s)
Action Potentials/physiology , Models, Neurological , Neurons/physiology , Algorithms , Animals , Calcium/metabolism , Computer Simulation , Electrophysiology , Fourier Analysis , Humans , Membrane Potentials/physiology , Neocortex/physiology , Stochastic Processes , Synapses/physiology