Results 1 - 20 of 20
1.
Entropy (Basel) ; 23(4)2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33921298

ABSTRACT

Active inference is a normative framework for explaining behaviour under the free energy principle--a theory of self-organisation originating in neuroscience. It specifies neuronal dynamics for state-estimation in terms of a descent on (variational) free energy--a measure of the fit between an internal (generative) model and sensory observations. The free energy gradient is a prediction error--plausibly encoded in the average membrane potentials of neuronal populations. Conversely, the expected probability of a state can be expressed in terms of neuronal firing rates. We show that this is consistent with current models of neuronal dynamics and establish face validity by synthesising plausible electrophysiological responses. We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space. We compare the information length of belief updating in both schemes, a measure of the distance travelled in information space that has a direct interpretation in terms of metabolic cost. We show that neural dynamics under active inference are metabolically efficient and suggest that neural representations in biological agents may evolve by approximating steepest descent in information space towards the point of optimal inference.
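The natural gradient descent mentioned above can be illustrated on the simplest possible model. This is a toy sketch, not the paper's neuronal scheme: Euclidean versus Fisher-preconditioned gradient descent on the cross-entropy to a target Bernoulli belief, accumulating the information length sqrt(F)·|Δθ| along each trajectory. The target probability 0.9, starting point, and learning rate are arbitrary choices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def descend(theta, p, natural, steps=100, lr=0.1):
    """Minimise the cross-entropy between Bernoulli(sigmoid(theta)) and a
    target Bernoulli(p); natural=True preconditions the gradient with the
    Fisher information of the logit parameterisation."""
    info_length = 0.0
    for _ in range(steps):
        q = sigmoid(theta)
        grad = q - p                 # d(cross-entropy)/d(theta)
        fisher = q * (1.0 - q)       # Fisher information of the logit parameter
        step = lr * (grad / fisher if natural else grad)
        info_length += math.sqrt(fisher) * abs(step)  # distance moved in information space
        theta -= step
    return theta, info_length

theta_nat, len_nat = descend(-4.0, 0.9, natural=True)
theta_euc, len_euc = descend(-4.0, 0.9, natural=False)
```

At the same learning rate, the natural-gradient run settles at the target belief while the Euclidean run is still climbing, because Fisher preconditioning rescales each step to a uniform size in information space.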

2.
Neuroimage Clin ; 18: 744-752, 2018.
Article in English | MEDLINE | ID: mdl-29876263

ABSTRACT

Introduction: Attention-deficit/hyperactivity disorder (ADHD) is the most common neurodevelopmental disorder in children. Diagnosis is currently based on behavioral criteria, but magnetic resonance imaging (MRI) of the brain is increasingly used in ADHD research. To date, however, MRI studies have provided mixed results in ADHD patients, particularly with respect to the laterality of findings. Methods: We studied 849 children and adolescents (ages 6-21), comprising individuals diagnosed with ADHD (n = 341) and age-matched typically developing (TD) controls, with structural brain MRI. We calculated volumetric measures from 34 cortical and 14 non-cortical brain regions per hemisphere, and detailed shape morphometry of subcortical nuclei. Diffusion tensor imaging (DTI) data were collected for a subset of 104 subjects; from these, we calculated mean diffusivity and fractional anisotropy of white matter tracts. Group comparisons were made for within-hemisphere (right/left) and between-hemisphere asymmetry indices (AI) for each measure. Results: DTI mean diffusivity AI group differences were significant in the cingulum, inferior and superior longitudinal fasciculi, and cortico-spinal tracts (p < 0.001), with stimulant treatment tending to reduce these asymmetry differences. Gray matter volumes were more asymmetric in medication-free ADHD individuals than in TD controls in twelve cortical regions and two non-cortical volumes (p < 0.05). Morphometric analyses revealed that the caudate, hippocampus, thalamus, and amygdala were more asymmetric (p < 0.0001) in ADHD individuals than in TD controls, and that asymmetry differences were more significant than lateralized comparisons. Conclusions: Brain asymmetry measures allow each individual to serve as their own control, diminishing variability between individuals and when pooling data across sites.
Asymmetry group differences were more significant than lateralized comparisons between ADHD and TD subjects across morphometric, volumetric, and DTI comparisons.
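The asymmetry index (AI) used above is not defined in the abstract; the conventional normalised left-right difference is assumed in this small sketch, and the volumes are invented numbers.

```python
def asymmetry_index(left, right):
    """Normalised left-right asymmetry; (L - R) / (L + R) is the conventional
    form, assumed here because the abstract does not spell out its formula."""
    return (left - right) / (left + right)

# e.g. a structure measuring 4.4 mL on the left and 4.0 mL on the right
ai = asymmetry_index(4.4, 4.0)   # positive = leftward asymmetry
```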


Subject(s)
Attention Deficit Disorder with Hyperactivity/diagnostic imaging , Brain/diagnostic imaging , Functional Laterality/physiology , Adolescent , Child , Female , Humans , Magnetic Resonance Imaging , Male , Organ Size/physiology , Young Adult
4.
PLoS Comput Biol ; 12(3): e1004797, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26942606

ABSTRACT

Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution.
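A minimal sketch of annealed importance sampling for a log-evidence (log-normaliser) estimate, assuming a 1-D Gaussian toy problem and plain random-walk Metropolis transitions in place of the paper's Langevin proposals; all densities and settings are invented for illustration.

```python
import math, random

rng = random.Random(0)

def log_prior(x):   # normalised N(0, 2^2) starting distribution
    return -0.5 * (x / 2.0) ** 2 - math.log(2.0 * math.sqrt(2.0 * math.pi))

def log_target(x):  # unnormalised N(1, 1); its true normaliser is sqrt(2*pi)
    return -0.5 * (x - 1.0) ** 2

def ais_log_evidence(n_particles=800, n_temps=80, step=1.0):
    """Annealed importance sampling along a geometric path from prior to
    target, with one Metropolis transition per temperature."""
    betas = [j / (n_temps - 1.0) for j in range(n_temps)]
    log_f = lambda x, b: (1.0 - b) * log_prior(x) + b * log_target(x)
    log_w = []
    for _ in range(n_particles):
        x = rng.gauss(0.0, 2.0)                    # exact draw from the prior
        lw = 0.0
        for b_prev, b in zip(betas, betas[1:]):
            lw += log_f(x, b) - log_f(x, b_prev)   # importance-weight increment
            prop = x + rng.gauss(0.0, step)        # move under the new temperature
            if math.log(rng.random()) < log_f(prop, b) - log_f(x, b):
                x = prop
        log_w.append(lw)
    m = max(log_w)   # log-mean-exp of the weights estimates the log evidence
    return m + math.log(sum(math.exp(l - m) for l in log_w) / n_particles)

log_Z = ais_log_evidence()
```

Here the true log normaliser is 0.5·log(2π) ≈ 0.92, so the estimate can be checked directly; in the DCM setting the same weights yield the Bayes factors the abstract compares against Variational Laplace.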


Subject(s)
Action Potentials/physiology , Brain/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Computer Simulation , Connectome/methods , Models, Statistical , Sample Size
5.
PLoS Biol ; 14(3): e1002400, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26953636

ABSTRACT

Given the amount of knowledge and data accruing in the neurosciences, is it time to formulate a general principle for neuronal dynamics that holds at evolutionary, developmental, and perceptual timescales? In this paper, we propose that the brain (and other self-organised biological systems) can be characterised via the mathematical apparatus of a gauge theory. The picture that emerges from this approach suggests that any biological system (from a neuron to an organism) can be cast as resolving uncertainty about its external milieu, either by changing its internal states or its relationship to the environment. Using formal arguments, we show that a gauge theory for neuronal dynamics--based on approximate Bayesian inference--has the potential to shed new light on phenomena that have thus far eluded a formal description, such as attention and the link between action and perception.


Subject(s)
Brain/physiology , Models, Biological , Neurons/physiology , Bayes Theorem , Feedback, Sensory
6.
J Neurosci Methods ; 257: 7-16, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26384541

ABSTRACT

BACKGROUND: Dynamic causal modeling (DCM) for fMRI is an established method for Bayesian system identification and inference on effective brain connectivity. DCM relies on a biophysical model that links hidden neuronal activity to measurable BOLD signals. Currently, biophysical simulations from DCM constitute a serious computational hindrance. Here, we present Massively Parallel Dynamic Causal Modeling (mpdcm), a toolbox designed to address this bottleneck. NEW METHOD: mpdcm delegates the generation of simulations from DCM's biophysical model to graphical processing units (GPUs). Simulations are generated in parallel by implementing a low-storage explicit Runge-Kutta scheme on a GPU architecture. mpdcm is publicly available under the GPLv3 license. RESULTS: We found that mpdcm efficiently generates large numbers of simulations without compromising their accuracy. As applications of mpdcm, we suggest two computationally expensive sampling algorithms: thermodynamic integration and parallel tempering. COMPARISON WITH EXISTING METHOD(S): mpdcm is up to two orders of magnitude more efficient than the standard implementation in the software package SPM. Parallel tempering increases the mixing properties of the traditional Metropolis-Hastings algorithm at low computational cost, given efficient, parallel simulations of a model. CONCLUSIONS: Future applications of DCM will likely require increasingly large computational resources, for example, when the likelihood landscape of a model is multimodal, or when implementing sampling methods for multi-subject analysis. Owing to the wide availability of GPUs, algorithmic advances can be readily adopted in the absence of access to large computer grids, or when there is a lack of expertise to implement algorithms on such grids.
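The parallelisation pattern is easy to illustrate without a GPU: one explicit Runge-Kutta step function advancing a whole batch of independent model instances in lockstep, which is the structure that maps each instance onto a GPU thread. Classic RK4 and toy decay equations stand in here for the toolbox's low-storage scheme and DCM's biophysical model.

```python
import math

def rk4_batch(f, x0, t0, t1, n_steps):
    """Advance a whole batch of states with classic RK4; a GPU version
    evaluates each batch element in its own thread, but the stepping
    logic is identical."""
    h = (t1 - t0) / n_steps
    x = list(x0)
    for _ in range(n_steps):
        k1 = f(x)
        k2 = f([xi + 0.5 * h * k for xi, k in zip(x, k1)])
        k3 = f([xi + 0.5 * h * k for xi, k in zip(x, k2)])
        k4 = f([xi + h * k for xi, k in zip(x, k3)])
        x = [xi + (h / 6.0) * (a + 2 * b + 2 * c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
    return x

# a batch of independent decay equations dx/dt = -k_i * x, one per "simulation"
rates = [0.5, 1.0, 2.0, 4.0]
f = lambda x: [-k * xi for k, xi in zip(rates, x)]
final = rk4_batch(f, [1.0] * len(rates), 0.0, 1.0, 100)
exact = [math.exp(-k) for k in rates]
```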


Subject(s)
Brain Mapping/methods , Computer Graphics , Magnetic Resonance Imaging/methods , Models, Statistical , Signal Processing, Computer-Assisted , Software , Access to Information , Algorithms , Bayes Theorem , Brain/physiology , Cerebrovascular Circulation/physiology , Computer Simulation , Models, Neurological , Oxygen/blood , Thermodynamics
7.
Neuroimage ; 125: 1142-1154, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26220742

ABSTRACT

Seizure activity in EEG recordings can persist for hours, with seizure dynamics changing rapidly over time and space. To characterise the spatiotemporal evolution of seizure activity, large data sets often need to be analysed. Dynamic causal modelling (DCM) can be used to estimate the synaptic drivers of cortical dynamics during a seizure; however, the requisite (Bayesian) inversion procedure is computationally expensive. In this note, we describe a straightforward procedure, within the DCM framework, that provides efficient inversion of seizure activity measured with non-invasive and invasive physiological recordings; namely, EEG/ECoG. We describe the theoretical background behind a Bayesian belief updating scheme for DCM. The scheme is tested on simulated and empirical seizure activity (recorded both invasively and non-invasively) and compared with standard Bayesian inversion. We show that the Bayesian belief updating scheme provides estimates of time-varying synaptic parameters similar to those of standard schemes, indicating no significant qualitative change in accuracy. The difference in variance explained was small (less than 5%). The updating method was substantially more efficient, taking approximately 5-10 min compared with approximately 1-2 h. Moreover, the setup of the model under the updating scheme allows for a clear specification of how neuronal variables fluctuate over separable timescales. This method now allows us to investigate the effect of fast (neuronal) activity on slow fluctuations in (synaptic) parameters, paving the way to understanding how seizure activity is generated.
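The belief-updating idea (each window's posterior becomes the next window's prior) can be sketched with a conjugate Gaussian model. This is an illustration of the general recursion, not the DCM equations, and all numbers are invented.

```python
import random

rng = random.Random(1)

def update(mu, var, obs, obs_var):
    """One conjugate Gaussian belief update (Kalman-style gain)."""
    k = var / (var + obs_var)
    return mu + k * (obs - mu), (1.0 - k) * var

# a "synaptic parameter" drifting slowly while windows of data arrive
true_param, mu, var = 0.0, 0.0, 1.0
drift_var, obs_var = 0.01, 0.5
for _ in range(200):
    true_param += rng.gauss(0.0, drift_var ** 0.5)  # slow random drift
    var += drift_var          # belief diffuses between windows
    obs = true_param + rng.gauss(0.0, obs_var ** 0.5)
    mu, var = update(mu, var, obs, obs_var)         # posterior becomes next prior
```

Because each window only refines the previous posterior, the cost per window stays constant, which is the source of the speed-up the abstract reports.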


Subject(s)
Brain/physiopathology , Models, Neurological , Seizures/physiopathology , Bayes Theorem , Electroencephalography , Humans
8.
Neuroimage ; 125: 1107-1118, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26213349

ABSTRACT

In this technical note, we derive two MCMC (Markov chain Monte Carlo) samplers for dynamic causal models (DCMs). Specifically, we use (a) Hamiltonian MCMC (HMC-E), where sampling is simulated using Hamilton's equations of motion, and (b) the Langevin Monte Carlo algorithm (LMC-R and LMC-E), which simulates the Langevin diffusion of samples using gradients either on a Euclidean (E) or on a Riemannian (R) manifold. While LMC-R requires minimal tuning, the implementation of HMC-E is heavily dependent on its tuning parameters. These parameters are therefore optimised by learning a Gaussian process model of the time-normalised sample correlation matrix. This allows one to formulate an objective function that balances tuning-parameter exploration and exploitation, furnishing an intervention-free inference scheme. Using neural mass models (NMMs)--a class of biophysically motivated DCMs--we find that HMC-E is statistically more efficient than LMC-R (with a Riemannian metric); yet both gradient-based samplers are far superior to the random walk Metropolis algorithm, which proves inadequate to steer away from dynamical instability.
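A hedged sketch of the Hamiltonian sampler (HMC-E) on a toy target: leapfrog integration of Hamilton's equations followed by a Metropolis accept/reject on the energy error. The standard-normal target, step size, and trajectory length are illustrative choices, not the paper's Gaussian-process-tuned values.

```python
import math, random

rng = random.Random(0)

def hmc_sample(logp, grad, x0, n, eps=0.2, L=10):
    """Hamiltonian Monte Carlo with leapfrog integration on a 1-D target."""
    x, out = x0, []
    for _ in range(n):
        p = rng.gauss(0.0, 1.0)          # fresh momentum each trajectory
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad(x_new)  # leapfrog: half step in momentum...
        for _ in range(L - 1):
            x_new += eps * p_new          # ...full steps in position
            p_new += eps * grad(x_new)    # ...and momentum
        x_new += eps * p_new
        p_new += 0.5 * eps * grad(x_new)  # closing half step
        # accept on the total-energy error of the simulated trajectory
        dH = (logp(x_new) - 0.5 * p_new ** 2) - (logp(x) - 0.5 * p ** 2)
        if math.log(rng.random()) < dH:
            x = x_new
        out.append(x)
    return out

# standard normal target: logp = -x^2/2, grad = -x
samples = hmc_sample(lambda x: -0.5 * x * x, lambda x: -x, 0.0, 2000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```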


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted/methods , Models, Theoretical , Neuroimaging/methods , Bayes Theorem , Humans , Markov Chains , Monte Carlo Method
9.
Neuroimage ; 118: 508-19, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26032883

ABSTRACT

We characterised the pathophysiology of seizure onset in terms of slow fluctuations in synaptic efficacy using EEG in patients with anti-N-methyl-d-aspartate receptor (NMDA-R) encephalitis. EEG recordings were obtained from two female patients with anti-NMDA-R encephalitis with recurrent partial seizures (ages 19 and 31). Focal electrographic seizure activity was localised using an empirical Bayes beamformer. The spectral density of reconstructed source activity was then characterised with dynamic causal modelling (DCM). Eight models were compared for each patient, to evaluate the relative contribution of changes in intrinsic (excitatory and inhibitory) connectivity and endogenous afferent input. Bayesian model comparison established a role for changes in both excitatory and inhibitory connectivity during seizure activity (in addition to changes in the exogenous input). Seizures in both patients were associated with a sequence of changes in inhibitory and excitatory connectivity; a transient increase in inhibitory connectivity followed by a transient increase in excitatory connectivity and a final peak of excitatory-inhibitory balance at seizure offset. These systematic fluctuations in excitatory and inhibitory gain may be characteristic of (anti NMDA-R encephalitis) seizures. We present these results as a case study and replication to motivate analyses of larger patient cohorts, to see whether our findings generalise and further characterise the mechanisms of seizure activity in anti-NMDA-R encephalitis.


Subject(s)
Anti-N-Methyl-D-Aspartate Receptor Encephalitis/complications , Anti-N-Methyl-D-Aspartate Receptor Encephalitis/physiopathology , Models, Neurological , Seizures/etiology , Seizures/physiopathology , Adult , Electroencephalography , Female , Humans , Signal Processing, Computer-Assisted , Young Adult
10.
J R Soc Interface ; 12(105)2015 Apr 06.
Article in English | MEDLINE | ID: mdl-25788538

ABSTRACT

Understanding how organisms establish their form during embryogenesis and regeneration represents a major knowledge gap in biological pattern formation. It has recently been suggested that morphogenesis could be understood in terms of cellular information processing and the ability of cell groups to model shape. Here, we offer a proof of principle that self-assembly is an emergent property of cells that share a common (genetic and epigenetic) model of organismal form. This behaviour is formulated in terms of variational free-energy minimization--of the sort that has been used to explain action and perception in neuroscience. In brief, casting the minimization of thermodynamic free energy in terms of variational free energy allows one to interpret (the dynamics of) a system as inferring the causes of its inputs--and acting to resolve uncertainty about those causes. This novel perspective on the coordination of migration and differentiation of cells suggests an interpretation of genetic codes as parametrizing a generative model--predicting the signals sensed by cells in the target morphology--and epigenetic processes as the subsequent inversion of that model. This theoretical formulation may complement bottom-up strategies--that currently focus on molecular pathways--with (constructivist) top-down approaches that have proved themselves in neuroscience and cybernetics.


Subject(s)
Body Patterning/physiology , Cell Differentiation/physiology , Cell Movement/physiology , Energy Metabolism/physiology , Models, Biological , Morphogenesis/physiology , Regeneration/physiology , Computer Simulation , Species Specificity
11.
Neuroimage ; 112: 375-381, 2015 May 15.
Article in English | MEDLINE | ID: mdl-25776212

ABSTRACT

In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient than the random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density - albeit at an almost 1000% increase in computational time compared with the most efficient algorithm (i.e., the adaptive MCMC sampler).
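The comparison metric (independent samples per unit time) rests on the effective sample size. A sketch, assuming a simple truncated-autocorrelation ESS estimator and a random-walk Metropolis chain on a toy target, shows why tuning dominates it; the step sizes and chain length are arbitrary.

```python
import math, random

rng = random.Random(42)

def metropolis(logp, x0, n, step):
    """Random-walk Metropolis chain of length n with Gaussian proposals."""
    xs, x = [], x0
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < logp(prop) - logp(x):
            x = prop
        xs.append(x)
    return xs

def ess(xs):
    """Effective sample size: n divided by the integrated autocorrelation
    time, summing lags until the autocorrelation falls below a cutoff."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    tau = 1.0
    for lag in range(1, n // 2):
        rho = sum((xs[i] - mu) * (xs[i + lag] - mu) for i in range(n - lag)) / (n * var)
        if rho < 0.05:
            break
        tau += 2.0 * rho
    return n / tau

logp = lambda x: -0.5 * x * x                    # standard normal target
tuned = ess(metropolis(logp, 0.0, 4000, 2.4))    # near-optimal step size
tiny = ess(metropolis(logp, 0.0, 4000, 0.1))     # tiny steps mix badly
```

Dividing each ESS by the wall-clock time of its chain gives the independent-samples-per-second figure on which the note's ranking is based.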


Subject(s)
Image Processing, Computer-Assisted/statistics & numerical data , Markov Chains , Models, Neurological , Monte Carlo Method , Algorithms , Bayes Theorem , Humans , Image Processing, Computer-Assisted/methods , Software , Walking/physiology
13.
PLoS Comput Biol ; 10(1): e1003439, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24465197

ABSTRACT

Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na(+) and K(+) channels, with generator potential and graded potential models lacking voltage-gated Na(+) channels. We identify three causes of information loss in the generator potential that are by-products of action potential generation: (1) the voltage-gated Na(+) channels necessary for action potential generation increase intrinsic noise; (2) they introduce non-linearities; and (3) the finite duration of the action potential creates a 'footprint' in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of their lower information rates, generator potentials are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains, due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital: information loss and cost inflation.


Subject(s)
Action Potentials/physiology , Energy Metabolism/physiology , Models, Neurological , Neurons/physiology , Potassium Channels, Voltage-Gated/metabolism , Voltage-Gated Sodium Channels/metabolism , Algorithms , Cell Size , Electrophysiological Phenomena , Humans , Linear Models , Nerve Net , Stochastic Processes
14.
Front Neural Circuits ; 7: 190, 2013.
Article in English | MEDLINE | ID: mdl-24367295

ABSTRACT

Sensory receptors determine the type and the quantity of information available for perception. Here, we quantified and characterized the information transferred by primary afferents in the rat whisker system using neural system identification. Quantification of "how much" information is conveyed by primary afferents, using the direct method (DM), a classical information theoretic tool, revealed that primary afferents transfer huge amounts of information (up to 529 bits/s). Information theoretic analysis of instantaneous spike-triggered kinematic stimulus features was used to gain functional insight on "what" is coded by primary afferents. Amongst the kinematic variables tested--position, velocity, and acceleration--primary afferent spikes encoded velocity best. The other two variables contributed to information transfer, but only if combined with velocity. We further revealed three additional characteristics that play a role in information transfer by primary afferents. Firstly, primary afferent spikes show preference for well separated multiple stimuli (i.e., well separated sets of combinations of the three instantaneous kinematic variables). Secondly, neurons are sensitive to short strips of the stimulus trajectory (up to 10 ms pre-spike time), and thirdly, they show spike patterns (precise doublet and triplet spiking). In order to deal with these complexities, we used a flexible probabilistic neuron model fitting mixtures of Gaussians to the spike triggered stimulus distributions, which quantitatively captured the contribution of the mentioned features and allowed us to achieve a full functional analysis of the total information rate indicated by the DM. We found that instantaneous position, velocity, and acceleration explained about 50% of the total information rate. Adding a 10 ms pre-spike interval of stimulus trajectory achieved 80-90%. The final 10-20% were found to be due to non-linear coding by spike bursts.


Subject(s)
Action Potentials/physiology , Sensory Receptor Cells/physiology , Vibrissae/physiology , Afferent Pathways/physiology , Animals , Female , Rats , Rats, Sprague-Dawley
15.
PLoS Comput Biol ; 9(10): e1003263, 2013.
Article in English | MEDLINE | ID: mdl-24098105

ABSTRACT

A balance between excitatory and inhibitory synaptic currents is thought to be important for several aspects of information processing in cortical neurons in vivo, including gain control, bandwidth and receptive field structure. These factors will affect the firing rate of cortical neurons and their reliability, with consequences for their information coding and energy consumption. Yet how balanced synaptic currents contribute to the coding efficiency and energy efficiency of cortical neurons remains unclear. We used single compartment computational models with stochastic voltage-gated ion channels to determine whether synaptic regimes that produce balanced excitatory and inhibitory currents have specific advantages over other input regimes. Specifically, we compared models with only excitatory synaptic inputs to those with equal excitatory and inhibitory conductances, and stronger inhibitory than excitatory conductances (i.e. approximately balanced synaptic currents). Using these models, we show that balanced synaptic currents evoke fewer spikes per second than excitatory inputs alone or equal excitatory and inhibitory conductances. However, spikes evoked by balanced synaptic inputs are more informative (bits/spike), so that spike trains evoked by all three regimes have similar information rates (bits/s). Consequently, because spikes dominate the energy consumption of our computational models, approximately balanced synaptic currents are also more energy efficient than other synaptic regimes. Thus, by producing fewer, more informative spikes approximately balanced synaptic currents in cortical neurons can promote both coding efficiency and energy efficiency.
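A minimal sketch of the comparison, assuming a leaky integrate-and-fire caricature instead of the paper's stochastic-channel models (all constants invented): the same mean excitation is delivered with and without co-varying inhibition, and the firing rates are compared.

```python
import random

rng = random.Random(7)

def firing_rate(g_exc, g_inh, n_steps=2000, dt=0.001, tau=0.02):
    """Leaky integrate-and-fire neuron driven by noisy excitatory and
    inhibitory conductances; returns spikes per second of simulated time."""
    v, spikes = -65.0, 0
    for _ in range(n_steps):
        ge = max(0.0, rng.gauss(g_exc, 0.3 * g_exc))
        gi = max(0.0, rng.gauss(g_inh, 0.3 * g_inh)) if g_inh > 0.0 else 0.0
        # leak toward -65 mV, excitation toward 0 mV, inhibition toward -80 mV
        v += (-(v + 65.0) + ge * (0.0 - v) + gi * (-80.0 - v)) * dt / tau
        if v > -50.0:          # threshold crossing: spike and reset
            spikes += 1
            v = -65.0
    return spikes / (n_steps * dt)

rate_exc_only = firing_rate(0.5, 0.0)   # excitation alone
rate_balanced = firing_rate(0.5, 0.5)   # same excitation, balanced inhibition
```

With the same excitatory drive, balanced inhibition holds the mean voltage just below threshold, so spikes are fewer and fluctuation-driven, which is the regime the abstract argues yields more bits per spike.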


Subject(s)
Models, Neurological , Neurons/physiology , Synapses/metabolism , Synapses/physiology , Synaptic Transmission/physiology , Action Potentials/physiology , Cerebral Cortex/cytology , Cerebral Cortex/physiology , Computer Simulation , Humans
16.
PLoS Comput Biol ; 9(7): e1003157, 2013.
Article in English | MEDLINE | ID: mdl-23935475

ABSTRACT

In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components--like genetic circuits, biochemical cascades, and ion channels, among others--enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode--with roughly 20-60% of the brain's total energy budget used for signalling purposes, either via action potentials or by synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link information theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity minimisation lemma.


Subject(s)
Nervous System Physiological Phenomena , Action Potentials , Humans , Signal Transduction , Thermodynamics
17.
Article in English | MEDLINE | ID: mdl-23979192

ABSTRACT

The pore of sodium channels contains a selectivity filter made of four amino acids, D/E/K/A. In voltage-gated sodium (Nav) channels from jellyfish to humans, the fourth amino acid is Ala. This Ala, when mutated to Asp, promotes slow inactivation. In some Nav channels of pufferfishes, the Ala is replaced with Gly. We studied the biophysical properties of an Ala-to-Gly substitution (A1529G) in the rat Nav1.4 channel expressed in Xenopus oocytes alone or with a β1 subunit. The Ala-to-Gly substitution does not affect monovalent cation selectivity and positively shifts the voltage-dependent inactivation curve, although co-expression with a β1 subunit eliminates the difference between A1529G and WT. There is almost no difference in channel fast inactivation, but the β1 subunit accelerates WT current inactivation significantly more than it does the A1529G channels. The Ala-to-Gly substitution mainly influences the rate of recovery from slow inactivation. Again, the β1 subunit is less effective at speeding recovery of A1529G than of WT. We searched Nav channels in numerous databases and noted at least four other independent Ala-to-Gly substitutions in Nav channels in teleost fishes. Thus, the Ala-to-Gly substitution occurs more frequently than previously realized, possibly under selection for alterations of channel gating.


Subject(s)
Ion Channel Gating , Muscle Proteins/metabolism , Sodium Channels/metabolism , Sodium/metabolism , Amino Acid Substitution , Animals , Computational Biology , Computer Simulation , Databases, Genetic , Fish Proteins/genetics , Fish Proteins/metabolism , Genotype , Insect Proteins/genetics , Insect Proteins/metabolism , Kinetics , Membrane Potentials , Models, Biological , Muscle Proteins/genetics , Mutagenesis, Site-Directed , Mutation , Phenotype , Rats , Sodium Channels/genetics , Xenopus
18.
J Cereb Blood Flow Metab ; 33(9): 1465-73, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23778164

ABSTRACT

Identifying the determinants of neuronal energy consumption and their relationship to information coding is critical to understanding neuronal function and evolution. Three of the main determinants are cell size, ion channel density, and stimulus statistics. Here we investigate their impact on neuronal energy consumption and information coding by comparing single-compartment spiking neuron models of different sizes with different densities of stochastic voltage-gated Na(+) and K(+) channels and different statistics of synaptic inputs. The largest compartments have the highest information rates but the lowest energy efficiency for a given voltage-gated ion channel density, and the highest signaling efficiency (bits spike(-1)) for a given firing rate. For a given cell size, our models revealed that the ion channel density that maximizes energy efficiency is lower than that maximizing information rate. Low rates of small synaptic inputs improve energy efficiency but the highest information rates occur with higher rates and larger inputs. These relationships produce a Law of Diminishing Returns that penalizes costly excess information coding capacity, promoting the reduction of cell size, channel density, and input stimuli to the minimum possible, suggesting that the trade-off between energy and information has influenced all aspects of neuronal anatomy and physiology.


Subject(s)
Cell Size , Energy Metabolism/physiology , Models, Neurological , Neurons , Potassium Channels, Voltage-Gated/metabolism , Voltage-Gated Sodium Channels/metabolism , Animals , Humans , Neurons/cytology , Neurons/metabolism
19.
PLoS Comput Biol ; 6: e1000840, 2010 Jul 01.
Article in English | MEDLINE | ID: mdl-20617202

ABSTRACT

The initiation and propagation of action potentials (APs) places high demands on the energetic resources of neural tissue. Each AP forces ATP-driven ion pumps to work harder to restore the ionic concentration gradients, thus consuming more energy. Here, we ask whether the ionic currents underlying the AP can be predicted theoretically from the principle of minimum energy consumption. A long-held supposition that APs are energetically wasteful, based on theoretical analysis of the squid giant axon AP, has recently been overturned by studies that measured the currents contributing to the AP in several mammalian neurons. In the single-compartment models studied here, AP energy consumption varies greatly among vertebrate and invertebrate neurons, with several mammalian neuron models using close to the capacitive minimum of energy needed. Strikingly, energy consumption can increase by more than ten-fold simply by changing the overlap of the Na(+) and K(+) currents during the AP without changing the AP's shape. As a consequence, the height and width of the AP are poor predictors of energy consumption. In the Hodgkin-Huxley model of the squid axon, optimizing the kinetics or number of Na(+) and K(+) channels can whittle down the number of ATP molecules needed for each AP by a factor of four. In contrast to the squid AP, the temporal profiles of the currents underlying the APs of some mammalian neurons are nearly perfectly matched to the optimized properties of ionic conductances, so as to minimize the ATP cost.
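The capacitive minimum mentioned above is simple arithmetic. The sketch below uses canonical round numbers; the capacitance, membrane area, AP height, and pump stoichiometry are assumptions for illustration, not values from the paper's models.

```python
# Back-of-envelope estimate of the minimum ATP cost of one action potential.
C_SPECIFIC = 1e-6      # F/cm^2, typical specific membrane capacitance
AREA = 1e-5            # cm^2, i.e. ~1000 um^2 of membrane
DELTA_V = 0.1          # V, approximate height of an action potential
E_CHARGE = 1.602e-19   # C, elementary charge
NA_PER_ATP = 3         # the Na+/K+ pump extrudes 3 Na+ per ATP hydrolysed

charge = C_SPECIFIC * AREA * DELTA_V   # minimum Na+ influx needed to charge C
na_ions = charge / E_CHARGE
atp_min = na_ions / NA_PER_ATP         # the capacitive minimum, in ATP per AP
atp_with_overlap = 4.0 * atp_min       # e.g. a four-fold cost if Na+/K+ currents overlap
```

Any overlap of inward Na(+) and outward K(+) currents moves extra charge without changing the voltage excursion, which is why the real cost can sit well above this floor.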


Subject(s)
Action Potentials/physiology , Energy Metabolism/physiology , Models, Neurological , Neurons/physiology , Animals , Brachyura , Electric Conductivity , Loligo , Mice , Potassium Channels , Rats , Sodium Channels , Temperature
20.
Conf Proc IEEE Eng Med Biol Soc ; 2005: 3636-9, 2005.
Article in English | MEDLINE | ID: mdl-17281014

ABSTRACT

This work presents a minimal, time-continuous model of target-cell-specific, use-dependent short-term synaptic plasticity (STP) observed in pyramidal cells that can account for both short-term depression and facilitation. It provides a concise and portable description that is useful for predicting synaptic responses to more complex patterns of stimulation, for studies of circuit dynamics, and for equating dynamic properties across different synaptic pathways between or within preparations. The model computes postsynaptic responses shaped by either facilitation or depression at the synapse, exhibiting the characteristics of dynamic synapses found during short-term synaptic plasticity, for any arbitrary presynaptic spike train in the presence of realistic background synaptic noise. It thus allows us to examine the specific effect of a spike train on a neuronal lattice at both small and large scales, revealing short-term plastic behavior in neurons.
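The abstract gives no equations; a standard phenomenological model of this kind is the Tsodyks-Markram scheme, sketched below in one common event-based variant with invented parameters (noise omitted for brevity).

```python
import math

def tm_efficacies(spike_times, U=0.2, tau_rec=0.5, tau_facil=0.3):
    """Event-based Tsodyks-Markram dynamics: R tracks recovered resources
    (depression), u the release probability (facilitation). One common
    variant; parameters are illustrative, not taken from the paper."""
    R, u, prev, out = 1.0, U, None, []
    for t in spike_times:
        if prev is not None:
            dt = t - prev
            R = 1.0 - (1.0 - R) * math.exp(-dt / tau_rec)   # resources recover toward 1
            u = U + (u - U) * math.exp(-dt / tau_facil)     # facilitation decays toward U
        out.append(u * R)        # amplitude of this postsynaptic response
        R -= u * R               # the spike consumes a fraction u of resources
        u += U * (1.0 - u)       # and facilitates subsequent release
        prev = t
    return out

responses = tm_efficacies([0.02 * k for k in range(10)])  # a 50 Hz presynaptic train
```

At a sustained 50 Hz input the efficacy first facilitates (the second response exceeds the first) and then depresses as resources deplete, the two behaviours such a model is meant to capture in one description.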
