Results 1 - 20 of 52,026
1.
Nat Commun ; 15(1): 4693, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38824154

ABSTRACT

Training large neural networks on big datasets requires significant computational resources and time. Transfer learning reduces training time by pre-training a base model on one dataset and transferring the knowledge to a new model for another dataset. However, current choices of transfer learning algorithms are limited because the transferred models always have to adhere to the dimensions of the base model and cannot easily modify the neural architecture to solve other datasets. On the other hand, biological neural networks (BNNs) are adept at rearranging themselves to tackle completely different problems using transfer learning. Taking advantage of BNNs, we design a dynamic neural network that is transferable to any other network architecture and can accommodate many datasets. Our approach uses raytracing to connect neurons in a three-dimensional space, allowing the network to grow into any shape or size. On the Alcala dataset, our transfer learning algorithm trains the fastest across changing environments and input sizes. In addition, we show that our algorithm also outperforms the state of the art on an EEG dataset. In the future, this network may be considered for implementation on real biological neural networks to decrease power consumption.
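The abstract does not specify the ray-tracing rule, so the following Python sketch is only one hedged reading: neurons at random 3D coordinates are connected whenever the straight segment between them is not blocked by another soma. The positions, `soma_radius`, and the line-of-sight criterion are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical reading only: connect neurons placed in 3D whenever the straight
# segment ("ray") between them passes no other soma. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
n, soma_radius = 50, 0.05                      # assumed counts and radius
pos = rng.uniform(0.0, 1.0, size=(n, 3))       # neuron positions in a unit cube

def segment_clear(a, b, others, radius):
    """True if the segment from a to b stays farther than `radius` from every other soma."""
    d = b - a
    t = np.clip(((others - a) @ d) / (d @ d), 0.0, 1.0)   # closest point on the segment
    closest = a + t[:, None] * d
    return np.all(np.linalg.norm(others - closest, axis=1) > radius)

adj = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(i + 1, n):
        others = np.delete(pos, [i, j], axis=0)
        if segment_clear(pos[i], pos[j], others, soma_radius):
            adj[i, j] = adj[j, i] = True

print("mean degree:", adj.sum(axis=1).mean())
```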


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Neurons/physiology , Electroencephalography , Machine Learning , Models, Neurological
2.
Nat Commun ; 15(1): 4829, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844438

ABSTRACT

Orientation or axial selectivity, the property of neurons in the visual system to respond preferentially to certain angles of visual stimuli, plays a pivotal role in our understanding of visual perception and information processing. This computation is performed as early as the retina, and although much work has established the cellular mechanisms of retinal orientation selectivity, how this computation is organized across the retina is unknown. Using a large dataset collected across the mouse retina, we demonstrate functional organization rules of retinal orientation selectivity. First, we identify three major functional classes of retinal cells that are orientation selective and match previous descriptions. Second, we show that one orientation is predominantly represented in the retina and that this predominant orientation changes as a function of retinal location. Third, we demonstrate that neural activity plays little role in the organization of retinal orientation selectivity. Lastly, we use in silico modeling followed by validation experiments to demonstrate that the overrepresented orientation aligns along concentric axes. These results demonstrate that, similar to direction selectivity, orientation selectivity is organized in a functional map as early as the retina.


Subject(s)
Orientation , Retina , Animals , Retina/physiology , Mice , Orientation/physiology , Photic Stimulation , Mice, Inbred C57BL , Computer Simulation , Visual Perception/physiology , Models, Neurological , Orientation, Spatial/physiology , Retinal Ganglion Cells/physiology
3.
Bull Math Biol ; 86(7): 82, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38837083

ABSTRACT

Many neurodegenerative diseases (NDs) are characterized by the slow spatial spread of toxic protein species in the brain. The toxic proteins can induce neuronal stress, triggering the Unfolded Protein Response (UPR), which slows or stops protein translation and can indirectly reduce the toxic load. However, the UPR may also trigger processes leading to apoptotic cell death, and the UPR is implicated in the progression of several NDs. In this paper, we develop a novel mathematical model to describe the spatiotemporal dynamics of the UPR mechanism for prion diseases. Our model is centered around a single neuron, with representative proteins P (healthy) and S (toxic) interacting via heterodimer dynamics (S interacts with P to form two S's). The model takes the form of a coupled system of nonlinear reaction-diffusion equations with a delayed, nonlinear flux for P (the delay arising from the UPR). Through the delay, we find parameter regimes that exhibit oscillations in the P- and S-protein levels. We find that oscillations are more pronounced when the S-clearance rate and S-diffusivity are small in comparison to the P-clearance rate and P-diffusivity, respectively. The oscillations become more pronounced as delays in initiating the UPR increase. We also consider quasi-realistic clinical parameters to understand how possible drug therapies can alter the course of a prion disease. We find that decreasing the production of P, decreasing the recruitment rate, increasing the diffusivity of S, increasing the UPR S-threshold, and increasing the S-clearance rate appear to be the most powerful modifications for reducing the mean UPR intensity and potentially moderating the disease progression.
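As a minimal illustration of the kind of dynamics described (parameter values, the quartic UPR shutdown function, and the omission of spatial diffusion are all assumptions), the sketch below integrates a delayed heterodimer ODE in which production of the healthy protein P is suppressed by the toxic load S a delay tau earlier.

```python
# Minimal delayed heterodimer sketch: UPR-like feedback suppresses production of
# the healthy protein P based on the toxic load S a delay tau earlier.
# Parameter values and the quartic shutdown function are assumptions.
import numpy as np

k0, k1, k2, k12 = 1.0, 0.5, 0.2, 1.0      # production, P clearance, S clearance, conversion
tau, S_half = 20.0, 0.5                    # UPR delay and half-shutdown toxic load
dt, T = 0.01, 400.0
steps, lag = int(T / dt), int(tau / dt)

P, S = np.zeros(steps), np.zeros(steps)
P[0], S[0] = k0 / k1, 0.05                 # healthy steady state plus a small toxic seed

for t in range(steps - 1):
    S_delayed = S[max(t - lag, 0)]
    production = k0 / (1.0 + (S_delayed / S_half) ** 4)   # delayed, nonlinear flux for P
    conversion = k12 * S[t] * P[t]                        # S + P -> 2S
    P[t + 1] = P[t] + dt * (production - k1 * P[t] - conversion)
    S[t + 1] = S[t] + dt * (conversion - k2 * S[t])

print("final P, S:", round(P[-1], 3), round(S[-1], 3))
```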


Subject(s)
Mathematical Concepts , Models, Neurological , Neurons , Prion Diseases , Unfolded Protein Response , Unfolded Protein Response/physiology , Prion Diseases/metabolism , Prion Diseases/pathology , Prion Diseases/physiopathology , Neurons/metabolism , Humans , Animals , Nonlinear Dynamics , Computer Simulation , Prions/metabolism , Spatio-Temporal Analysis , Apoptosis
4.
Chaos ; 34(6)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38838102

ABSTRACT

This paper introduces two novel scores for detecting local perturbations in networks. For this, we consider a non-Euclidean representation of networks, namely, their embedding onto the Poincaré disk model of hyperbolic geometry. We numerically evaluate the performances of these scores for the detection and localization of perturbations on homogeneous and heterogeneous network models. To illustrate our approach, we study latent geometric representations of real brain networks to identify and quantify the impact of epilepsy surgery on brain regions. Results suggest that our approach can provide a powerful tool for representing and analyzing changes in brain networks following surgical intervention, marking the first application of geometric network embedding in epilepsy research.
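The two scores themselves are not given in the abstract; the sketch below shows only the geometric ingredient, the hyperbolic distance on the Poincaré disk, together with a simple assumed per-node perturbation score defined as each node's hyperbolic displacement between two embeddings.

```python
# Hyperbolic distance on the Poincare disk plus an assumed per-node perturbation
# score: each node's hyperbolic displacement between a "before" and "after" embedding.
import numpy as np

def poincare_distance(u, v):
    """Hyperbolic distance between points strictly inside the unit disk."""
    diff2 = np.sum((u - v) ** 2, axis=-1)
    denom = (1.0 - np.sum(u ** 2, axis=-1)) * (1.0 - np.sum(v ** 2, axis=-1))
    return np.arccosh(1.0 + 2.0 * diff2 / denom)

rng = np.random.default_rng(1)
emb_pre = rng.uniform(-0.5, 0.5, size=(20, 2))   # embedding before the perturbation
emb_post = emb_pre.copy()
emb_post[3, 0] += 0.2                             # locally perturb a single node

scores = poincare_distance(emb_pre, emb_post)     # per-node displacement score
print("most perturbed node:", int(np.argmax(scores)))
```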


Subject(s)
Brain , Nerve Net , Humans , Nerve Net/physiology , Brain/physiology , Epilepsy/physiopathology , Models, Neurological , Algorithms , Computer Simulation
5.
Phys Rev Lett ; 132(21): 218403, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38856286

ABSTRACT

Sleep is characterized by nonrapid eye movement sleep, originating from widespread neuronal synchrony, and rapid eye movement sleep, with neuronal desynchronization akin to waking behavior. While these were thought to be global brain states, recent research suggests otherwise. Using time-frequency analysis of mesoscopic voltage-sensitive dye recordings of mice in a urethane-anesthetized model of sleep, we find transient neural desynchronization occurring heterogeneously across the cortex within a background of synchronized neural activity, in a manner reminiscent of a critical spreading process and indicative of an "edge-of-synchronization" phase transition.


Subject(s)
Sleep , Animals , Mice , Sleep/physiology , Neurons/physiology , Models, Neurological , Spatio-Temporal Analysis , Electroencephalography/methods , Brain/physiology
6.
Philos Trans R Soc Lond B Biol Sci ; 379(1906): 20230224, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-38853547

ABSTRACT

Synapses form trillions of connections in the brain. Long-term potentiation (LTP) and long-term depression (LTD) are cellular mechanisms vital for learning that modify the strength and structure of synapses. Three-dimensional reconstruction from serial section electron microscopy reveals three distinct pre- to post-synaptic arrangements: strong active zones (AZs) with tightly docked vesicles, weak AZs with loose or non-docked vesicles, and nascent zones (NZs) with a postsynaptic density but no presynaptic vesicles. Importantly, LTP can be temporarily saturated preventing further increases in synaptic strength. At the onset of LTP, vesicles are recruited to NZs, converting them to AZs. During recovery of LTP from saturation (1-4 h), new NZs form, especially on spines where AZs are most enlarged by LTP. Sentinel spines contain smooth endoplasmic reticulum (SER), have the largest synapses and form clusters with smaller spines lacking SER after LTP recovers. We propose a model whereby NZ plasticity provides synapse-specific AZ expansion during LTP and loss of weak AZs that drive synapse shrinkage during LTD. Spine clusters become functionally engaged during LTP or disassembled during LTD. Saturation of LTP or LTD probably acts to protect recently formed memories from ongoing plasticity and may account for the advantage of spaced over massed learning. This article is part of a discussion meeting issue 'Long-term potentiation: 50 years on'.


Subject(s)
Long-Term Potentiation , Long-Term Synaptic Depression , Neuronal Plasticity , Synapses , Long-Term Potentiation/physiology , Synapses/physiology , Long-Term Synaptic Depression/physiology , Animals , Neuronal Plasticity/physiology , Models, Neurological , Dendritic Spines/physiology
7.
Philos Trans R Soc Lond B Biol Sci ; 379(1906): 20230235, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-38853561

ABSTRACT

Which proportion of the long-term potentiation (LTP) expressed in the bulk of excitatory synapses is postsynaptic and which presynaptic remains debatable. To understand better the possible impact of either LTP form, we explored a realistic model of a CA1 pyramidal cell equipped with known membrane mechanisms and multiple, stochastic excitatory axo-spinous synapses. Our simulations were designed to establish an input-output transfer function, the dependence between the frequency of presynaptic action potentials triggering probabilistic synaptic discharges and the average frequency of postsynaptic spiking. We found that, within the typical physiological range, potentiation of the postsynaptic current results in a greater overall output than an equivalent increase in presynaptic release probability. This difference grows stronger at lower input frequencies and lower release probabilities. Simulations with a non-hierarchical circular network of principal neurons indicated that equal increases in either synaptic fidelity or synaptic strength of individual connections also produce distinct changes in network activity, although the network phenomenology is likely to be complex. These observations should help to interpret the machinery of LTP phenomena documented in situ. This article is part of a discussion meeting issue 'Long-term potentiation: 50 years on'.
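A toy reduction of the comparison (not the paper's detailed CA1 pyramidal-cell model): a leaky integrate-and-fire unit driven by stochastic synapses, contrasting LTP expressed as a larger quantal amplitude q against LTP expressed as a higher release probability p. All parameter values are assumptions.

```python
# Toy reduction (not the paper's detailed CA1 model): a leaky integrate-and-fire
# neuron driven by stochastic synapses. Compare output when LTP is expressed as a
# larger quantal amplitude q versus a higher release probability p.
import numpy as np

def output_rate(p_release, q, n_syn=100, rate_in=10.0, T=20.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    tau_m, v_th, v, spikes = 0.02, 1.0, 0.0, 0
    for _ in range(int(T / dt)):
        pre = rng.random(n_syn) < rate_in * dt        # presynaptic spikes this bin
        released = rng.random(n_syn) < p_release      # probabilistic vesicle release
        v += dt * (-v / tau_m) + q * np.sum(pre & released)
        if v >= v_th:
            spikes += 1
            v = 0.0
    return spikes / T

print("baseline (p=0.3, q=0.10):", output_rate(0.30, 0.10))
print("postsynaptic LTP (q x1.5):", output_rate(0.30, 0.15))
print("presynaptic LTP  (p x1.5):", output_rate(0.45, 0.10))
```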


Subject(s)
Long-Term Potentiation , Models, Neurological , Synapses , Long-Term Potentiation/physiology , Synapses/physiology , Pyramidal Cells/physiology , Animals , Computer Simulation , Action Potentials/physiology , CA1 Region, Hippocampal/physiology
8.
Philos Trans R Soc Lond B Biol Sci ; 379(1906): 20230237, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-38853570

ABSTRACT

The synaptic tagging and capture (STC) hypothesis lays out the framework for the synapse-specific mechanism of protein synthesis-dependent long-term plasticity upon synaptic induction. Activated synapses display a transient tag that captures plasticity-related products (PRPs). These two events, tag setting and PRP synthesis, can be teased apart and have been studied extensively, from their electrophysiological and pharmacological properties to the molecular events involved. Consequently, the hypothesis also permits interactions of synaptic populations that encode different memories within the same neuronal population; hence, it gives rise to the associativity of plasticity. In this review, recent advances and progress since the experimental debut of the STC hypothesis will be shared. This includes the role of neuromodulation in PRP synthesis and tag integrity, behavioural correlates of the hypothesis and modelling in silico. STC, as a more sensitive assay for synaptic health, can also assess neuronal aberrations. We will also expound on how synaptic plasticity and associativity are altered in ageing-related decline and pathological conditions such as juvenile stress, cancer, sleep deprivation and Alzheimer's disease. This article is part of a discussion meeting issue 'Long-term potentiation: 50 years on'.


Subject(s)
Brain , Memory , Neuronal Plasticity , Synapses , Synapses/physiology , Humans , Neuronal Plasticity/physiology , Brain/physiology , Memory/physiology , Animals , Models, Neurological
9.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38851297

ABSTRACT

The development of the human central nervous system begins in the early embryonic period and continues until long after delivery. It has been shown that several neurological and neuropsychiatric diseases originate from prenatal incidents. Mathematical models offer a direct way to understand neurodevelopmental processes better. Mathematical modelling of neurodevelopment during the embryonic period is challenging in terms of how to 'Approach' the problem: how to initiate modelling and how to propose appropriate equations that fit the underlying dynamics of neurodevelopment while including the variety of elements that are naturally built into the process. It is imperative to answer where and how to start modelling; in other words, what is the appropriate 'Approach'? Therefore, one objective of this study was to tackle the mathematical issue broadly from different aspects and approaches. The approaches were divided into three embryonic categories: cell division, neural tube growth and neural plate growth. We concluded that the neural plate growth approach provides a more suitable platform for simulating brain formation and neurodevelopment than cell division or neural tube growth. We devised a novel equation and designed geometrical and topological algorithms that capture most of the necessary elements of the neurodevelopmental process during the embryonic period. Hence, the proposed equations and the defined mathematical structure provide a platform for generating an artificial neural network that autonomously grows and develops.


Subject(s)
Neural Tube , Humans , Neural Tube/embryology , Neurogenesis , Neurons/cytology , Algorithms , Models, Neurological , Animals , Neural Networks, Computer , Cell Division , Embryonic Development , Neural Plate/cytology , Neural Plate/embryology
10.
J Vis ; 24(6): 1, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829629

ABSTRACT

Computational models of the primary visual cortex (V1) have suggested that V1 neurons behave like Gabor filters followed by simple nonlinearities. However, recent work employing convolutional neural network (CNN) models has suggested that V1 relies on far more nonlinear computations than previously thought. Specifically, unit responses in an intermediate layer of VGG-19 were found to best predict macaque V1 responses to thousands of natural and synthetic images. Here, we evaluated the hypothesis that the poor performance of lower layer units in VGG-19 might be attributable to their small receptive field size rather than to their lack of complexity per se. We compared VGG-19 with AlexNet, which has much larger receptive fields in its lower layers. Whereas the best-performing layer of VGG-19 occurred after seven nonlinear steps, the first convolutional layer of AlexNet best predicted V1 responses. Although the predictive accuracy of VGG-19 was somewhat better than that of standard AlexNet, we found that a modified version of AlexNet could match the performance of VGG-19 after only a few nonlinear computations. Control analyses revealed that decreasing the size of the input images caused the best-performing layer of VGG-19 to shift to a lower layer, consistent with the hypothesis that the relationship between image size and receptive field size can strongly affect model performance. We conducted additional analyses using a Gabor pyramid model to test for nonlinear contributions of normalization and contrast saturation. Overall, our findings suggest that the feedforward responses of V1 neurons can be well explained by assuming only a few nonlinear processing stages.
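A sketch of the layer-comparison logic under assumed data shapes: ridge-regress each candidate layer's activations onto recorded V1 responses and compare cross-validated accuracy. Feature extraction is omitted; the placeholder arrays stand in for, e.g., AlexNet conv1 or VGG-19 intermediate-layer activations, so the printed R^2 values carry no real information.

```python
# Layer comparison via cross-validated ridge regression. The feature arrays are
# placeholders for extracted CNN activations; only the analysis pattern is shown.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images = 500
v1_response = rng.normal(size=n_images)                  # placeholder recorded responses

layer_features = {                                       # placeholder activations per layer
    "alexnet_conv1": rng.normal(size=(n_images, 200)),
    "vgg19_mid_layer": rng.normal(size=(n_images, 200)),
}

for name, X in layer_features.items():
    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    r2 = cross_val_score(model, X, v1_response, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.3f}")
```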


Subject(s)
Neural Networks, Computer , Neurons , Animals , Neurons/physiology , Primary Visual Cortex/physiology , Photic Stimulation/methods , Models, Neurological , Macaca , Visual Cortex/physiology , Nonlinear Dynamics
11.
Commun Biol ; 7(1): 689, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38839931

ABSTRACT

Advanced methods such as REACT have allowed the integration of fMRI with the brain's receptor landscape, providing novel insights transcending the multiscale organisation of the brain. Similarly, normative modelling has allowed translational neuroscience to move beyond group-average differences and characterise deviations from health at an individual level. Here, we bring these methods together for the first time. We used REACT to create functional networks enriched with the main modulatory, inhibitory, and excitatory neurotransmitter systems and generated normative models of these networks to capture functional connectivity deviations in patients with schizophrenia, bipolar disorder (BPD), and ADHD. Substantial overlap was seen in symptomatology and deviations from normality across groups, but these could be mapped into a common space linking constellations of symptoms through to underlying neurobiology transdiagnostically. This work provides impetus for developing novel biomarkers that characterise molecular- and systems-level dysfunction at the individual level, facilitating the transition towards mechanistically targeted treatments.
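A minimal sketch of the normative-modelling step only (the REACT receptor-enriched networks are not reproduced): fit a reference model of a network measure on healthy controls and express each patient as a z-scored deviation from that norm. The age covariate, linear norm, and toy values are assumptions.

```python
# Normative-modelling sketch: fit a reference model on healthy controls, then
# score patients as z-deviations from the predicted norm. All numbers are toy.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
age_hc = rng.uniform(18, 65, 200)
conn_hc = 0.5 - 0.003 * age_hc + rng.normal(0, 0.05, 200)    # toy control connectivity

norm = LinearRegression().fit(age_hc[:, None], conn_hc)
sigma = np.std(conn_hc - norm.predict(age_hc[:, None]))      # residual spread of the norm

age_pt = np.array([30.0, 52.0])
conn_pt = np.array([0.25, 0.40])                             # toy patient values
z_dev = (conn_pt - norm.predict(age_pt[:, None])) / sigma    # individual deviation scores
print("patient deviations (z):", z_dev.round(2))
```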


Subject(s)
Magnetic Resonance Imaging , Schizophrenia , Humans , Schizophrenia/physiopathology , Schizophrenia/diagnostic imaging , Adult , Male , Brain/physiopathology , Brain/diagnostic imaging , Female , Bipolar Disorder/physiopathology , Attention Deficit Disorder with Hyperactivity/physiopathology , Attention Deficit Disorder with Hyperactivity/diagnostic imaging , Mental Disorders/physiopathology , Mental Disorders/diagnostic imaging , Young Adult , Models, Neurological , Middle Aged , Nerve Net/physiopathology , Nerve Net/diagnostic imaging
12.
Chaos ; 34(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38717399

ABSTRACT

Neuronal activity gives rise to behavior, and behavior influences neuronal dynamics, in a closed-loop control system. Is it possible then, to find a relationship between the statistical properties of behavior and neuronal dynamics? Measurements of neuronal activity and behavior have suggested a direct relationship between scale-free neuronal and behavioral dynamics. Yet, these studies captured only local dynamics in brain sub-networks. Here, we investigate the relationship between internal dynamics and output statistics in a mathematical model system where we have access to the dynamics of all network units. We train a recurrent neural network (RNN), initialized in a high-dimensional chaotic state, to sustain behavioral states for durations following a power-law distribution as observed experimentally. Changes in network connectivity due to training affect the internal dynamics of neuronal firings, leading to neuronal avalanche size distributions approximating power-laws over some ranges. Yet, randomizing the changes in network connectivity can leave these power-law features largely unaltered. Specifically, whereas neuronal avalanche duration distributions show some variations between RNNs with trained and randomized decoders, neuronal avalanche size distributions are invariant, in the total population and in output-correlated sub-populations. This is true independent of whether the randomized decoders preserve power-law distributed behavioral dynamics. This demonstrates that a one-to-one correspondence between the considered statistical features of behavior and neuronal dynamics cannot be established and their relationship is non-trivial. Our findings also indicate that statistical properties of the intrinsic dynamics may be preserved, even as the internal state responsible for generating the desired output dynamics is perturbed.
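The avalanche statistic referenced above follows the standard definition: a run of consecutive time bins with nonzero population activity, bounded by empty bins, whose size is the total spike count in the run. The sketch below uses synthetic Poisson counts in place of the RNN's unit activity.

```python
# Standard avalanche definition applied to synthetic per-bin population spike
# counts (stand-ins for the RNN's activity): an avalanche is a run of nonzero
# bins bounded by empty bins; its size is the total spike count in the run.
import numpy as np

rng = np.random.default_rng(0)
spike_counts = rng.poisson(0.8, size=10_000)      # per-bin population spike counts

sizes, current = [], 0
for c in spike_counts:
    if c > 0:
        current += c
    elif current > 0:                             # an empty bin closes the avalanche
        sizes.append(current)
        current = 0
if current > 0:
    sizes.append(current)

sizes = np.array(sizes)
vals, counts = np.unique(sizes, return_counts=True)
for v, p in list(zip(vals, counts / len(sizes)))[:5]:
    print(f"P(size = {v}) = {p:.3f}")
```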


Subject(s)
Models, Neurological , Neurons , Neurons/physiology , Neural Networks, Computer , Nerve Net/physiology , Nonlinear Dynamics , Behavior , Humans , Animals
13.
Commun Biol ; 7(1): 555, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724614

ABSTRACT

Spatio-temporal activity patterns have been observed in a variety of brain areas in spontaneous activity, prior to or during action, or in response to stimuli. Biological mechanisms endowing neurons with the ability to distinguish between different sequences remain largely unknown. Learning sequences of spikes raises multiple challenges, such as maintaining in memory spike history and discriminating partially overlapping sequences. Here, we show that anti-Hebbian spike-timing dependent plasticity (STDP), as observed at cortico-striatal synapses, can naturally lead to learning spike sequences. We design a spiking model of the striatal output neuron receiving spike patterns defined as sequential input from a fixed set of cortical neurons. We use a simple synaptic plasticity rule that combines anti-Hebbian STDP and non-associative potentiation for a subset of the presented patterns called rewarded patterns. We study the ability of striatal output neurons to discriminate rewarded from non-rewarded patterns by firing only after the presentation of a rewarded pattern. In particular, we show that two biological properties of striatal networks, spiking latency and collateral inhibition, contribute to an increase in accuracy, by allowing a better discrimination of partially overlapping sequences. These results suggest that anti-Hebbian STDP may serve as a biological substrate for learning sequences of spikes.
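A sketch of an anti-Hebbian STDP kernel of the kind described for cortico-striatal synapses, with the sign of the usual Hebbian window reversed: pre-before-post pairings depress the synapse and post-before-pre pairings potentiate it. Amplitudes and time constants are illustrative assumptions.

```python
# Anti-Hebbian STDP kernel (sign-reversed relative to Hebbian STDP): pre-before-
# post depresses, post-before-pre potentiates. Amplitudes/time constant assumed.
import numpy as np

A_ltd, A_ltp, tau = 0.010, 0.008, 0.020          # amplitudes and 20 ms window

def anti_hebbian_dw(t_pre, t_post):
    """Weight change for one pre/post spike pairing (times in seconds)."""
    dt = t_post - t_pre
    if dt > 0:                                   # pre leads post -> depression
        return -A_ltd * np.exp(-dt / tau)
    return +A_ltp * np.exp(dt / tau)             # post leads pre -> potentiation

w = 0.5
for t_pre, t_post in [(0.000, 0.010), (0.100, 0.095), (0.200, 0.205)]:
    w += anti_hebbian_dw(t_pre, t_post)
print("weight after three pairings:", round(w, 4))
```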


Subject(s)
Corpus Striatum , Learning , Neuronal Plasticity , Neuronal Plasticity/physiology , Learning/physiology , Corpus Striatum/physiology , Models, Neurological , Animals , Action Potentials/physiology , Neurons/physiology , Humans
14.
J Math Biol ; 89(1): 3, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740613

ABSTRACT

Dynamical systems on networks typically involve several dynamical processes evolving at different timescales. For instance, in Alzheimer's disease, the spread of toxic protein throughout the brain not only disrupts neuronal activity but is also influenced by neuronal activity itself, establishing a feedback loop between the fast neuronal activity and the slow protein spreading. Motivated by the case of Alzheimer's disease, we study the multiple-timescale dynamics of a heterodimer spreading process on an adaptive network of Kuramoto oscillators. Using a minimal two-node model, we establish that heterogeneous oscillatory activity facilitates toxic outbreaks and induces symmetry breaking in the spreading patterns. We then extend the model formulation to larger networks and perform numerical simulations of the slow-fast dynamics on common network motifs and on the brain connectome. The simulations corroborate the findings from the minimal model, underscoring the significance of multiple-timescale dynamics in the modeling of neurodegenerative diseases.
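A minimal two-node sketch in the spirit of the slow-fast setup; the coupling of toxic transport to phase velocity, the rate constants, and the timescale separation are all assumptions rather than the paper's equations.

```python
# Two-node slow-fast sketch: fast Kuramoto phases, slow healthy/toxic protein
# pools with heterodimer conversion, and toxic transport weighted by local
# activity. All functional forms and constants are assumptions.
import numpy as np

dt, steps, eps = 1e-3, 100_000, 0.02            # eps sets the timescale separation
omega = np.array([1.0, 1.3])                    # heterogeneous natural frequencies
theta = np.array([0.0, 0.5])
k_sync = 0.8

p = np.array([1.0, 1.0])                        # healthy protein per node
s = np.array([0.05, 0.0])                       # toxic seed on node 0
conv, clear, transport, production = 1.0, 0.3, 0.5, 0.5

for _ in range(steps):
    coupling = k_sync * np.sin(theta[::-1] - theta)       # fast phase dynamics
    theta = theta + dt * (omega + coupling)
    activity = np.abs(omega + coupling)                   # local phase velocity
    flux = transport * (activity[0] * s[0] - activity[1] * s[1])
    ds = conv * s * p - clear * s + np.array([-flux, +flux])
    dp = production - clear * p - conv * s * p
    s = s + dt * eps * ds                                  # slow spreading dynamics
    p = p + dt * eps * dp

print("toxic load per node:", s.round(3))
```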


Subject(s)
Alzheimer Disease , Brain , Computer Simulation , Mathematical Concepts , Models, Neurological , Neurons , Humans , Alzheimer Disease/physiopathology , Neurons/physiology , Brain/physiopathology , Connectome , Neurodegenerative Diseases/physiopathology , Neurodegenerative Diseases/pathology , Nerve Net/physiopathology , Nerve Net/physiology
15.
J Math Biol ; 89(1): 4, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750128

ABSTRACT

A system of partial differential equations is developed to study the spreading of tau pathology in the brain in Alzheimer's and other neurodegenerative diseases. Two cases are considered. The first assumes intracellular diffusion through synaptic activity or the nanotubes that connect adjacent cells. The second, in addition to intracellular spreading, takes into account the secretion of tau species that are able to diffuse, move with the interstitial fluid flow and subsequently be taken up by surrounding cells, providing an alternative pathway for disease spreading. Cross-membrane transport of the tau species is considered, enabling us to examine the role of extracellular clearance of tau protein in disease status. Bifurcation analysis is carried out for the steady states of the spatially homogeneous system, yielding the result that fast cross-membrane transport combined with effective extracellular clearance is key to maintaining the brain's healthy status. Numerical simulations of the first case exhibit travelling-wave solutions describing the gradual outward spreading of the pathology, whereas the second case shows faster spreading, with the buildup of neurofibrillary tangles quickly becoming elevated throughout. Our investigation thus indicates that the gradual progression of the intracellular-spreading case is more consistent with clinical observations of the development of Alzheimer's disease.
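An illustrative 1D finite-difference sketch (assumed parameters) of the intracellular-spreading case: a healthy/toxic pair in which only the toxic species diffuses, producing a front that travels outward from a localized seed.

```python
# 1D finite-difference sketch of the intracellular-spreading case: only the
# toxic species diffuses; a front travels outward from a seed at x = 0.
# All parameters are assumptions.
import numpy as np

nx, L = 200, 100.0
dx, dt, steps = L / nx, 0.01, 6_000
D_s = 0.5                                    # toxic diffusivity
k0, k1, k2, k12 = 1.0, 1.0, 0.3, 1.0         # production, clearances, conversion

p = np.full(nx, k0 / k1)                     # healthy protein at steady state
s = np.zeros(nx)
s[:5] = 0.1                                  # toxic seed near the left boundary

for _ in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (s[2:] - 2 * s[1:-1] + s[:-2]) / dx**2    # interior Laplacian
    conversion = k12 * s * p
    s = s + dt * (D_s * lap + conversion - k2 * s)
    p = p + dt * (k0 - k1 * p - conversion)

front = np.argmax(s < 0.5 * s.max()) * dx
print("approximate front position:", round(float(front), 1))
```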


Subject(s)
Alzheimer Disease , Brain , Computer Simulation , Mathematical Concepts , Neurodegenerative Diseases , tau Proteins , tau Proteins/metabolism , Humans , Alzheimer Disease/metabolism , Alzheimer Disease/pathology , Neurodegenerative Diseases/metabolism , Neurodegenerative Diseases/pathology , Brain/metabolism , Brain/pathology , Models, Neurological , Neurofibrillary Tangles/metabolism , Neurofibrillary Tangles/pathology , Models, Biological , Disease Progression , Tauopathies/metabolism , Tauopathies/pathology
16.
PLoS Comput Biol ; 20(5): e1012074, 2024 May.
Article in English | MEDLINE | ID: mdl-38696532

ABSTRACT

We investigate the ability of the pairwise maximum entropy (PME) model to describe the spiking activity of large populations of neurons recorded from the visual, auditory, motor, and somatosensory cortices. To quantify this performance, we use (1) Kullback-Leibler (KL) divergences, (2) the extent to which the pairwise model predicts third-order correlations, and (3) its ability to predict the probability that multiple neurons are simultaneously active. We compare these with the performance of a model with independent neurons and study the relationship between the different performance measures, while varying the population size, the mean firing rate of the chosen population, and the bin size used for binarizing the data. We confirm the previously reported excellent performance of the PME model for small population sizes, N < 20, but we also find that larger mean firing rates and bin sizes generally decrease performance. The performance for larger populations was generally not as good: for large populations, pairwise models may be good in terms of predicting third-order correlations and the probability of multiple neurons being active, but they remain significantly worse than for small populations in terms of their improvement over the independent model in KL divergence. We show that these results are independent of the cortical area and of whether approximate methods or Boltzmann learning are used for inferring the pairwise couplings. We compare the scaling of the inferred couplings with N and find it to be well explained by the Sherrington-Kirkpatrick (SK) model, whose strong-coupling regime shows a complex phase with many metastable states. We find that, up to the maximum population size studied here, the fitted PME model remains outside its complex phase. However, the standard deviation of the couplings compared to their mean increases, and the model gets closer to the boundary of the complex phase as the population size grows.
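A sketch of one of the performance measures listed above: for a small synthetic population, the KL divergence between the empirical distribution of binary activity patterns and an independent-neuron model. Fitting the pairwise maximum-entropy couplings themselves (e.g., by Boltzmann learning) is not shown.

```python
# KL divergence between the empirical pattern distribution of a small synthetic
# population and its independent-neuron model; PME fitting itself is not shown.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
N, T = 5, 50_000
common = rng.random(T) < 0.1                        # shared drive -> correlations
spikes = (rng.random((T, N)) < 0.05) | common[:, None]

patterns = Counter(map(tuple, spikes.astype(int)))
p_emp = {k: v / T for k, v in patterns.items()}
rates = spikes.mean(axis=0)                          # per-neuron firing probabilities

def p_indep(pattern):
    return np.prod([r if x else 1.0 - r for x, r in zip(pattern, rates)])

kl = sum(p * np.log2(p / p_indep(k)) for k, p in p_emp.items())
print(f"D_KL(empirical || independent) = {kl:.4f} bits")
```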


Subject(s)
Entropy , Models, Neurological , Neurons , Animals , Neurons/physiology , Cerebral Cortex/physiology , Action Potentials/physiology , Computational Biology , Computer Simulation
17.
Chaos ; 34(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38775681

ABSTRACT

We consider a heterogeneous, globally coupled population of excitatory quadratic integrate-and-fire neurons with excitability adaptation due to a metabolic feedback associated with ketogenic diet, a form of therapy for epilepsy. Bifurcation analysis of a three-dimensional mean-field system derived in the framework of next-generation neural mass models allows us to explain the scenarios and suggest control strategies for the transitions between the neurophysiologically desired asynchronous states and the synchronous, seizure-like states featuring collective oscillations. We reveal two qualitatively different scenarios for the onset of synchrony. For weaker couplings, a bistability region between the lower- and the higher-activity asynchronous states unfolds from the cusp point, and the collective oscillations emerge via a supercritical Hopf bifurcation. For stronger couplings, one finds seven co-dimension two bifurcation points, including pairs of Bogdanov-Takens and generalized Hopf points, such that both lower- and higher-activity asynchronous states undergo transitions to collective oscillations, with hysteresis and jump-like behavior observed in the vicinity of subcritical Hopf bifurcations. We demonstrate three control mechanisms for switching between asynchronous and synchronous states, involving parametric perturbation of the adenosine triphosphate (ATP) production rate, external stimulation currents, or pulse-like ATP shocks, and indicate a potential therapeutic advantage of hysteretic scenarios.
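The mean-field core referred to here is the next-generation (Montbrió-Pazó-Roxin) firing-rate reduction of a heterogeneous QIF population; the sketch below integrates its dimensionless form with a generic slow adaptation variable standing in for the metabolic/ATP feedback. The adaptation equation and all parameter values are assumptions, not the paper's three-dimensional system.

```python
# Dimensionless Montbrio-Pazo-Roxin mean-field for a heterogeneous QIF population,
# with an assumed slow adaptation variable `a` standing in for metabolic feedback.
import numpy as np

delta, eta_bar, J = 1.0, -5.0, 15.0        # heterogeneity, mean drive, coupling
tau_a, beta = 10.0, 3.0                    # assumed adaptation timescale and strength
dt, steps = 1e-3, 100_000

r, v, a = 0.1, -2.0, 0.0
trace = np.empty(steps)
for t in range(steps):
    dr = delta / np.pi + 2.0 * r * v
    dv = v**2 + eta_bar - beta * a + J * r - (np.pi * r) ** 2
    da = (r - a) / tau_a                   # activity-dependent adaptation (assumed form)
    r, v, a = r + dt * dr, v + dt * dv, a + dt * da
    trace[t] = r

print("late-time firing-rate range:",
      round(trace[-50_000:].min(), 3), round(trace[-50_000:].max(), 3))
```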


Subject(s)
Adaptation, Physiological , Diet, Ketogenic , Models, Neurological , Neurons , Seizures , Neurons/metabolism , Seizures/physiopathology , Humans , Adenosine Triphosphate/metabolism
18.
Dialogues Clin Neurosci ; 26(1): 1-23, 2024.
Article in English | MEDLINE | ID: mdl-38767966

ABSTRACT

We introduce here a general model of Functional Neurological Disorders based on the following hypothesis: a Functional Neurological Disorder could correspond to a consciously initiated voluntary top-down process causing involuntary lasting consequences that are consciously experienced and subjectively interpreted by the patient as involuntary. We develop this central hypothesis according to Global Neuronal Workspace theory of consciousness, that is particularly suited to describe interactions between conscious and non-conscious cognitive processes. We then present a list of predictions defining a research program aimed at empirically testing their validity. Finally, this general model leads us to reinterpret the long-debated links between hypnotic suggestion and functional neurological disorders. Driven by both scientific and therapeutic goals, this theoretical paper aims at bringing closer the psychiatric and neurological worlds of functional neurological disorders with the latest developments of cognitive neuroscience of consciousness.


Subject(s)
Consciousness , Nervous System Diseases , Humans , Nervous System Diseases/psychology , Nervous System Diseases/physiopathology , Consciousness/physiology , Models, Neurological , Neurons/physiology , Brain/physiopathology , Brain/physiology
19.
PLoS One ; 19(5): e0303822, 2024.
Article in English | MEDLINE | ID: mdl-38771746

ABSTRACT

This paper provides a comprehensive and computationally efficient case study for uncertainty quantification (UQ) and global sensitivity analysis (GSA) in a neuron model incorporating ion concentration dynamics. We address how challenges with UQ and GSA in this context can be approached and solved, including challenges related to computational cost, parameters affecting the system's resting state, and the presence of both fast and slow dynamics. Specifically, we analyze the electrodiffusive neuron-extracellular-glia (edNEG) model, which captures electrical potentials, ion concentrations (Na+, K+, Ca2+, and Cl-), and volume changes across six compartments. Our methodology includes a UQ procedure assessing the model's reliability and susceptibility to input uncertainty and a variance-based GSA identifying the most influential input parameters. To mitigate computational costs, we employ surrogate modeling techniques, optimized using efficient numerical integration methods. We propose a strategy for isolating parameters affecting the resting state and analyze the edNEG model dynamics under both physiological and pathological conditions. The influence of uncertain parameters on model outputs, particularly during spiking dynamics, is systematically explored. Rapid dynamics of membrane potentials necessitate a focus on informative spiking features, while slower variations in ion concentrations allow a meaningful study at each time point. Our study offers valuable guidelines for future UQ and GSA investigations on neuron models with ion concentration dynamics, contributing to the broader application of such models in computational neuroscience.
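A sketch of the variance-based GSA step using SALib's Saltelli sampling and Sobol analysis, applied to a stand-in function of three conductance-like parameters; the edNEG model, its surrogate, and the spiking features are not reproduced here.

```python
# Variance-based GSA with SALib on a stand-in model: Saltelli sampling followed
# by Sobol index estimation. Parameter names, bounds, and the toy output are
# illustrative assumptions, not the edNEG model.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["g_Na", "g_K", "g_leak"],                    # hypothetical parameters
    "bounds": [[80.0, 160.0], [20.0, 50.0], [0.1, 0.5]],
}

def toy_output(x):
    g_na, g_k, g_leak = x                                  # stand-in spiking feature
    return 0.2 * g_na - 0.5 * g_k - 40.0 * g_leak + 0.01 * g_na * g_k

X = saltelli.sample(problem, 1024)                         # N * (2D + 2) parameter sets
Y = np.array([toy_output(x) for x in X])
Si = sobol.analyze(problem, Y)
print("first-order Sobol indices:", dict(zip(problem["names"], Si["S1"].round(3))))
```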


Subject(s)
Models, Neurological , Neurons , Neurons/physiology , Uncertainty , Ions/metabolism , Membrane Potentials/physiology , Action Potentials/physiology , Humans , Animals , Neuroglia/metabolism , Neuroglia/physiology
20.
Article in English | MEDLINE | ID: mdl-38722722

ABSTRACT

Neural decoding remains a challenging and hot topic in neurocomputing science. Recently, many studies have shown that brain network patterns containing rich spatiotemporal structural information represent the brain's activation under external stimuli. In the traditional approach, brain network features are obtained directly using standard machine learning methods and provided to a classifier to decode external stimuli. However, this approach cannot effectively extract the multidimensional structural information hidden in the brain network. Furthermore, studies on tensors have shown that tensor decomposition models can fully mine the unique spatiotemporal structural characteristics of data with a multidimensional structure. This research proposes a stimulus-constrained Tensor Brain Network (s-TBN) model that combines tensor decomposition with stimulus category-constraint information. The model was verified on real neuroimaging data obtained via magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). Experimental results show that the s-TBN model achieves improvements in accuracy of greater than 11.06% and 18.46% over other methods on the two modality datasets. These results demonstrate the superiority of extracting discriminative characteristics using the s-TBN model, especially for decoding object stimuli with semantic information.
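A sketch of the tensor ingredient only (not the proposed s-TBN model): stack per-trial connectivity matrices into a nodes × nodes × trials tensor and extract low-rank factors with a CP (PARAFAC) decomposition, whose trial-mode factors could then feed a classifier. The synthetic tensor and rank are assumptions.

```python
# CP (PARAFAC) decomposition of a nodes x nodes x trials connectivity tensor
# using tensorly; the trial-mode factors could serve as classifier features.
# The synthetic tensor and rank are assumptions, not the s-TBN model.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
n_nodes, n_trials, rank = 30, 120, 4
patterns = rng.normal(size=(rank, n_nodes))                 # latent network patterns
loadings = rng.random((n_trials, rank))                     # per-trial expression
tensor = np.einsum("tk,ki,kj->ijt", loadings, patterns, patterns)
tensor += 0.1 * rng.normal(size=tensor.shape)               # measurement noise

weights, factors = parafac(tl.tensor(tensor), rank=rank)
node_f1, node_f2, trial_f = factors                         # spatial, spatial, trial modes
print("trial-factor shape:", trial_f.shape)                 # (n_trials, rank)
```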


Subject(s)
Algorithms , Machine Learning , Magnetic Resonance Imaging , Magnetoencephalography , Humans , Magnetoencephalography/methods , Brain/physiology , Brain/diagnostic imaging , Neural Networks, Computer , Models, Neurological , Adult , Male , Reproducibility of Results , Female , Nerve Net/physiology , Nerve Net/diagnostic imaging , Young Adult