Results 1 - 7 of 7
1.
Neuroinformatics ; 21(2): 375-406, 2023 04.
Article in English | MEDLINE | ID: mdl-36959372

ABSTRACT

Neural networks, composed of many neurons and governed by complex interactions between them, are a widely accepted formalism for modeling and exploring global dynamics and emergent properties in brain systems. In the past decades, experimental evidence of computationally relevant neuron-astrocyte interactions, as well as of the astrocytic modulation of global neural dynamics, has accumulated. These findings motivated advances in computational glioscience and inspired several models integrating mechanisms of neuron-astrocyte interactions into the standard neural network formalism. These models were developed to study, for example, synchronization, information transfer, synaptic plasticity, and hyperexcitability, as well as classification tasks and hardware implementations. Here we focus on network models of at least two neurons interacting bidirectionally with at least two astrocytes that include explicitly modeled astrocytic calcium dynamics. In this study, we analyze the evolution of these models and the biophysical, biochemical, cellular, and network mechanisms used to construct them. Based on our analysis, we propose how to systematically describe and categorize interaction schemes between cells in neuron-astrocyte networks. We additionally study the models in view of the existing experimental data and present future perspectives. Our analysis is an important first step toward understanding the astrocytic contribution to brain functions. However, more advances are needed to collect comprehensive data about astrocyte morphology and physiology in vivo and to better integrate them into data-driven computational models. Broadening the discussion of theoretical approaches and expanding the computational tools are necessary to better understand astrocytes' roles in brain functions.
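The models reviewed above all include explicitly modeled astrocytic calcium dynamics. As a minimal, self-contained sketch of what such dynamics look like, the following implements the classic Li-Rinzel description of IP3-receptor-mediated calcium exchange between the cytosol and the endoplasmic reticulum; the parameter values, the fixed IP3 level, and the integration settings are illustrative assumptions, not taken from any one of the reviewed models.

```python
# Li-Rinzel model of IP3-receptor-mediated astrocytic Ca2+ dynamics.
# Concentrations in microM, time in s; parameter values are illustrative.
C0, C1 = 2.0, 0.185          # total free Ca2+, ER-to-cytosol volume ratio
V1, V2, V3 = 6.0, 0.11, 0.9  # max channel flux, leak rate, max SERCA flux
K3 = 0.1                     # SERCA activation constant
D1, D2, D3, D5 = 0.13, 1.049, 0.9434, 0.08234  # IP3R dissociation constants
A2 = 0.2                     # IP3R inactivation rate

def li_rinzel_step(ca, h, ip3, dt):
    """Advance cytosolic Ca2+ and the IP3R gating variable h by one Euler step."""
    ca_er = (C0 - ca) / C1                 # ER Ca2+ from conservation of total Ca2+
    m_inf = ip3 / (ip3 + D1)               # IP3 binding gate (instantaneous)
    n_inf = ca / (ca + D5)                 # Ca2+ activation gate (instantaneous)
    j_chan = C1 * V1 * (m_inf * n_inf * h) ** 3 * (ca_er - ca)  # IP3R channel flux
    j_leak = C1 * V2 * (ca_er - ca)                              # passive ER leak
    j_pump = V3 * ca ** 2 / (K3 ** 2 + ca ** 2)                  # SERCA pump uptake
    q2 = D2 * (ip3 + D1) / (ip3 + D3)
    h_inf = q2 / (q2 + ca)
    tau_h = 1.0 / (A2 * (q2 + ca))
    ca_next = ca + dt * (j_chan + j_leak - j_pump)
    h_next = h + dt * (h_inf - h) / tau_h
    return ca_next, h_next

# Simulate 20 s at a fixed IP3 level; the trajectory stays bounded and positive.
ca, h = 0.1, 0.8
trace = []
for _ in range(200_000):
    ca, h = li_rinzel_step(ca, h, ip3=0.4, dt=1e-4)
    trace.append(ca)
print(min(trace), max(trace))
```

In the full network models reviewed here, this single-cell module would be driven by synaptically triggered IP3 production and would in turn feed back onto synaptic transmission.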


Subject(s)
Astrocytes , Models, Neurological , Astrocytes/physiology , Neurons/physiology , Synapses/physiology , Neural Networks, Computer
2.
Adv Exp Med Biol ; 1359: 87-103, 2022.
Article in English | MEDLINE | ID: mdl-35471536

ABSTRACT

Recent evidence suggests that glial cells take an active role in a number of brain functions that were previously attributed solely to neurons. For example, astrocytes, one type of glial cell, have been shown to promote coordinated activation of neuronal networks, modulate sensory-evoked neuronal network activity, and influence brain state transitions during development. This reinforces the idea that astrocytes not only provide "housekeeping" for the neurons but also play a vital role in supporting and expanding the functions of brain circuits and networks. Despite this accumulated knowledge, the field of computational neuroscience has mostly focused on modeling neuronal functions, ignoring the glial cells and the interactions they have with the neurons. In this chapter, we introduce the biology of neuron-glia interactions, summarize the existing computational models and tools, and emphasize the glial properties that may be important in modeling brain functions in the future.


Subject(s)
Neuroglia , Neurosciences , Astrocytes , Brain/physiology , Neuroglia/physiology , Neurons/physiology
3.
Front Cell Neurosci ; 13: 377, 2019.
Article in English | MEDLINE | ID: mdl-31555093

ABSTRACT

Spontaneous network activity plays a fundamental role in the formation of functional networks during early development. The hallmark of this activity is the recurrent emergence of intensive, time-limited network bursts (NBs) rapidly spreading across the entire dissociated culture in vitro. The main excitatory mediators of NBs are glutamatergic alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPARs) and N-methyl-D-aspartic acid receptors (NMDARs), which exhibit fast and slow ion channel kinetics, respectively. The fast inhibition of the activity is mediated through gamma-aminobutyric acid type A receptors (GABAARs). Although AMPAR, NMDAR, and GABAAR kinetics have been biophysically characterized in detail at the monosynaptic level in a variety of brain areas, the unique features of NBs emerging from the kinetics and complex interplay of these receptors are not well understood. The goal of this study is to analyze the contribution of fast GABAARs to AMPAR- and NMDAR-mediated spontaneous NB activity in dissociated neonatal rat cortical cultures at three weeks in vitro. The networks were probed by both acute and gradual application of each excitatory receptor antagonist, as well as by combinations of acute excitatory and inhibitory receptor antagonists. At the same time, the extracellular network-wide activity was recorded with microelectrode arrays (MEAs). We analyzed characteristic NB measures extracted from NB rate profiles and the distributions of interspike intervals, interburst intervals, and electrode recruitment times, as well as the similarity of spatio-temporal patterns of network activity under different receptor antagonists. We show that NBs were rapidly initiated and recruited, and diversely propagated, by AMPARs, and temporally and spatially maintained by NMDARs. GABAARs reduced the spiking frequency in AMPAR-mediated networks, dampened the termination of NBs in NMDAR-mediated networks, and slowed the recruitment of activity in all networks. Finally, we show characteristic super bursts composed of slow NBs with highly repetitive spatio-temporal patterns in gradually AMPAR-blocked networks. To the best of our knowledge, this study is the first to unravel in detail how the three main mediators of synaptic transmission uniquely shape NB characteristics, such as the initiation, maintenance, recruitment, and termination of NBs in cortical cell cultures in vitro.
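Burst statistics of the kind analyzed here are commonly extracted from spike trains by grouping spikes whose interspike intervals fall below a threshold. The following is a minimal sketch of such interspike-interval burst detection; the 100 ms threshold and 3-spike minimum are illustrative assumptions, not the parameters used in the study.

```python
def detect_bursts(spike_times, max_isi=0.1, min_spikes=3):
    """Group sorted spike times (in s) into bursts of at least min_spikes spikes
    whose consecutive interspike intervals are all <= max_isi.
    Returns a list of (burst_start, burst_end) pairs."""
    bursts, current = [], []
    for t in spike_times:
        if current and t - current[-1] > max_isi:
            # Gap too long: close the running group if it qualifies as a burst.
            if len(current) >= min_spikes:
                bursts.append((current[0], current[-1]))
            current = []
        current.append(t)
    if len(current) >= min_spikes:
        bursts.append((current[0], current[-1]))
    return bursts

# Two bursts separated by a quiescent period, plus one isolated spike.
spikes = [0.00, 0.02, 0.05, 0.09, 1.50, 2.00, 2.03, 2.07]
print(detect_bursts(spikes))  # -> [(0.0, 0.09), (2.0, 2.07)]
```

Network-level burst detection on MEA data would additionally pool spikes across electrodes and track electrode recruitment times, but the per-train grouping logic is the same.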

4.
Front Neuroinform ; 12: 20, 2018.
Article in English | MEDLINE | ID: mdl-29765315

ABSTRACT

The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, thousands of models are available. However, it is rarely possible to reimplement a model based on the information in the original publication, let alone to rerun it, because the model implementations have often not been made publicly available. We evaluate and discuss the comparability of a diverse selection of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. Replicability and reproducibility issues are considered for computational models that are equally diverse, including models of intracellular signal transduction in neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also compare the simulation results with one another to assess whether the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tools used to simulate and develop computational models. Constant improvement in experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, whether for constructing multiscale models or for extending models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, and enabling the progressive building of advanced computational models and tools, can be achieved only by adopting publishing standards that emphasize the replicability and reproducibility of research results.

5.
Front Neuroanat ; 9: 76, 2015.
Article in English | MEDLINE | ID: mdl-26113811

ABSTRACT

We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From this description we derived the expected number of synapses, the node degree, and the effective radius, i.e., the maximal distance at which two neurons are expected to form at least one synapse. We related these quantities to network connectivity described using standard measures from graph theory, such as motif counts, the clustering coefficient, minimal path length, and the small-world coefficient. These measures are used in a neuroscience context to study phenomena ranging from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate the different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible, and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, a property that might affect the network's capacity to exhibit different dynamical regimes.
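The qualitative contrast drawn above can be illustrated numerically: connecting neurons that lie within an effective radius of one another produces many local (clustered) connections, unlike a uniform random network with the same edge density. The following sketch compares the two; the neuron count, radius, unit-square layout, and random seed are illustrative assumptions, not the paper's analytical model.

```python
import random

def clustering_coefficient(adj):
    """Mean local clustering coefficient; adj maps node -> set of neighbours."""
    total, counted = 0.0, 0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering is undefined for degree < 2
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
        counted += 1
    return total / counted if counted else 0.0

def geometric_network(n, radius, rng):
    """Connect points in the unit square whenever they lie within radius."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= radius * radius:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def uniform_network(n, p, rng):
    """Uniform (Erdos-Renyi) random graph with edge probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

rng = random.Random(42)
geo = geometric_network(200, 0.2, rng)
mean_degree = sum(len(s) for s in geo.values()) / 200
uni = uniform_network(200, mean_degree / 199, rng)  # matched edge density
print(clustering_coefficient(geo), clustering_coefficient(uni))
```

The distance-limited network shows markedly higher clustering than the density-matched uniform network, mirroring the small- versus large-effective-radius regimes described in the abstract.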

6.
PLoS One ; 8(7): e69373, 2013.
Article in English | MEDLINE | ID: mdl-23935998

ABSTRACT

The question of how the structure of a neuronal network affects its functionality has gained considerable attention in neuroscience. However, the vast majority of studies on structure-dynamics relationships consider only a few types of network structures and assess limited numbers of structural measures. In this in silico study, we employ a wide diversity of network topologies and search among many candidates for the aspects of structure that have the greatest effect on network excitability. The network activity is simulated using two point-neuron models, where the neurons are activated by noisy fluctuations of the membrane potential and their connections are described by chemical synapse models, and statistics on the number and quality of the emergent network bursts are collected for each network type. We apply a prediction framework to the obtained data in order to identify the most relevant aspects of network structure. In this framework, predictors that use different sets of graph-theoretic measures are trained to estimate activity properties of the networks, such as burst count or burst length. The performances of these predictors are compared with each other. We show that the best prediction of activity properties for networks with a sharp in-degree distribution is obtained when the prediction is based on the clustering coefficient. By contrast, for networks with a broad in-degree distribution, the maximum eigenvalue of the connectivity graph gives the most accurate prediction. The results shown for small ([Formula: see text]) networks hold, with few exceptions, when different neuron models, different choices of neuron population, and different average degrees are applied. We confirm our conclusions using larger ([Formula: see text]) networks as well. Our findings reveal the relevance of different aspects of network structure from the viewpoint of network excitability, and our integrative method could serve as a general framework for structure-dynamics studies in the biosciences.
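The maximum eigenvalue of the connectivity graph, identified above as the best predictor for broad in-degree networks, can be estimated with plain power iteration. This is a generic numerical sketch, not the authors' implementation; it assumes the matrix has a dominant eigenvalue, which can fail for, e.g., bipartite graphs.

```python
def max_eigenvalue(matrix, iters=200):
    """Estimate the leading eigenvalue of a non-negative square matrix by
    power iteration, normalizing by the infinity norm at each step."""
    n = len(matrix)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)  # current eigenvalue estimate
        if lam == 0.0:
            return 0.0  # zero matrix: every eigenvalue is 0
        v = [x / lam for x in w]
    return lam

# Complete graph K4: adjacency eigenvalues are 3 (once) and -1 (three times),
# so power iteration converges to 3.
k4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(max_eigenvalue(k4))  # -> 3.0
```

For the network sizes used in such studies, a library eigensolver would normally be used instead; power iteration is shown only to make the measure concrete.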


Subject(s)
Algorithms , Models, Neurological , Neural Networks, Computer , Animals , Cluster Analysis , Computer Simulation , Humans , Nerve Net/physiology , Neurons/cytology , Neurons/physiology , Synapses/physiology
7.
Article in English | MEDLINE | ID: mdl-21852970

ABSTRACT

Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information-theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance. To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distributions but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator, and a similar analysis is performed on them. To describe the dynamics, network spike trains are simulated using the different network structures, and their bursting behaviors are analyzed. For the simulation of network activity, the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity, as measured by bursting frequency and a set of intraburst measures: the more locally connected networks produce more, and longer, bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and intermediate in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.
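The normalized compression distance used above is defined as NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length of a string. The following sketch approximates it with zlib as the compressor; the example "spike train" strings and the choice of compressor are illustrative assumptions, not the study's data or setup.

```python
import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance with zlib as the compressor:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

rng = random.Random(0)
regular = b"10" * 500                                       # highly ordered train
noisy = bytes(rng.getrandbits(1) + 48 for _ in range(1000))  # random '0'/'1' train
print(ncd(regular, regular), ncd(regular, noisy))
```

The distance between a string and itself is smaller than the distance between an ordered and a random string, which is the property that lets NCD serve as a measure of information diversity across network structures and spike trains.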
