Results 1 - 20 of 43
1.
Front Comput Neurosci ; 11: 119, 2017.
Article in English | MEDLINE | ID: mdl-29375358

ABSTRACT

Spike timing-dependent plasticity (STDP) has been found to assume many different forms. The classic STDP curve, with one potentiating and one depressing window, is only one of many possible curves that describe synaptic learning through the STDP mechanism. It has been shown experimentally that STDP curves may contain multiple LTP and LTD windows of variable width, and even inverted windows. The underlying STDP mechanism that is capable of producing such an extensive, and apparently incompatible, range of learning curves is still under investigation. In this paper, it is shown that STDP originates from a combination of two dynamic Hebbian cross-correlations of local activity at the synapse. The first correlation, between the presynaptic activity and the local postsynaptic activity, is a robust and reliable indicator of the discrepancy between the activities of the presynaptic and postsynaptic neurons. The second correlation, between the local postsynaptic activity and the dendritic activity, is a good indicator of the match between local synaptic and dendritic activity. We show that this simple time-independent learning rule can give rise to many forms of the STDP learning curve. The rule regulates synaptic strength without the need for spike matching or other supervisory learning mechanisms. Local differences in dendritic activity at the synapse greatly affect the cross-correlation difference, which determines the relative contributions of different neural activity sources. Dendritic activity due to nearby synapses, forward- and back-propagating action potentials, and inhibitory synapses dynamically modifies the local activity at the synapse and hence the resulting STDP learning rule. Furthermore, the dynamic Hebbian learning rule ensures that the resulting synaptic strength is dynamically stable and that interactions between synapses do not result in local instabilities. The rule clearly demonstrates that synapses function as independent localized computational entities, each contributing to the global activity not in a simply linear fashion, but in a manner appropriate to achieve local and global stability of the neuron and the entire dendritic structure.
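
To make the rule concrete, the following sketch (with illustrative time constants, traces, and learning rate that are not taken from the paper) shows how a weight update could be driven by the difference of two running correlations: presynaptic activity with local postsynaptic activity, and local postsynaptic activity with dendritic activity.

```python
import numpy as np

# One pre-post spike pairing; all traces are simple low-pass filtered spike
# trains, and the dendritic trace is crudely modelled as a slower mixture of
# pre- and postsynaptic events (an assumption for illustration only).
dt, T = 0.1, 200.0                                   # ms
steps = int(T / dt)
tau_pre, tau_post, tau_dend = 20.0, 20.0, 50.0       # trace time constants (ms)
eta = 1e-3                                           # learning rate

pre_spikes = np.zeros(steps);  pre_spikes[int(50 / dt)] = 1.0
post_spikes = np.zeros(steps); post_spikes[int(60 / dt)] = 1.0   # post 10 ms later

a_pre = a_post = a_dend = 0.0
w = 0.5
for t in range(steps):
    a_pre  += dt * (-a_pre / tau_pre) + pre_spikes[t]
    a_post += dt * (-a_post / tau_post) + post_spikes[t]
    a_dend += dt * (-a_dend / tau_dend) + 0.5 * (pre_spikes[t] + post_spikes[t])

    # Weight change driven by the difference of the two local correlations.
    dw = eta * (a_pre * a_post - a_post * a_dend)
    w = float(np.clip(w + dt * dw, 0.0, 1.0))

print(f"weight after one pre-post pairing: {w:.4f}")
```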

3.
Front Comput Neurosci ; 9: 141, 2015.
Article in English | MEDLINE | ID: mdl-26635594

ABSTRACT

Oscillations in network activity are ubiquitous in the brain and are involved in diverse cognitive functions. Oscillation characteristics, such as power, frequency, and temporal structure, depend on both network connectivity and intrinsic cellular properties, such as ion channel composition. An important class of channels with key roles in regulating cell excitability is the h-channels. The h-current (Ih) is a slow, hyperpolarization-activated, depolarizing current that contributes to neuronal resonance and membrane potential. The impact of Ih on network oscillations, however, remains poorly understood. To elucidate the network effects of Ih, we used a computational model of a generic oscillatory neuronal network consisting of inhibitory and excitatory cells that were externally driven by excitatory action potentials and sustained depolarizing currents. We found that Ih increased the oscillation frequency and, in combination with external action potentials representing input from areas outside the network, strongly decreased the synchrony of firing. As a consequence, the oscillation power and the duration of episodes during which the network exhibited high-amplitude oscillations were greatly reduced in the presence of Ih. Our results suggest that modulation of Ih or impaired expression of h-channels, as observed in epilepsy, could, by affecting oscillation dynamics, markedly alter network-level activity and potentially influence oscillation-dependent cognitive processes such as learning, memory, and attention.
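
As a point of reference for readers unfamiliar with Ih, the sketch below implements a generic, textbook-style hyperpolarization-activated current in a single compartment; the kinetics, conductance, and voltages are illustrative assumptions, not the parameters of the network model used in the study.

```python
import numpy as np

g_h, E_h = 0.1, -30.0        # max conductance (mS/cm^2) and reversal (mV), assumed

def m_inf(v):
    """Steady-state activation: increases with hyperpolarization."""
    return 1.0 / (1.0 + np.exp((v + 80.0) / 6.0))

def tau_m(v):
    """Slow, voltage-dependent activation time constant (ms)."""
    return 50.0 + 1000.0 / (np.exp((v + 70.0) / 20.0) + np.exp(-(v + 70.0) / 20.0))

def ih_step(v, m, dt):
    """Advance the gating variable one time step; return (I_h, m)."""
    m += dt * (m_inf(v) - m) / tau_m(v)
    return g_h * m * (v - E_h), m

# Response to a hyperpolarizing step from -65 mV to -85 mV, held for 1 s.
dt, m = 0.1, m_inf(-65.0)
for _ in range(int(1000.0 / dt)):
    I_h, m = ih_step(-85.0, m, dt)
print(f"I_h after 1 s at -85 mV: {I_h:.2f} (negative = inward, depolarizing)")
```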

4.
Front Neuroanat ; 8: 115, 2014.
Article in English | MEDLINE | ID: mdl-25360087

ABSTRACT

After brain lesions caused by tumors or stroke, or after lasting loss of input (deafferentation), inter- and intra-regional brain networks respond with complex changes in topology. Not only areas directly affected by the lesion but also regions remote from the lesion may alter their connectivity, a phenomenon known as diaschisis. Changes in network topology after brain lesions can lead to cognitive decline and increasing functional disability. However, the principles governing changes in network topology are poorly understood. Here, we investigated whether homeostatic structural plasticity can account for changes in network topology after deafferentation and brain lesions. Homeostatic structural plasticity postulates that neurons aim to maintain a desired level of electrical activity by deleting synapses when neuronal activity is too high and by providing new synaptic contacts when activity is too low. Using our Model of Structural Plasticity, we explored how local changes in connectivity induced by a focal loss of input affected global network topology. In accordance with experimental and clinical data, we found that after partial deafferentation the network as a whole became more random, although it maintained its small-world topology, while deafferentated neurons increased their betweenness centrality as they rewired and returned to the homeostatic range of activity. Furthermore, deafferentated neurons increased their global but decreased their local efficiency and acquired longer-tailed degree distributions, indicating the emergence of hub neurons. Together, our results suggest that homeostatic structural plasticity may be an important driving force for lesion-induced network reorganization and that the increase in betweenness centrality of deafferentated areas may serve as a biomarker for brain repair.
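
A minimal sketch of one rewiring step under such a rule is given below; the set-point, rates, random merging of free elements, and the omission of synapse deletion for over-active neurons are simplifying assumptions, not the full Model of Structural Plasticity.

```python
import numpy as np

# Neurons below the activity set-point offer new synaptic elements; free
# axonal (bouton) and dendritic (spine) elements are merged at random.
rng = np.random.default_rng(0)
N, set_point, nu = 100, 0.7, 0.5

activity = rng.uniform(0.0, 1.0, N)          # stand-in for filtered firing rates
connections = np.zeros((N, N), dtype=int)    # synapse counts, row = presynaptic

def new_elements(a):
    """Elements offered this step: proportional to the activity deficit,
    zero for neurons at or above the set-point."""
    return np.clip(np.round(nu * (set_point - a) * 10.0), 0, None).astype(int)

n_new = new_elements(activity)               # same count used for boutons and spines
pre_pool = np.repeat(np.arange(N), n_new)    # free axonal elements
post_pool = np.repeat(np.arange(N), n_new)   # free dendritic elements
rng.shuffle(pre_pool)
rng.shuffle(post_pool)
for i, j in zip(pre_pool, post_pool):
    if i != j:                               # no autapses in this sketch
        connections[i, j] += 1

print("synapses formed this step:", int(connections.sum()))
```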

5.
PLoS One ; 9(7): e100899, 2014.
Article in English | MEDLINE | ID: mdl-25007325

ABSTRACT

Oscillations in electrical activity are a characteristic feature of many brain networks and display a wide variety of temporal patterns. A network may express a single oscillation frequency, alternate between two or more distinct frequencies, or continually express multiple frequencies. In addition, oscillation amplitude may fluctuate over time. The origin of this complex repertoire of activity remains unclear. Different cortical layers often produce distinct oscillation frequencies. To investigate whether interactions between different networks could contribute to the variety of oscillation patterns, we created two model networks, one that on its own generated a relatively slow frequency (20 Hz; slow network) and one that generated a faster frequency (32 Hz; fast network). Taking either the slow or the fast network as the source network, projecting connections to the other (target) network, we systematically investigated how the type and strength of inter-network connections affected target network activity. For high inter-network connection strengths, we found that the slow network was more effective at completely imposing its rhythm on the fast network than the other way around. The strongest entrainment occurred when excitatory cells of the slow network projected to excitatory or inhibitory cells of the fast network. The fast network most strongly imposed its rhythm on the slow network when its excitatory cells projected to excitatory cells of the slow network. Interestingly, for lower inter-network connection strengths, multiple frequencies coexisted in the target network. Just as observed in rat prefrontal cortex, the target network could express multiple frequencies at the same time, alternate between two distinct oscillation frequencies, or express a single frequency with alternating episodes of high and low amplitude. Together, our results suggest that input from other oscillating networks may markedly alter a network's frequency spectrum and may partly be responsible for the rich repertoire of temporal oscillation patterns observed in the brain.
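
The sketch below is only a toy phase-oscillator illustration of the source-target setup (not the conductance-based spiking networks of the study): a 20 Hz unit drives a 32 Hz unit, and the coupling strength k determines whether the two frequencies coexist in the target or the target becomes entrained.

```python
import numpy as np

dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)
f_slow, f_fast, k = 20.0, 32.0, 30.0     # Hz, Hz, coupling strength (arbitrary)

phi_s, phi_f = 0.0, 0.0
target = np.zeros_like(t)
for i in range(t.size):
    phi_s += dt * 2 * np.pi * f_slow                          # source runs freely
    phi_f += dt * (2 * np.pi * f_fast + k * np.sin(phi_s - phi_f))  # driven target
    target[i] = np.cos(phi_f)

# Which frequency dominates the target depends on k: weak coupling leaves the
# fast rhythm (plus combination frequencies), strong coupling entrains it.
freqs = np.fft.rfftfreq(t.size, dt)
power = np.abs(np.fft.rfft(target - target.mean())) ** 2
print("dominant target frequency (Hz):", round(float(freqs[np.argmax(power)]), 1))
```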


Subject(s)
Computer Simulation , Models, Biological , Prefrontal Cortex/physiology , Action Potentials , Animals , Brain Waves , Connectome , Humans , Nerve Net , Rats
6.
Front Neuroanat ; 8: 54, 2014.
Article in English | MEDLINE | ID: mdl-25009472

ABSTRACT

Neuronal information processing in cortical networks critically depends on the organization of synaptic connectivity. Synaptic connections can form when axons and dendrites come into close proximity of each other. The spatial innervation of neuronal arborizations can be described by their axonal and dendritic density fields. Recently, we showed that potential locations of synapses between neurons can be estimated from their overlapping axonal and dendritic density fields. However, deriving density fields from single-slice neuronal reconstructions is hampered by incompleteness because of cut branches. Here, we describe a method for recovering the lost axonal and dendritic mass. This so-called completion method is based on an estimation of the mass inside the slice and an extrapolation to the space outside the slice, assuming axial symmetry in the mass distribution. We validated the method using a set of neurons generated with our NETMORPH simulator. The model-generated neurons were artificially sliced and subsequently recovered by the completion method. Depending on slice thickness and arbor extent, branches that have lost their outside parents (orphan branches) may occur inside the slice. No longer connected to the contiguous structure of the sliced neuron, orphan branches result in an underestimation of neurite mass. For 300 µm thick slices, however, the validation showed a full recovery of dendritic and an almost full recovery of axonal mass. The completion method was applied to three experimental data sets of reconstructed rat cortical L2/3 pyramidal neurons. The results showed that in 300 µm thick slices intracortical axons lost about 50% and dendrites about 16% of their mass. The completion method can be applied to single-slice reconstructions as long as axial symmetry can be assumed in the mass distribution. This opens up the possibility of using incomplete neuronal reconstructions from open-access databases to determine population mean mass density fields.
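
The extrapolation step can be illustrated as follows, under the stated assumption of axial (rotational) symmetry about the vertical axis through the soma: mass measured inside the slice is weighted by the inverse of the fraction of each horizontal circle that lies between the cut planes. The geometry, binning, and synthetic test data are illustrative, not the published implementation.

```python
import numpy as np

def in_slice_fraction(r, half_thickness):
    """Fraction of a horizontal circle of radius r (centred on the axis through
    the soma) that lies between the two cut planes at y = +/- half_thickness."""
    if r <= half_thickness:
        return 1.0
    return 2.0 * np.arcsin(half_thickness / r) / np.pi

def complete_mass(points, mass, half_thickness):
    """points: (N, 3) positions of in-slice neurite mass samples (x, y, z),
    slice bounded by |y| <= half_thickness; mass: per-sample mass."""
    r = np.hypot(points[:, 0], points[:, 1])      # radial distance from the axis
    frac = np.array([in_slice_fraction(ri, half_thickness) for ri in r])
    return np.sum(mass / frac)                    # inverse-fraction weighting

# Synthetic check: a ring of unit masses at r = 200 µm, cut by a 300 µm slice.
theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
pts = np.c_[200 * np.cos(theta), 200 * np.sin(theta), np.zeros_like(theta)]
inside = np.abs(pts[:, 1]) <= 150.0
est = complete_mass(pts[inside], np.ones(inside.sum()), 150.0)
print(f"true mass 1000, recovered ~ {est:.0f}")
```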

7.
Article in English | MEDLINE | ID: mdl-24744727

ABSTRACT

In networks with small-world topology, which are characterized by a high clustering coefficient and a short characteristic path length, information can be transmitted efficiently and at relatively low cost. The brain is composed of small-world networks, and evolution may have optimized brain connectivity for efficient information processing. Despite many studies on the impact of topology on information processing in neuronal networks, little is known about the development of network topology and the emergence of efficient small-world networks. We investigated how a simple growth process that favors short-range connections over long-range connections, in combination with a synapse formation rule that generates homeostasis in post-synaptic firing rates, shapes neuronal network topology. Interestingly, we found that small-world networks benefited from homeostasis by an increase in efficiency, defined as the average of the inverse shortest path lengths in the network. Efficiency particularly increased as small-world networks approached the desired level of electrical activity. Ultimately, homeostatic small-world networks became almost as efficient as random networks. The increase in efficiency was caused by the emergent property of the homeostatic growth process that neurons started forming more long-range connections, albeit at a low rate, when their electrical activity was close to the homeostatic set-point. Although global network topology continued to change when neuronal activities were around the homeostatic equilibrium, the small-world property of the network was maintained over the entire course of development. Our results may help explain how complex systems such as the brain could set up an efficient network topology in a self-organizing manner. Insights from our work may also lead to novel techniques for constructing large-scale neuronal networks by self-organization.
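
The efficiency measure can be computed directly with networkx, as in the sketch below; the Watts-Strogatz and random example graphs and their parameters are arbitrary stand-ins for the grown networks.

```python
import networkx as nx

# Global efficiency = average of the inverse shortest-path lengths over all
# node pairs; here compared between a small-world graph and an edge-matched
# random graph (parameters are arbitrary illustrations).
n, k, p = 200, 6, 0.05
small_world = nx.watts_strogatz_graph(n, k, p, seed=0)
random_net = nx.gnm_random_graph(n, small_world.number_of_edges(), seed=0)

for name, g in [("small-world", small_world), ("random", random_net)]:
    print(f"{name:12s} efficiency = {nx.global_efficiency(g):.3f}, "
          f"clustering = {nx.average_clustering(g):.3f}")
```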

8.
J Neurophysiol ; 112(2): 287-99, 2014 Jul 15.
Article in English | MEDLINE | ID: mdl-24760781

ABSTRACT

Synaptic plasticity rules change during development: while hippocampal synapses can be potentiated by a single action potential pairing protocol in young neurons, mature neurons require burst firing to induce synaptic potentiation. An essential component of spike timing-dependent plasticity is the backpropagating action potential (BAP). BAPs along the dendrites can be modulated by morphology and ion channel composition, both of which change during late postnatal development. However, it is unclear whether these dendritic changes can explain the developmental changes in synaptic plasticity induction rules. Here, we show that tonic GABAergic inhibition regulates dendritic action potential backpropagation in adolescent, but not preadolescent, CA1 pyramidal neurons. These developmental changes in tonic inhibition also altered the induction threshold for spike timing-dependent plasticity in adolescent neurons. This GABAergic regulatory effect on backpropagation is restricted to distal regions of apical dendrites (>200 µm) and mediated by α5-containing GABA(A) receptors. Direct dendritic recordings demonstrate α5-mediated tonic GABA(A) currents in adolescent neurons, which can modulate BAPs. These developmental modulations in dendritic excitability could not be explained by concurrent changes in dendritic morphology. To explain our data, model simulations suggest either a distally increasing or a localized distal expression of dendritic α5 tonic inhibition in mature neurons. Overall, our results demonstrate that dendritic integration and plasticity in more mature dendrites are significantly altered by tonic α5 inhibition in a dendritic region-specific and developmentally regulated manner.


Subject(s)
Action Potentials , CA1 Region, Hippocampal/physiology , Dendrites/physiology , GABA Antagonists/pharmacology , Neuronal Plasticity , Pyramidal Cells/physiology , Animals , CA1 Region, Hippocampal/cytology , CA1 Region, Hippocampal/growth & development , CA1 Region, Hippocampal/metabolism , Dendrites/drug effects , Dendrites/metabolism , Dendrites/ultrastructure , Excitatory Postsynaptic Potentials , GABA Agonists/pharmacology , Inhibitory Postsynaptic Potentials , Male , Pyramidal Cells/drug effects , Pyramidal Cells/growth & development , Pyramidal Cells/metabolism , Rats , Rats, Wistar , Receptors, GABA-A/metabolism
9.
PLoS One ; 9(1): e86526, 2014.
Article in English | MEDLINE | ID: mdl-24489738

ABSTRACT

Neuronal signal integration and information processing in cortical neuronal networks critically depend on the organization of synaptic connectivity. Because of the challenges involved in measuring a large number of neurons, synaptic connectivity is difficult to determine experimentally. Current computational methods for estimating connectivity typically rely on juxtaposing experimentally available neurons and applying mathematical techniques to compute estimates of neural connectivity. However, since the number of available neurons is very limited, these connectivity estimates may be subject to large uncertainties. Here, we use a morpho-density field approach applied to a vast ensemble of model-generated neurons. A morpho-density field (MDF) describes the distribution of neural mass in the space around the neural soma. The estimated axonal and dendritic MDFs are derived from 100,000 model neurons that are generated by a stochastic phenomenological model of neurite outgrowth. These MDFs are then used to estimate the connectivity between pairs of neurons as a function of their inter-soma displacement. Compared with other density-field methods, our approach to estimating synaptic connectivity uses fewer restricting assumptions and produces connectivity estimates with a lower standard deviation. An important requirement is that the model-generated neurons accurately reflect the morphology and morphological variation of the experimental neurons used for optimizing the model parameters. As such, the method remains subject to the uncertainties caused by the limited number of neurons in the experimental data set and by the quality of the model and the assumptions used in creating the MDFs and in estimating connectivity. In summary, MDFs are a powerful tool for visualizing the spatial distribution of axonal and dendritic densities, for estimating the number of potential synapses between neurons with a low standard deviation, and for obtaining a greater understanding of the relationship between neural morphology and network connectivity.
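
Conceptually, the MDF-based estimate amounts to integrating the product of the source neuron's axonal density field and the target neuron's dendritic density field placed at the given inter-soma displacement. The sketch below uses isotropic Gaussian fields and an assumed proportionality constant purely for illustration; the study's MDFs are estimated from model-generated neurons.

```python
import numpy as np

grid = np.linspace(-400, 400, 81)                 # µm, 10 µm voxels
dx = grid[1] - grid[0]
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")

def gaussian_density(total_length, sigma, center):
    """Isotropic Gaussian 'mass' field (µm of neurite per µm^3), normalized so
    that it integrates to total_length."""
    r2 = (X - center[0])**2 + (Y - center[1])**2 + (Z - center[2])**2
    rho = np.exp(-r2 / (2 * sigma**2))
    return total_length * rho / (rho.sum() * dx**3)

axon = gaussian_density(total_length=10000.0, sigma=150.0, center=(0, 0, 0))
dend = gaussian_density(total_length=4000.0, sigma=100.0, center=(100, 0, 0))

# Assumed proportionality factor (~ criterion distance squared, in µm^2).
crit_area = 4.0
expected_contacts = crit_area * np.sum(axon * dend) * dx**3
print(f"expected potential contacts ~ {expected_contacts:.2f}")
```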


Subject(s)
Nerve Net/physiology , Neural Networks, Computer , Pyramidal Cells/physiology , Synapses/physiology , Animals , Cell Count , Computer Simulation , Rats , Synaptic Transmission
10.
PLoS One ; 9(2): e86741, 2014.
Article in English | MEDLINE | ID: mdl-24498280

ABSTRACT

Neurons form networks by growing out neurites that synaptically connect to other neurons. During this process, neurites develop complex branched trees. Interestingly, the outgrowth of neurite branches is often accompanied by the simultaneous withdrawal of other branches belonging to the same tree. This apparent competitive outgrowth between branches of the same neuron is relevant for the formation of synaptic connectivity, but the underlying mechanisms are unknown. An essential component of neurites is the cytoskeleton of microtubules, long polymers of tubulin dimers running throughout the entire neurite. To investigate whether competition between neurites can emerge from the dynamics of a resource such as tubulin, we developed a multi-compartmental model of neurite growth. In the model, tubulin is produced in the soma and transported by diffusion and active transport to the growth cones at the tip of the neurites, where it is assembled into microtubules to elongate the neurite. Just as in experimental studies, we find that the outgrowth of a neurite branch can lead to the simultaneous retraction of its neighboring branches. We show that these competitive interactions occur in simple neurite morphologies as well as in complex neurite arborizations and that in developing neurons competition for a growth resource such as tubulin can account for the differential outgrowth of neurite branches. The model predicts that competition between neurite branches decreases with path distance between growth cones, increases with path distance from growth cone to soma, and decreases with a higher rate of active transport. Together, our results suggest that competition between outgrowing neurites can already emerge from relatively simple and basic dynamics of a growth resource. Our findings point to the need to test the model predictions and to determine, by monitoring tubulin concentrations in outgrowing neurons, whether tubulin is the resource for which neurites compete.
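
The competition mechanism can be illustrated with a much-reduced version of such a model: a shared somatic tubulin pool feeds two growth cones, and giving one cone a higher assembly rate makes it elongate while the other retracts, purely through depletion of the shared resource. All rates are illustrative assumptions; the paper's multi-compartmental model additionally includes diffusion and active transport along the neurite, which are collapsed into a single transport rate here.

```python
dt, T = 0.01, 200.0
production = 1.0          # tubulin production in the soma (units/time)
transport = 0.5           # soma-to-cone transport rate constant
k_on = [0.6, 0.2]         # assembly rate at cone 0 vs cone 1 (cone 0 favoured)
k_off = 0.4               # constant disassembly (retraction) rate at each cone

C_soma, C = 0.0, [0.0, 0.0]   # tubulin at the soma and at the two growth cones
L = [20.0, 20.0]              # initial branch lengths

for _ in range(int(T / dt)):
    flux = [transport * (C_soma - c) for c in C]
    C_soma += dt * (production - sum(flux))
    for i in range(2):
        C[i] += dt * (flux[i] - k_on[i] * C[i])
        L[i] += dt * (k_on[i] * C[i] - k_off)    # elongation minus retraction

# With these numbers branch 0 grows while branch 1 retracts, because branch 0
# drains the shared pool and leaves branch 1 below its retraction threshold.
print(f"branch lengths after {T:.0f} time units: {L[0]:.1f}, {L[1]:.1f}")
```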


Subject(s)
Algorithms , Growth Cones/physiology , Models, Neurological , Neurites/physiology , Animals , Cells, Cultured , Computer Simulation , Dendrites/physiology , Humans , Kinetics , Neurons/cytology , Neurons/physiology
11.
PLoS One ; 9(1): e85858, 2014.
Article in English | MEDLINE | ID: mdl-24454938

ABSTRACT

Neuronal signal integration and information processing in cortical networks critically depend on the organization of synaptic connectivity. During development, neurons can form synaptic connections when their axonal and dendritic arborizations come into close proximity of each other. Although many signaling cues are thought to be involved in guiding neuronal extensions, the extent to which accidental appositions between axons and dendrites can already account for synaptic connectivity remains unclear. To investigate this, we generated a local network of cortical L2/3 neurons that grew out independently of each other and were not guided by any extracellular cues. Synapses were formed when axonal and dendritic branches came by chance within a threshold distance of each other. Despite the absence of guidance cues, we found that the emerging synaptic connectivity showed good agreement with available experimental data on the spatial locations of synapses on dendrites and axons, the number of synapses by which neurons are connected, the connection probability between neurons, the distance between connected neurons, and the pattern of synaptic connectivity. The connectivity pattern had a small-world topology but was not scale-free. Together, our results suggest that baseline synaptic connectivity in local cortical circuits may largely result from accidentally overlapping axonal and dendritic branches of independently outgrowing neurons.
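
The two topology statements can be checked with standard graph measures, as in the sketch below; the example graph is an arbitrary stand-in for the generated connectivity, and the small-world index and degree summary are generic measures rather than the exact analyses of the paper.

```python
import networkx as nx
import numpy as np

# (i) Small-worldness: clustering and path length relative to an edge-matched
# random graph. (ii) Degree distribution: a scale-free network would show a
# heavy, power-law-like tail; a narrow spread means no hubs.
g = nx.connected_watts_strogatz_graph(500, 8, 0.1, seed=42)   # stand-in graph
rand = nx.gnm_random_graph(500, g.number_of_edges(), seed=42)
if not nx.is_connected(rand):
    rand = rand.subgraph(max(nx.connected_components(rand), key=len)).copy()

C, C_r = nx.average_clustering(g), nx.average_clustering(rand)
L, L_r = (nx.average_shortest_path_length(x) for x in (g, rand))
print(f"small-world index sigma = (C/C_r)/(L/L_r) = {(C / C_r) / (L / L_r):.2f}")

degrees = np.array([d for _, d in g.degree()])
print("degree mean/max:", degrees.mean(), degrees.max())
```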


Subject(s)
Computer Simulation , Models, Biological , Neurons/physiology , Synapses/physiology , Animals , Cell Shape , Cells, Cultured , Dendrites/physiology , Nerve Net/cytology , Pyramidal Tracts/cytology , Rats , Software
12.
Front Comput Neurosci ; 7: 160, 2013.
Article in English | MEDLINE | ID: mdl-24324430

ABSTRACT

Neurons innervate space by extending axonal and dendritic arborizations. When axons and dendrites come into close proximity of each other, synapses between neurons can be formed. Neurons vary greatly in their morphologies and synaptic connections with other neurons. The size and shape of the arborizations determine the way neurons innervate space. A neuron may therefore be characterized by the spatial distribution of its axonal and dendritic "mass." A population mean "mass" density field of a particular neuron type can be obtained by averaging over the individual variations in neuron geometries. Connectivity in terms of candidate synaptic contacts between neurons can be determined directly on the basis of their arborizations, but also indirectly on the basis of their density fields. To decide when a candidate synapse can be formed, we previously developed a criterion requiring that axonal and dendritic line pieces cross in 3D and have an orthogonal distance smaller than a threshold value. In this paper, we developed a new methodology for applying this criterion to density fields. We show that estimates of the number of contacts between neuron pairs calculated from their density fields are fully consistent with the number of contacts calculated from the actual arborizations. However, the connection probability and the expected number of contacts per connection cannot be calculated directly from density fields, because density fields no longer carry the correlative structure in the spatial distribution of synaptic contacts. Alternatively, these two connectivity measures can be estimated from the expected number of contacts by using empirical mapping functions. The neurons used for the validation studies were generated by our neuron simulator NETMORPH. An example is given of the estimation of average connectivity and Euclidean pre- and postsynaptic distance distributions in a network of neurons represented by their population mean density fields.

13.
PLoS Comput Biol ; 9(10): e1003259, 2013.
Article in English | MEDLINE | ID: mdl-24130472

ABSTRACT

Lasting alterations in sensory input trigger massive structural and functional adaptations in cortical networks. The principles governing these experience-dependent changes are, however, poorly understood. Here, we examine whether a simple rule based on the neurons' need for homeostasis in electrical activity may serve as a driving force for cortical reorganization. According to this rule, a neuron creates new spines and boutons when its level of electrical activity is below a homeostatic set-point and decreases the number of spines and boutons when its activity exceeds this set-point. In addition, neurons need a minimum level of activity to form spines and boutons. Spine and bouton formation depends solely on the neuron's own activity level, and synapses are formed by merging spines and boutons independently of activity. Using a novel computational model, we show that this simple growth rule produces neuron and network changes as observed in the visual cortex after focal retinal lesions. In the model, as in the cortex, the turnover of dendritic spines increased most strongly in the center of the lesion projection zone, while axonal boutons displayed a marked overshoot followed by pruning. Moreover, the decrease in external input was compensated for by the formation of new horizontal connections, which caused a retinotopic remapping. Homeostatic regulation may provide a unifying framework for understanding cortical reorganization, including network repair in degenerative diseases or following focal stroke.
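
One common way to encode this rule, including the minimum-activity requirement, is a growth curve for synaptic elements that is negative below a minimum activity level, peaks in between, and becomes negative again above the set-point; the Gaussian-like shape and parameter values below are illustrative assumptions (whether the curve is zero or negative below the minimum is a modelling choice).

```python
import numpy as np

eta, epsilon, nu = 0.1, 0.7, 1.0     # minimum activity, set-point, growth rate

def element_growth_rate(calcium):
    """dz/dt for synaptic elements (spines or boutons) as a function of the
    neuron's calcium-like activity measure: zero at eta and epsilon, maximal
    in between, negative outside that range."""
    xi = (eta + epsilon) / 2.0                                # peak position
    zeta = (epsilon - eta) / (2.0 * np.sqrt(np.log(2.0)))     # curve width
    return nu * (2.0 * np.exp(-((calcium - xi) / zeta) ** 2) - 1.0)

for a in (0.05, 0.1, 0.4, 0.7, 0.9):
    print(f"activity {a:.2f} -> element growth rate {element_growth_rate(a):+.2f}")
```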


Subject(s)
Axons/physiology , Dendritic Spines/physiology , Models, Neurological , Visual Cortex/cytology , Visual Cortex/physiology , Humans , Neurons/physiology , Optic Nerve/physiology , Retina/physiology
14.
PLoS One ; 7(12): e50189, 2012.
Article in English | MEDLINE | ID: mdl-23227159

ABSTRACT

Short-term plasticity (STP) has been shown to exist extensively in synapses throughout the brain. Its function is more or less clear in the sense that it alters the probability of synaptic transmission at short time scales. However, it is still unclear what effect STP has on the dynamics of neural networks. We show, using a novel dynamic STP model, that short-term depression (STD) can affect the phase of frequency-coded input such that small networks can perform temporal signal summation and determination with high accuracy. We show that this property of STD can readily solve the problem of the ghost frequency, the perceived pitch of a harmonic complex in the absence of the base frequency. Additionally, we demonstrate that this property can explain dynamics in larger networks. By means of two models, one of chopper neurons in the ventral cochlear nucleus and one of a cortical microcircuit with inhibitory Martinotti neurons, we show that the dynamics in these microcircuits can reliably be reproduced using STP. Our model of STP gives important insights into the potential roles of STP in the self-regulation of cortical activity and long-range afferent input in neuronal microcircuits.
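
The paper introduces its own dynamic STP model; purely as a generic point of reference, the sketch below implements the widely used Tsodyks-Markram description of short-term depression, in which each spike consumes a fraction of a recovering resource.

```python
import numpy as np

U, tau_rec = 0.5, 300.0        # release fraction, recovery time constant (ms)

def std_responses(spike_times_ms):
    """Relative synaptic efficacy (U * x) at each presynaptic spike: the
    resource x recovers exponentially between spikes and is depleted by U * x
    at every spike."""
    x, last_t, out = 1.0, None, []
    for t in spike_times_ms:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)   # recovery
        out.append(U * x)      # amount released by this spike
        x -= U * x             # depletion
        last_t = t
    return out

# A 50 Hz train depresses strongly; longer inter-spike intervals recover more.
print([round(r, 3) for r in std_responses(np.arange(0, 200, 20.0))])
```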


Subject(s)
Neuronal Plasticity , Action Potentials , Humans , Models, Biological , Synapses/physiology
15.
PLoS Comput Biol ; 8(8): e1002666, 2012.
Article in English | MEDLINE | ID: mdl-22956901

ABSTRACT

Electrical oscillations in neuronal network activity are ubiquitous in the brain and have been associated with cognition and behavior. Intriguingly, the amplitude of ongoing oscillations, such as measured in EEG recordings, fluctuates irregularly, with episodes of high amplitude alternating with episodes of low amplitude. Despite the widespread occurrence of amplitude fluctuations in many frequency bands and brain regions, the mechanisms by which they are generated are poorly understood. Here, we show that irregular transitions between sub-second episodes of high- and low-amplitude oscillations in the alpha/beta frequency band occur in a generic neuronal network model consisting of interconnected inhibitory and excitatory cells that are externally driven by sustained cholinergic input and trains of action potentials that activate excitatory synapses. In the model, we identify the action potential drive onto inhibitory cells, which represents input from other brain areas and desynchronizes network activity, as crucial for the emergence of amplitude fluctuations. We show that the duration distributions of high-amplitude episodes in the model match those observed in rat prefrontal cortex for oscillations induced by the cholinergic agonist carbachol. Furthermore, the mean duration of high-amplitude episodes varies in a bell-shaped manner with carbachol concentration, just as in mouse hippocampus. Our results suggest that amplitude fluctuations are a general property of oscillatory neuronal networks that can arise through background input from areas external to the network.
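
A typical way to quantify such episodes, sketched below on surrogate data, is to band-pass the signal, take the Hilbert amplitude envelope, and measure how long the envelope stays above a threshold; the band edges, threshold rule, and surrogate signal are illustrative choices rather than the analysis settings of the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0.0, 20.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Surrogate: a 15 Hz rhythm whose amplitude waxes and wanes, plus noise.
signal = (1.0 + np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 15.0 * t)
signal += 0.5 * rng.standard_normal(t.size)

# Band-pass in the alpha/beta range, then take the Hilbert amplitude envelope.
b, a = butter(4, [8.0 / (fs / 2), 30.0 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, signal)))

# High-amplitude episodes: contiguous runs of the envelope above its median.
above = (envelope > np.median(envelope)).astype(int)
starts = np.where(np.diff(above) == 1)[0]
ends = np.where(np.diff(above) == -1)[0]
if ends.size and starts.size and ends[0] < starts[0]:
    ends = ends[1:]                     # drop an episode already running at t = 0
n = min(starts.size, ends.size)
durations = (ends[:n] - starts[:n]) / fs
print(f"{n} high-amplitude episodes, mean duration {durations.mean():.2f} s")
```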


Subject(s)
Nerve Net , Brain/physiology , Carbachol/pharmacology , Electroencephalography , Hippocampus/drug effects , Hippocampus/physiology , Models, Theoretical
16.
PLoS Comput Biol ; 8(6): e1002545, 2012.
Article in English | MEDLINE | ID: mdl-22719238

ABSTRACT

CA1 pyramidal neurons receive hundreds of synaptic inputs at different distances from the soma. Distance-dependent synaptic scaling enables distal and proximal synapses to influence the somatic membrane equally, a phenomenon called "synaptic democracy". How this is established is unclear. The backpropagating action potential (BAP) is hypothesised to provide distance-dependent information to synapses, allowing synaptic strengths to scale accordingly. Experimental measurements show that a BAP evoked by current injection at the soma causes calcium currents in the apical shaft whose amplitudes decay with distance from the soma. However, in vivo action potentials are not induced by somatic current injection but by synaptic inputs along the dendrites, which creates a different excitable state of the dendrites. Due to technical limitations, it is not possible to study experimentally whether distance information can also be provided by synaptically evoked BAPs. We therefore adapted a realistic morphological and electrophysiological model to measure BAP-induced voltage and calcium signals in spines after Schaffer collateral synapse stimulation. We show that peak calcium concentration is highly correlated with soma-synapse distance under a number of physiologically realistic suprathreshold stimulation regimes and for a range of dendritic morphologies. Peak calcium levels also predicted the attenuation of the EPSP across the dendritic tree. Furthermore, we show that peak calcium can be used to set up a synaptic democracy in a homeostatic manner, whereby synapses regulate their strength on the basis of the difference between peak calcium and a uniform target value. We conclude that information derived from synaptically generated BAPs can indicate synapse location and can subsequently be utilised to implement a synaptic democracy.
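
A minimal sketch of the homeostatic idea, with entirely illustrative numbers: each synapse nudges its weight toward the point where its peak calcium matches a uniform target, so synapses seeing a smaller BAP-evoked calcium signal (distal ones) converge to larger weights.

```python
# Homeostatic weight update toward a uniform peak-calcium target. All numbers
# are illustrative; the weight-dependent calcium component is a crude stand-in
# for synaptically triggered (e.g., NMDA-mediated) calcium influx.
target_ca, eta = 1.0, 0.1
bap_ca = {"proximal": 0.8, "distal": 0.3}   # BAP-evoked component, decays with distance

weights = {"proximal": 1.0, "distal": 1.0}
for _ in range(200):
    for name, w in weights.items():
        peak_ca = bap_ca[name] + 0.5 * w            # total peak calcium at the spine
        weights[name] = max(w + eta * (target_ca - peak_ca), 0.0)

print(weights)   # the distal synapse converges to a larger weight than the proximal one
```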


Subject(s)
Calcium Signaling/physiology , Dendrites/physiology , Models, Neurological , Synapses/physiology , Animals , CA1 Region, Hippocampal/cytology , CA1 Region, Hippocampal/physiology , Computational Biology , Computer Simulation , Evoked Potentials , Male , Rats , Rats, Wistar , Receptors, N-Methyl-D-Aspartate/physiology , alpha-Amino-3-hydroxy-5-methyl-4-isoxazolepropionic Acid/metabolism
17.
PLoS One ; 6(10): e26586, 2011.
Article in English | MEDLINE | ID: mdl-22066001

ABSTRACT

The hippocampus is critical for a wide range of emotional and cognitive behaviors. Here, we performed the first genome-wide search for genes influencing hippocampal oscillations. We measured local field potentials (LFPs) using 64-channel multi-electrode arrays in acute hippocampal slices of 29 BXD recombinant inbred mouse strains. Spontaneous activity and carbachol-induced fast network oscillations were analyzed with spectral and cross-correlation methods, and the resulting traits were used for mapping quantitative trait loci (QTLs), i.e., regions on the genome that may influence hippocampal function. Using genome-wide hippocampal gene expression data, we narrowed the QTLs down to eight candidate genes, including Plcb1, which encodes a phospholipase known to influence hippocampal oscillations. We also identified two genes coding for calcium channels, Cacna1b and Cacna1e, which mediate presynaptic transmitter release and had not previously been shown to regulate hippocampal network activity. Furthermore, we showed that the amplitude of the hippocampal oscillations is genetically correlated with hippocampal volume and several measures of novel environment exploration.
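
As an illustration of the spectral phenotyping step, the sketch below estimates the power spectrum of a surrogate LFP trace with Welch's method and extracts peak frequency and band power as traits; the surrogate data, frequency band, and trait definitions are assumptions, and the study additionally used cross-correlation measures.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(3)
lfp = np.sin(2 * np.pi * 30.0 * t) + rng.standard_normal(t.size)  # 30 Hz + noise

# Welch power spectrum; peak frequency and band power serve as example traits.
f, pxx = welch(lfp, fs=fs, nperseg=2048)
band = (f >= 20.0) & (f <= 80.0)                 # example fast-oscillation band
peak_freq = f[band][np.argmax(pxx[band])]
band_power = pxx[band].sum() * (f[1] - f[0])
print(f"peak frequency {peak_freq:.1f} Hz, band power {band_power:.3f}")
```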


Subject(s)
Genetic Association Studies , Hippocampus/physiology , Action Potentials/drug effects , Action Potentials/genetics , Animals , Carbachol/pharmacology , Cluster Analysis , Electrodes , Gene Expression Regulation/drug effects , Hippocampus/drug effects , In Vitro Techniques , Inheritance Patterns/drug effects , Inheritance Patterns/genetics , Locomotion/drug effects , Locomotion/genetics , Mice , Mice, Inbred Strains , Nerve Net/drug effects , Nerve Net/physiology , Organ Size/drug effects , Organ Size/genetics , Quantitative Trait Loci/drug effects , Quantitative Trait Loci/genetics , Quantitative Trait, Heritable
18.
Nat Rev Neurosci ; 12(6): 311-26, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21587288

ABSTRACT

The development of the nervous system is an extremely complex and dynamic process. Through the continuous interplay of genetic information and changing intra- and extracellular environments, the nervous system constructs itself from precursor cells that divide and form neurons, which migrate, differentiate and establish synaptic connections. Our understanding of neural development can be greatly assisted by mathematical and computational modelling, because it allows us to bridge the gap between system-level dynamics and the lower level cellular and molecular processes. This Review shows the potential of theoretical models to examine many aspects of neural development.


Subject(s)
Brain/embryology , Models, Neurological , Nerve Net/embryology , Neurons/physiology , Neurulation/physiology , Animals , Cell Differentiation/physiology , Cell Movement/physiology , Cell Proliferation
19.
J Neurosci Methods ; 195(2): 185-93, 2011 Feb 15.
Article in English | MEDLINE | ID: mdl-21167201

ABSTRACT

The shape, structure, and connectivity of nerve cells are important aspects of neuronal function. Genetic and epigenetic factors that alter neuronal morphology or the synaptic localization of pre- and postsynaptic proteins contribute significantly to neuronal output and may underlie clinical states. To assess the impact of individual genes and disease-causing mutations on neuronal morphology, reliable methods are needed. Unfortunately, manual analysis of immunofluorescence images of neurons to quantify neuronal shape and synapse number, size, and distribution is labor-intensive, time-consuming, and subject to human bias and error. We have developed an automated image analysis routine using steerable filters and deconvolutions to automatically analyze dendrite and synapse characteristics in immunofluorescence images. Our approach reports not only dendrite morphology and synapse size and number, but also synaptic vesicle density and the synaptic accumulation of proteins as a function of distance from the soma, as consistently as expert observers while reducing analysis time considerably. In addition, the routine can be used to detect and quantify a wide range of neuronal organelles and is capable of batch analysis of a large number of images, enabling high-throughput analysis.
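
The steerable-filter idea at the core of such a routine can be sketched as follows: the response of a first-order Gaussian derivative filter at any orientation is a linear combination of its responses along x and y, and taking the maximum over orientations highlights elongated structures. The test image, filter order, and scale are illustrative; the published routine combines this with deconvolution and further feature extraction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steered_response(img, sigma, theta):
    """First-derivative-of-Gaussian response steered to angle theta (radians):
    a linear combination of the x- and y-derivative responses."""
    gy = gaussian_filter(img, sigma, order=(1, 0))   # derivative along rows
    gx = gaussian_filter(img, sigma, order=(0, 1))   # derivative along columns
    return np.cos(theta) * gx + np.sin(theta) * gy

# Toy image: a bright horizontal line (a stand-in for a dendrite segment).
img = np.zeros((64, 64))
img[32, 10:54] = 1.0

# Maximum absolute response over orientations highlights oriented structures.
thetas = np.linspace(0.0, np.pi, 12, endpoint=False)
stack = np.stack([np.abs(steered_response(img, 2.0, th)) for th in thetas])
orientation_energy = stack.max(axis=0)
print("max oriented response:", float(orientation_energy.max()))
```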


Subject(s)
Electronic Data Processing/methods , Neurons/cytology , Neurons/physiology , Software , Synapses/physiology , Animals , Cells, Cultured , Dendrites/metabolism , Diagnostic Imaging , Disks Large Homolog 4 Protein , Guanylate Kinases , Hippocampus/cytology , Intracellular Signaling Peptides and Proteins/metabolism , Lysine/analogs & derivatives , Lysine/metabolism , Lysosomal Membrane Proteins/metabolism , Membrane Proteins/metabolism , Mice , Mice, Mutant Strains , Microtubule-Associated Proteins/metabolism , Munc18 Proteins/genetics , Neurites/metabolism , Neuropeptide Y/metabolism , Receptors, Transferrin/metabolism , Synaptic Vesicles/metabolism , Time Factors , Vesicle-Associated Membrane Protein 2/metabolism
20.
Front Comput Neurosci ; 4: 148, 2010.
Article in English | MEDLINE | ID: mdl-21160548

ABSTRACT

Neurons make synaptic connections at locations where axons and dendrites are sufficiently close in space. Typically, the required proximity is based on the dimensions of dendritic spines and axonal boutons. Based on this principle, one can search for such locations in networks formed by reconstructed or computer-generated neurons. Candidate synapses are then located where axons and dendrites are within a given criterion distance from each other. Both experimentally reconstructed and model-generated neurons are usually represented morphologically by piecewise-linear structures (line pieces or cylinders). Proximity tests are then performed on all pairs of line pieces from both axonal and dendritic branches. Applying only a test on the distance between line pieces may result in local clusters of synaptic sites when more than one pair of nearby line pieces from axonal and dendritic branches is sufficiently close, and may introduce a dependency on the length scale of the individual line pieces. The present paper describes a new algorithm for defining locations of candidate synapses which is based on the requirement that a pair of line pieces cross in 3D, while the orthogonal distance between the line pieces is subjected to the distance criterion for testing proximity.
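
A sketch of this test for a single pair of line pieces is given below, using the standard parametric closest-approach formulas for two segments in 3D; the criterion distance is an arbitrary example value.

```python
import numpy as np

def candidate_synapse(p1, p2, q1, q2, criterion=2.0):
    """p1->p2: axonal line piece, q1->q2: dendritic line piece (3D points).
    Returns (accepted, orthogonal_distance); accepted requires that the points
    of closest approach lie in the interior of both pieces (the crossing
    requirement) and that the orthogonal distance is below the criterion."""
    p1, p2, q1, q2 = map(np.asarray, (p1, p2, q1, q2))
    u, v, w0 = p2 - p1, q2 - q1, p1 - q1
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b
    if denom < 1e-12:                       # parallel pieces: no crossing
        return False, None
    s = (b * e - c * d) / denom             # position along the axonal piece
    t = (a * e - b * d) / denom             # position along the dendritic piece
    if not (0.0 < s < 1.0 and 0.0 < t < 1.0):
        return False, None                  # closest approach outside a piece
    dist = np.linalg.norm((p1 + s * u) - (q1 + t * v))
    return dist <= criterion, dist

ok, dist = candidate_synapse((0, 0, 0), (10, 0, 0), (5, -3, 1), (5, 3, 1))
print(ok, dist)     # True, orthogonal distance 1.0
```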
