1.
Sci Rep ; 14(1): 10536, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719897

ABSTRACT

Precisely timed and reliably emitted spikes are hypothesized to serve multiple functions, including improving the accuracy and reproducibility of encoding stimuli, memories, or behaviours across trials. When these spikes occur as a repeating sequence, they can be used to encode and decode a time series. Here, we show both analytically and in simulations that the error incurred in approximating a time series with precisely timed and reliably emitted spikes decreases linearly with the number of neurons or spikes used in the decoding. This was verified numerically with synthetically generated patterns of spikes. Further, we found that if spikes were imprecise in their timing or unreliable in their emission, the decrease in decoding error with network size would be sub-linear. However, if the spike precision or spike reliability increased with network size, the error incurred in decoding a time series with sequences of spikes would maintain a linear decrease with network size: the spike precision had to increase linearly with network size, while the probability of spike failure had to decrease with the square root of the network size. Finally, we identified a candidate circuit in which to test this scaling relationship: the repeating sequences of spikes with sub-millisecond precision in area HVC of the zebra finch. This scaling relationship can be tested using both neural and song-spectrogram recordings while taking advantage of the natural fluctuation in HVC network size due to neurogenesis.
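
As a toy check of this scaling claim, the sketch below approximates a smooth target series with N reliably timed spikes filtered through an exponential synaptic kernel, fits a linear decoder by least squares, and prints how the error shrinks as N grows. The kernel, target, and spike placement are illustrative assumptions, not the paper's exact decoding scheme (which derives the 1/N rate analytically).

```python
import numpy as np

dt, T, tau = 1e-3, 1.0, 0.02                # time step (s), duration, synaptic decay
t = np.arange(0.0, T, dt)
target = np.sin(2 * np.pi * 3 * t)           # arbitrary smooth target time series

for N in (16, 32, 64, 128):
    # Precisely timed, reliably emitted spikes tiling the interval.
    spike_times = np.linspace(0.0, T, N, endpoint=False)
    # Each column is an exponential PSP launched at one spike time.
    basis = np.where(t[:, None] >= spike_times[None, :],
                     np.exp(-(t[:, None] - spike_times[None, :]) / tau), 0.0)
    w, *_ = np.linalg.lstsq(basis, target, rcond=None)   # linear decoder
    rmse = np.sqrt(np.mean((basis @ w - target) ** 2))
    print(f"N = {N:4d}  RMSE = {rmse:.5f}")
```

Jittering spike_times or dropping spikes at random before building the basis gives a quick feel for the sub-linear regimes described above.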


Subject(s)
Action Potentials , Models, Neurological , Neurons , Animals , Action Potentials/physiology , Neurons/physiology , Vocalization, Animal/physiology , Reproducibility of Results
2.
Chaos ; 34(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38717399

ABSTRACT

Neuronal activity gives rise to behavior, and behavior influences neuronal dynamics, in a closed-loop control system. Is it possible, then, to find a relationship between the statistical properties of behavior and neuronal dynamics? Measurements of neuronal activity and behavior have suggested a direct relationship between scale-free neuronal and behavioral dynamics. Yet, these studies captured only local dynamics in brain sub-networks. Here, we investigate the relationship between internal dynamics and output statistics in a mathematical model system where we have access to the dynamics of all network units. We train a recurrent neural network (RNN), initialized in a high-dimensional chaotic state, to sustain behavioral states for durations following a power-law distribution, as observed experimentally. Changes in network connectivity due to training affect the internal dynamics of neuronal firing, leading to neuronal avalanche size distributions that approximate power laws over some ranges. Yet, randomizing the changes in network connectivity can leave these power-law features largely unaltered. Specifically, whereas neuronal avalanche duration distributions show some variation between RNNs with trained and randomized decoders, neuronal avalanche size distributions are invariant, in the total population and in output-correlated sub-populations. This is true regardless of whether the randomized decoders preserve power-law distributed behavioral dynamics. This demonstrates that a one-to-one correspondence between the considered statistical features of behavior and neuronal dynamics cannot be established, and that their relationship is non-trivial. Our findings also indicate that statistical properties of the intrinsic dynamics may be preserved, even as the internal state responsible for generating the desired output dynamics is perturbed.
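
The avalanche statistics referred to above are conventionally computed by binning population spikes and treating each run of consecutive non-empty bins as one avalanche. A minimal sketch of that procedure, applied here to synthetic Poisson counts standing in for the RNN's binned activity (the bin width and the stand-in data are analysis assumptions):

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Avalanche sizes: total spikes in each run of consecutive
    non-empty time bins (the standard binning definition)."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

rng = np.random.default_rng(0)
counts = rng.poisson(0.9, size=100_000)      # stand-in for binned population spikes
sizes = avalanche_sizes(counts)
# Crude check of heavy-tailedness: log-log slope of the size histogram.
vals, freqs = np.unique(sizes, return_counts=True)
slope = np.polyfit(np.log(vals), np.log(freqs), 1)[0]
print(f"{len(sizes)} avalanches, log-log histogram slope = {slope:.2f}")
```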


Subject(s)
Models, Neurological , Neurons , Neurons/physiology , Neural Networks, Computer , Nerve Net/physiology , Nonlinear Dynamics , Behavior , Humans , Animals
3.
Chaos ; 34(2)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38377288

ABSTRACT

Real neurons connect to each other non-randomly. These connectivity graphs, together with the dynamics of the neurons and of their connections, can affect the ability of networks to synchronize. How the connectivity of networks of conductance-based neuron models, such as the classical Hodgkin-Huxley model or the Morris-Lecar model, impacts synchronizability remains unknown. One powerful tool for resolving the synchronizability of these networks is the master stability function (MSF). Here, we apply and extend the MSF approach to networks of Morris-Lecar neurons with conductance-based coupling to determine under which parameters and for which graphs the synchronous solutions are stable. We consider connectivity graphs with a constant non-zero row sum, where the MSF approach can be readily extended to conductance-based synapses rather than the better-studied diffusive connectivity case, which primarily applies to gap junction coupling. In this formulation, the synchronous solution is a single, self-coupled, or "autaptic", neuron. We find that the primary parameter determining the stability of the synchronous solution is, unsurprisingly, the reversal potential, as it largely dictates the excitatory or inhibitory nature of a synaptic connection. However, the change between "excitatory" and "inhibitory" synapses is rapid, with only a few millivolts separating stability from instability of the synchronous state for most graphs. We also find that for specific coupling strengths (as measured by the global synaptic conductance), islands of synchronizability can emerge in the MSF for inhibitory connectivity. We verified the stability of these islands by direct simulation of pairs of neurons whose coupling realizes eigenvalues in the matching spectrum.
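
Because the synchronous solution in this formulation is a single autaptic neuron, its dependence on the synaptic reversal potential can be probed with a one-cell simulation, as sketched below. The Morris-Lecar parameters are a standard textbook set and the synaptic gating is a generic sigmoidal open/decay scheme; both are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

# Morris-Lecar parameters (a common "Hopf" parameter set).
C, gL, EL = 20.0, 2.0, -60.0
gCa, ECa, gK, EK = 4.0, 120.0, 8.0, -84.0
V1, V2, V3, V4, phi, I = -1.2, 18.0, 2.0, 30.0, 0.04, 90.0
g_syn, tau_s = 1.0, 10.0                    # self-coupling strength, synaptic decay

def autaptic_spike_count(E_syn, T=2000.0, dt=0.05):
    V, w, s = -40.0, 0.0, 0.0
    spikes = 0
    for _ in range(int(T / dt)):
        m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
        w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
        tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
        dV = (I - gL*(V - EL) - gCa*m_inf*(V - ECa) - gK*w*(V - EK)
              + g_syn*s*(E_syn - V)) / C                   # conductance-based autapse
        dw = phi * (w_inf - w) / tau_w
        ds = 0.5*(1 + np.tanh(V / 5))*(1 - s) - s / tau_s  # gate opens on spikes
        prev, V = V, V + dt*dV
        w, s = w + dt*dw, s + dt*ds
        if prev < 0.0 <= V:                                # upward zero crossing
            spikes += 1
    return spikes

for E_syn in (-80.0, -60.0, -40.0, 0.0):    # inhibitory -> excitatory reversal
    print(f"E_syn = {E_syn:6.1f} mV  spikes = {autaptic_spike_count(E_syn)}")
```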


Subject(s)
Models, Neurological , Neurons , Neurons/physiology , Synaptic Transmission/physiology , Computer Simulation , Synapses/physiology , Action Potentials/physiology , Nerve Net/physiology
4.
Nat Commun ; 14(1): 8522, 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38129411

ABSTRACT

Recalling a salient experience provokes specific behaviors and changes in the physiology or internal state. Relatively little is known about how physiological memories are encoded. We examined the neural substrates of physiological memory by probing CRHPVN neurons of mice, which control the endocrine response to stress. Here we show these cells exhibit contextual memory following exposure to a stimulus with negative or positive valence. Specifically, a negative stimulus invokes a two-factor learning rule that favors an increase in the activity of weak cells during recall. In contrast, the contextual memory of positive valence relies on a one-factor rule to decrease activity of CRHPVN neurons. Finally, the aversive memory in CRHPVN neurons outlasts the behavioral response. These observations provide information about how specific physiological memories of aversive and appetitive experience are represented and demonstrate that behavioral readouts may not accurately reflect physiological changes invoked by the memory of salient experiences.
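
The distinction between the two rule types can be made concrete with a toy update, shown below; this is purely illustrative and not the paper's fitted model. A two-factor rule combines a global valence signal with each cell's own activity, so weak cells gain the most, whereas a one-factor rule depends on activity alone and scales all cells down uniformly.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.1, 1.0, 8)                # baseline activity of 8 model cells
valence = 1.0                               # global "negative experience" factor
two_factor = a + 0.5 * valence * (1.0 - a)  # weak cells are boosted the most
one_factor = a - 0.5 * a                    # uniform multiplicative decrease

print("baseline                      :", a.round(2))
print("aversive recall (two-factor)  :", two_factor.round(2))
print("appetitive recall (one-factor):", one_factor.round(2))
```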


Subject(s)
Corticotropin-Releasing Hormone , Paraventricular Hypothalamic Nucleus , Mice , Animals , Corticotropin-Releasing Hormone/metabolism , Paraventricular Hypothalamic Nucleus/metabolism , Hypothalamus/metabolism , Neurons/metabolism , Stress, Physiological
5.
J Physiol ; 601(15): 3151-3171, 2023 08.
Article in English | MEDLINE | ID: mdl-36223200

ABSTRACT

Electrophysiological recordings can provide detailed information about the dynamical features of single neurons and shed light on their responses to stimuli. Unfortunately, rapidly modelling electrophysiological data to infer network-level behaviours remains challenging. Here, we investigate how modelled single-neuron dynamics lead to network-level responses in the paraventricular nucleus of the hypothalamus (PVN), a critical nucleus for the mammalian stress response. Corticotropin-releasing hormone neurons of the PVN (CRHPVN), which initiate the endocrine response to stress, were recorded using whole-cell current clamp and rapidly and automatically fit to a modified adaptive exponential integrate-and-fire (AdEx) model with particle swarm optimization (PSO). All CRHPVN neurons were accurately fit by the AdEx model with PSO. Multiple sets of parameters were found that reliably reproduced the current-clamp traces of any single neuron. Despite these multiple solutions, dynamical features of the models such as the rheobase, fixed points, and bifurcations were stable across fits. We found that CRHPVN neurons can be divided into two subtypes according to their bifurcation at the onset of firing: CRHPVN-integrators and CRHPVN-resonators. The existence of CRHPVN-resonators was then directly confirmed in a follow-up patch-clamp hyperpolarization protocol, which readily induced post-inhibitory rebound spiking in 33% of patched neurons. We then constructed networks of CRHPVN model neurons to investigate their network-level responses. We found that CRHPVN-resonators maintain baseline firing in networks even when all inputs are inhibitory. The dynamics of a small subset of CRHPVN neurons may thus be critical to maintaining a baseline firing tone in the PVN. KEY POINTS: Corticotropin-releasing hormone neurons (CRHPVN) in the paraventricular nucleus of the hypothalamus act as the final neural controllers of the stress response. We developed a computational modelling platform that uses particle swarm optimization to rapidly and accurately fit biophysical neuron models to patched CRHPVN neurons. A model was fitted to each patched neuron without dynamic clamping or other procedures requiring sophisticated inputs and fitting algorithms; any neuron undergoing standard current-clamp step protocols for a few minutes can be fitted by this procedure. Dynamical analysis of the modelled neurons shows that CRHPVN neurons come in two specific 'flavours': CRHPVN-resonators and CRHPVN-integrators. We directly confirmed the existence of these two classes of CRHPVN neurons in subsequent experiments. Network simulations show that CRHPVN-resonators are critical to maintaining the baseline firing rate of the entire network of CRHPVN neurons, as these cells can fire rebound spikes and bursts in the presence of strong inhibitory synaptic input.
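
A bare-bones version of this fitting loop is sketched below, with the "recorded" trace generated synthetically by the same AdEx model and generic PSO hyperparameters; both are assumptions chosen for illustration, not the paper's protocol, and exact parameter recovery depends on the run.

```python
import numpy as np

def adex_trace(a, b, T=300.0, dt=0.1, I=500.0):
    """Euler-integrated AdEx voltage trace under a current step.
    Only the adaptation parameters (a, b) vary; the rest are fixed."""
    C, gL, EL, VT, DT = 200.0, 12.0, -70.0, -50.0, 2.0   # pF, nS, mV
    tau_w, Vr, Vpeak = 300.0, -58.0, 0.0
    V, w, out = EL, 0.0, []
    for _ in range(int(T / dt)):
        dV = (-gL*(V - EL) + gL*DT*np.exp((V - VT)/DT) - w + I) / C
        dw = (a*(V - EL) - w) / tau_w
        V, w = V + dt*dV, w + dt*dw
        if V >= Vpeak:                       # spike: reset and increment adaptation
            V, w = Vr, w + b
        out.append(V)
    return np.array(out)

target = adex_trace(a=2.0, b=60.0)           # synthetic "patched neuron"
loss = lambda p: np.mean((adex_trace(*p) - target) ** 2)

rng = np.random.default_rng(0)
n, lo, hi = 20, np.array([0.0, 0.0]), np.array([10.0, 200.0])
x = rng.uniform(lo, hi, (n, 2)); v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([loss(p) for p in x])
for _ in range(40):                          # standard global-best PSO update
    g = pbest[np.argmin(pcost)]
    v = 0.7*v + 1.5*rng.random((n, 2))*(pbest - x) + 1.5*rng.random((n, 2))*(g - x)
    x = np.clip(x + v, lo, hi)
    c = np.array([loss(p) for p in x])
    improved = c < pcost
    pbest[improved], pcost[improved] = x[improved], c[improved]
print("best (a, b):", pbest[np.argmin(pcost)].round(2), " true: [2. 60.]")
```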


Subject(s)
Corticotropin-Releasing Hormone , Paraventricular Hypothalamic Nucleus , Corticotropin-Releasing Hormone/metabolism , Hypothalamus/metabolism , Neurons/physiology
6.
Sci Rep ; 12(1): 20720, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36456619

ABSTRACT

Despite great advances in explaining synaptic plasticity and neuron function, a complete understanding of the brain's learning algorithms is still missing. Artificial neural networks provide a powerful learning paradigm through the backpropagation algorithm, which modifies synaptic weights by using feedback connections. Backpropagation requires extensive communication of information back through the layers of a network. This has been argued to be biologically implausible, and it is not clear whether backpropagation can be realized in the brain. Here we suggest that biophotons guided by axons provide a potential channel for backward transmission of information in the brain. Biophotons have been experimentally shown to be produced in the brain, yet their purpose is not understood. We propose that biophotons can propagate from each post-synaptic neuron to its pre-synaptic one to carry the required information backward. To reflect the stochastic character of biophoton emissions, our model includes the stochastic backward transmission of teaching signals. We demonstrate that a three-layered network of neurons can learn the MNIST handwritten-digit classification task using our proposed backpropagation-like algorithm with stochastic photonic feedback. We model realistic restrictions and show that our system still learns the task for low rates of biophoton emission, information-limited (one bit per photon) backward transmission, and in the presence of noise photons. Our results suggest a new functionality for biophotons and provide an alternative mechanism for backward transmission in the brain.
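
A toy reconstruction of this idea is sketched below, on XOR rather than MNIST: the backward channel carries at most one bit per sample (the sign of the output error), and only when a "photon" is emitted with probability p_emit. The architecture, rates, and learning rate are illustrative assumptions, and convergence depends on the emission rate and seed.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])
Xb = np.hstack([X, np.ones((4, 1))])         # bias column on the input
W1 = rng.normal(0, 1, (3, 8))                # input -> hidden
W2 = rng.normal(0, 1, 9)                     # hidden (+ bias) -> output
sig = lambda z: 1 / (1 + np.exp(-z))
p_emit, lr = 0.3, 0.3

for _ in range(30_000):
    h = sig(Xb @ W1)
    hb = np.hstack([h, np.ones((4, 1))])
    out = sig(hb @ W2)
    err = out - y
    # Backward channel: one bit (the error sign) per sample, transmitted
    # only when a photon is emitted; silent samples teach nothing.
    photon = rng.random(4) < p_emit
    fb = np.sign(err) * photon * out * (1 - out)
    W2 -= lr * hb.T @ fb
    d_h = np.outer(fb, W2[:8]) * h * (1 - h) # feedback relayed one layer back
    W1 -= lr * Xb.T @ d_h

h = sig(Xb @ W1)
print("XOR outputs:", sig(np.hstack([h, np.ones((4, 1))]) @ W2).round(2))
```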


Subject(s)
Axons , Brain , Photons , Neurons , Neuronal Plasticity
7.
Cell Rep ; 36(5): 109405, 2021 08 03.
Article in English | MEDLINE | ID: mdl-34348138

ABSTRACT

Very-low-frequency oscillations in microvascular diameter cause fluctuations in oxygen delivery that are important for fueling the brain and for functional imaging. However, little is known about how the brain regulates ongoing oscillations in cerebral blood flow. In mouse and rat cortical brain slice arterioles, we find that selectively enhancing tone is sufficient to recruit a TRPV4-mediated Ca2+ elevation in adjacent astrocyte endfeet. This endfoot Ca2+ signal triggers COX-1-mediated "feedback vasodilators" that limit the extent of evoked vasoconstriction, as well as constrain fictive vasomotion in slices. Astrocyte-Ptgs1 knockdown in vivo increases the power of arteriole oscillations across a broad range of very low frequencies (0.01-0.3 Hz), including ultra-slow vasomotion (∼0.1 Hz). Conversely, clamping astrocyte Ca2+ in vivo reduces the power of vasomotion. These data demonstrate bidirectional communication between arterioles and astrocyte endfeet to regulate oscillatory microvascular activity.


Subject(s)
Arterioles/physiology , Astrocytes/physiology , Cyclooxygenase 1/metabolism , Feedback, Physiological , Stress, Mechanical , TRPV Cation Channels/metabolism , Animals , Calcium/metabolism , Female , Male , Mice, Inbred C57BL , Rats, Sprague-Dawley , Vasoconstriction , Vasodilation
8.
Front Syst Neurosci ; 15: 688517, 2021.
Article in English | MEDLINE | ID: mdl-34290593

ABSTRACT

The human brain constitutes one of the most advanced networks produced by nature, consisting of billions of neurons communicating with each other. However, this communication is not instantaneous: different communication delays occur between neurons in different brain areas. Here, we investigate the impact of these delays by modeling large interacting neural circuits as neural-field systems, which describe the bulk activity of populations of neurons. Using a master stability function analysis combined with numerical simulations, we find that delays (1) may actually stabilize brain dynamics by temporarily preventing the onset of oscillatory and pathologically synchronized dynamics and (2) may enhance or diminish synchronization depending on the underlying eigenvalue spectrum of the connectivity matrix. Real eigenvalues with large magnitudes result in increased synchronizability, while complex eigenvalues with large magnitudes and positive real parts yield a decrease in synchronizability, relative to the instantaneously coupled case. This result applies to networks with fixed, constant delays and is robust to heterogeneous delays. In real brain networks, where the eigenvalues are predominantly real owing to the nearly symmetric nature of the weight matrices, biologically plausible small delays are therefore likely to increase synchronization rather than decrease it.
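
The eigenvalue dependence can be illustrated with scalar variational modes: each connectivity eigenvalue lam contributes a linearized mode x'(t) = -x(t) + g*lam*x(t - tau), a generic neural-field linearization in which the gain g and the overall form are assumptions for illustration. Integrating each mode with and without a delay shows how the growth rate shifts with the eigenvalue:

```python
import numpy as np

def mode_growth(lam, tau, g=1.2, T=100.0, dt=0.01):
    """Crude exponential growth rate of x'(t) = -x + g*lam*x(t - tau),
    estimated from the trajectory's tail (Euler integration of the DDE)."""
    n_hist = max(int(tau / dt), 1)
    x = np.full(n_hist + int(T / dt), 0.01, dtype=complex)  # constant history
    for i in range(n_hist, len(x) - 1):
        x[i + 1] = x[i] + dt * (-x[i] + g * lam * x[i - n_hist])
    return np.log(abs(x[-1] / x[len(x) // 2])) / (T / 2)

for lam in (0.9, -0.9, 0.6 + 0.7j):          # sample connectivity eigenvalues
    inst = mode_growth(lam, tau=1e-6)        # effectively instantaneous coupling
    dly = mode_growth(lam, tau=5.0)
    print(f"lam = {lam}:  growth {inst:+.3f} (no delay) -> {dly:+.3f} (delay)")
```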

9.
Nat Neurosci ; 22(7): 1168-1181, 2019 07.
Article in English | MEDLINE | ID: mdl-31235906

ABSTRACT

The hippocampus is able to rapidly learn incoming information, even if that information is only observed once. Furthermore, this information can be replayed in a compressed format in either forward or reverse modes during sharp wave-ripples (SPW-Rs). We leveraged state-of-the-art techniques in training recurrent spiking networks to demonstrate how primarily interneuron networks can achieve the following: (1) generate internal theta sequences to bind externally elicited spikes in the presence of inhibition from the medial septum; (2) compress learned spike sequences in the form of a SPW-R when septal inhibition is removed; (3) generate and refine high-frequency assemblies during SPW-R-mediated compression; and (4) regulate the inter-SPW interval timing between SPW-Rs in ripple clusters. From the fast timescale of neurons to the slow timescale of behaviors, interneuron networks serve as the scaffolding for one-shot learning by replaying, reversing, refining, and regulating spike sequences.


Subject(s)
CA3 Region, Hippocampal/physiology , Computer Simulation , Interneurons/physiology , Learning/physiology , Neural Networks, Computer , Neuronal Plasticity/physiology , Memory/physiology , Septal Nuclei/physiology , Time Factors
10.
Chaos ; 28(8): 083104, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30180641

ABSTRACT

Low-dimensional yet rich dynamics often emerge in the brain. Examples include oscillations and chaotic dynamics during sleep, epilepsy, and voluntary movement. However, a general mechanism for the emergence of low dimensional dynamics remains elusive. Here, we consider Wilson-Cowan networks and demonstrate through numerical and analytical work that homeostatic regulation of the network firing rates can paradoxically lead to a rich dynamical repertoire. The dynamics include mixed-mode oscillations, mixed-mode chaos, and chaotic synchronization when the homeostatic plasticity operates on a moderately slower time scale than the firing rates. This is true for a single recurrently coupled node, pairs of reciprocally coupled nodes without self-coupling, and networks coupled through experimentally determined weights derived from functional magnetic resonance imaging data. In all cases, the stability of the homeostatic set point is analytically determined or approximated. The dynamics at the network level are directly determined by the behavior of a single node system through synchronization in both oscillatory and non-oscillatory states. Our results demonstrate that rich dynamics can be preserved under homeostatic regulation or even be caused by homeostatic regulation.
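
A minimal sketch of this mechanism: one recurrently coupled Wilson-Cowan node whose input is steered by a slow homeostatic variable toward a set-point rate. The weights, time scales, and set point below are illustrative assumptions; with the homeostatic variable moderately slower than the rates, the node can oscillate around the set point rather than settling on it, which is the paradox described above.

```python
import numpy as np

f = lambda x: 1 / (1 + np.exp(-x))           # population activation function
wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0   # classic-style Wilson-Cowan weights
E, I, h = 0.1, 0.05, 0.0
target, dt, tau_h = 0.3, 0.01, 20.0          # homeostatic set point, slow time scale

trace = []
for _ in range(60_000):
    dE = -E + f(wEE*E - wEI*I + h - 5.0)
    dI = -I + f(wIE*E - wII*I - 8.0)
    dh = (target - E) / tau_h                # slow homeostatic regulation of drive
    E, I, h = E + dt*dE, I + dt*dI, h + dt*dh
    trace.append(E)

tail = np.array(trace[-20_000:])
print(f"mean E = {tail.mean():.3f} (set point {target}), "
      f"peak-to-peak = {tail.max() - tail.min():.3f}")
```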


Subject(s)
Biological Clocks , Brain Waves , Brain/physiopathology , Models, Neurological , Nerve Net/physiopathology , Nonlinear Dynamics , Brain/diagnostic imaging
11.
PLoS Comput Biol ; 14(3): e1006025, 2018 03.
Article in English | MEDLINE | ID: mdl-29529034

ABSTRACT

Cortical oscillations are thought to be involved in many cognitive functions and processes. Several mechanisms have been proposed to regulate oscillations. One prominent but understudied mechanism is gap junction coupling. Gap junctions between GABAergic interneurons are ubiquitous in cortex, and recent experiments indicate that their strength can be modified in an activity-dependent manner, similar to chemical synapses. We hypothesized that activity-dependent gap junction plasticity acts as a mechanism to regulate oscillations in the cortex. We developed a computational model of gap junction plasticity in a recurrent cortical network based on recent experimental findings. We showed that gap junction plasticity can serve as a homeostatic mechanism for oscillations by maintaining a tight balance between two network states: asynchronous irregular activity and synchronized oscillations. This homeostatic mechanism allows for robust communication between neuronal assemblies through two different mechanisms: transient oscillations and frequency modulation. This implies a direct functional role for gap junction plasticity in information transmission in cortex.
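
A toy version of such a rule is sketched below, following the experimental motif that bursts depress electrical coupling while isolated spikes potentiate it; the thresholds, step sizes, and spike trains are illustrative assumptions rather than the paper's model.

```python
import numpy as np

def update_gap(g, spike_times, burst_isi=0.006, dg=0.02):
    """Potentiate the gap-junction conductance g for isolated spikes and
    depress it for spikes inside bursts (inter-spike interval below
    burst_isi), clipping to [0, 1]."""
    isi = np.diff(spike_times)
    n_burst = int(np.sum(isi < burst_isi))
    n_single = len(spike_times) - n_burst
    return float(np.clip(g + dg * n_single - 2 * dg * n_burst, 0.0, 1.0))

rng = np.random.default_rng(2)
async_spikes = np.cumsum(rng.exponential(0.050, 20))  # sparse, irregular firing
burst_spikes = np.cumsum(rng.exponential(0.004, 20))  # one tight burst

g = 0.5
print("after an asynchronous epoch:", update_gap(g, async_spikes))
print("after a bursting epoch     :", update_gap(g, burst_spikes))
```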


Subject(s)
Computational Biology/methods , Cortical Synchronization/physiology , Electrical Synapses/physiology , Action Potentials/physiology , Animals , Computer Simulation , Gap Junctions/physiology , Humans , Interneurons/physiology , Nerve Net/physiology , Neuronal Plasticity/physiology , Neurons/physiology , Synapses , Synaptic Transmission/physiology
12.
Nat Commun ; 8(1): 2208, 2017 12 20.
Article in English | MEDLINE | ID: mdl-29263361

ABSTRACT

Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE-trained networks reproduce behaviors comparable in complexity to those of the circuits that inspired them and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike-timing statistics.
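
For orientation, the core of FORCE training is a recursive least-squares (RLS) update of a linear readout whose output is fed back into a chaotic reservoir. The sketch below is the classic rate-network version (the paper's contribution is extending this to spiking networks); the network size, time constants, and teacher signal are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau, g = 400, 1e-3, 0.01, 1.5
J = g * rng.normal(0, 1/np.sqrt(N), (N, N))  # strong random recurrence (chaotic)
w_fb = rng.uniform(-1, 1, N)                 # feedback weights for the readout
w = np.zeros(N)                              # readout, trained online by RLS
P = np.eye(N)                                # running inverse correlation matrix
x = rng.normal(0, 0.5, N)

t = np.arange(0, 5, dt)
teacher = np.sin(2 * np.pi * 2 * t)          # arbitrary target signal
errs = []
for i, f_t in enumerate(teacher):
    r = np.tanh(x)
    z = w @ r                                # network output
    x += dt / tau * (-x + J @ r + w_fb * z)  # reservoir driven by its own output
    if i % 2 == 0:                           # RLS step every other sample
        Pr = P @ r
        k = Pr / (1 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= (z - f_t) * k
    errs.append(abs(z - f_t))
print(f"mean |error|: first second {np.mean(errs[:1000]):.3f}, "
      f"last second {np.mean(errs[-1000:]):.3f}")
```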


Subject(s)
Nerve Net , Songbirds/physiology , Vocalization, Animal , Animals , Learning , Models, Neurological , Supervised Machine Learning
13.
Front Comput Neurosci ; 10: 15, 2016.
Article in English | MEDLINE | ID: mdl-26973503

ABSTRACT

A fundamental question in computational neuroscience is how to connect a network of spiking neurons so that it produces desired macroscopic or mean-field dynamics. One possible approach is the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders, which are obtained by solving an optimization problem that requires a large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants that converge to the desired function with mean-squared error decreasing like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. Through the NEF formula for weights, these decoders yield connection weights that force the spiking network to have arbitrary, prescribed mean-field dynamics, and the weights generated with scale-invariant decoders all lie asymptotically on low-dimensional hypersurfaces. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well-known dynamical systems, such as the neural integrator, the Van der Pol system, and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
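
For contrast with the analytic construction, the standard NEF decoder computation solves a regularized least-squares problem over heterogeneous tuning curves; this is the matrix solve the scale-invariant decoders avoid. A sketch with rectified-linear tuning curves (a simplifying assumption) shows the solve and the roughly 1/N fall-off in mean-squared error:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)                  # represented variable
f_target = x ** 2                            # function to decode

for N in (20, 80, 320):
    enc = rng.choice([-1.0, 1.0], N)         # 1-D encoders
    gain = rng.uniform(0.5, 2.0, N)          # heterogeneous gains and biases
    bias = rng.uniform(-1.0, 1.0, N)
    A = np.maximum(0.0, gain[:, None] * enc[:, None] * x[None, :] + bias[:, None])
    G = A @ A.T / len(x) + 1e-3 * np.eye(N)  # regularized Gram matrix
    d = np.linalg.solve(G, A @ f_target / len(x))   # least-squares decoders
    mse = np.mean((d @ A - f_target) ** 2)
    print(f"N = {N:4d}  MSE = {mse:.2e}")
```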

14.
J Comput Neurosci ; 35(1): 87-108, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23430291

ABSTRACT

Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.
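
The network-level transition being reduced can be seen directly in a small all-to-all network of adapting Izhikevich neurons, which fires tonically when weakly coupled and can switch to bursting when strongly coupled. The sketch below uses generic regular-spiking parameters and a variance-to-mean burstiness proxy, both assumptions for illustration rather than the paper's biologically derived values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 200, 0.5, 3000.0                  # neurons, time step (ms), duration

def population_counts(g_coupling):
    v = rng.uniform(-65.0, -55.0, N)
    u = 0.2 * v                              # adaptation variables
    s, counts = 0.0, []
    for _ in range(int(T / dt)):
        I = 6.0 + g_coupling * s             # drive plus recurrent excitation
        v = v + dt * (0.04*v**2 + 5*v + 140 - u + I)
        u = u + dt * (0.02 * (0.2*v - u))
        fired = v >= 30.0
        v[fired], u[fired] = -65.0, u[fired] + 8.0   # reset + adaptation jump
        s += dt * (-s / 5.0) + fired.sum() / N       # filtered population drive
        counts.append(int(fired.sum()))
    return np.array(counts)

for g in (0.0, 15.0):
    c = population_counts(g)
    fano = c.var() / max(c.mean(), 1e-9)     # high variance/mean suggests bursting
    print(f"g = {g:5.1f}  Fano factor of population count = {fano:.2f}")
```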


Subject(s)
Action Potentials/physiology , Computer Simulation , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Animals
15.
Front Comput Neurosci ; 7: 184, 2013.
Article in English | MEDLINE | ID: mdl-24416013

ABSTRACT

We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
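
Schematically, the simplest of these closures (a first-order closure) replaces population averages of functions with functions of the averages; heterogeneity in a parameter determines which such replacements remain accurate. The notation below is illustrative rather than the paper's exact system: angle brackets denote a population average, R the network firing rate, and w_jump the adaptation increment per spike.

```latex
% First-order moment closure for the mean adaptation variable of an
% adapting integrate-and-fire network (illustrative notation):
\frac{d\langle w\rangle}{dt}
  = \bigl\langle a\,(b\,v - w)\bigr\rangle + w_{\mathrm{jump}}\,R(t)
  \;\approx\; a\bigl(b\,\langle v\rangle - \langle w\rangle\bigr)
    + w_{\mathrm{jump}}\,R\bigl(\langle v\rangle,\langle w\rangle\bigr),
\qquad
\bigl\langle f(v,w)\bigr\rangle \approx f\bigl(\langle v\rangle,\langle w\rangle\bigr).
```

Higher-order closures instead retain second moments across the heterogeneous parameters, yielding the additional mean-field systems referred to above.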

16.
J Comput Neurosci ; 33(1): 21-40, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22131133

ABSTRACT

The hippocampus is a brain structure critical for memory function. Its network dynamics include several patterns, such as sharp waves, that are generated in the CA3 region. To understand how population outputs are generated, models need to consider network size, cellular and synaptic characteristics, and context, which must be 'balanced' in appropriate ways to produce particular outputs. Thick-slice hippocampal preparations spontaneously produce sharp waves that are initiated in the CA3 region and depend on the right balance of glutamatergic activities. As a step toward developing network models that can explain the balances important in generating hippocampal output, we develop models of CA3 pyramidal cells. Our models are single-compartment in nature, use an Izhikevich-type structure, and involve parameter values specifically designed to encompass CA3 intrinsic properties. Importantly, they incorporate spike-frequency adaptation characteristics that are directly comparable to those measured experimentally. Excitatory networks built from these model cells are able to produce bursting, suggesting that the amount of spike-frequency adaptation expressed in the biological cells is an essential contributor to network bursting and, as such, may be important for sharp wave generation. The network bursting mechanism is numerically dissected, showing the critical balance between adaptation and excitatory drive. The compact nature of our models allows large network simulations to be computed efficiently. This, together with the linkage of our models to cellular characteristics, will allow us to develop an understanding of the population output of CA3 hippocampus with direct biological comparisons.
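
Spike-frequency adaptation of the sort being matched to experiment is straightforward to quantify in an Izhikevich-type cell by comparing early and late inter-spike intervals (ISIs) during a current step. The sketch below uses generic parameter values rather than the CA3-fitted ones, with the last-to-first ISI ratio as the adaptation measure:

```python
import numpy as np

def isi_adaptation(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=1000.0, dt=0.1):
    """Last-to-first inter-spike-interval ratio of an Izhikevich cell
    under a current step; values above 1 indicate adaptation."""
    v, u, spike_times = -65.0, -13.0, []
    for i in range(int(T / dt)):
        v = v + dt * (0.04*v**2 + 5*v + 140 - u + I)
        u = u + dt * (a * (b*v - u))
        if v >= 30.0:
            v, u = c, u + d                  # reset and adaptation increment
            spike_times.append(i * dt)
    isi = np.diff(spike_times)
    return isi[-1] / isi[0] if len(isi) > 1 else float("nan")

print("adaptation ratio (last ISI / first ISI):")
print("  strongly adapting (d = 8):", round(isi_adaptation(d=8.0), 2))
print("  weakly adapting   (d = 2):", round(isi_adaptation(d=2.0), 2))
```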


Subject(s)
Action Potentials/physiology , CA3 Region, Hippocampal/cytology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Animals , Computer Simulation , Humans