1.
Nat Commun ; 11(1): 5867, 2020 Nov 17.
Article in English | MEDLINE | ID: mdl-33203831

ABSTRACT

New neurons are continuously generated in the adult brain through a process called adult neurogenesis. This form of plasticity has been correlated with numerous behavioral and cognitive phenomena, but it remains unclear if and how adult-born neurons (abNs) contribute to mature neural circuits. We established a highly specific and efficient experimental system to target abNs for causal manipulations. Using this system with chemogenetics and imaging, we found that abNs effectively sharpen the tuning of mitral cells (MCs) and improve their power to discriminate among odors. The effects on MC responses peaked when abNs were young and decreased as they matured. To explain the mechanism underlying these observations, we simulated the olfactory bulb circuit, modelling the incorporation of abNs into it. We show that higher excitability and broad input connectivity, two well-characterized features of young neurons, underlie their unique ability to boost circuit computation.
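
To make the modelling idea concrete, here is a toy rate-model sketch in which broadly connected, highly excitable inhibitory units (standing in for young abNs) are added to a set of tuned principal cells. The circuit, nonlinearity, and all parameters are illustrative assumptions and not the olfactory bulb model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mc, n_abn = 50, 200            # mitral cells and granule-like abNs (toy sizes)
odors = np.linspace(0.0, 1.0, 9) # abstract "odor" axis

def mc_tuning(add_abns, gain=2.0):
    """Steady-state MC responses with optional broad inhibitory feedback from abNs."""
    pref = np.linspace(0.0, 1.0, n_mc)                 # MC preferred odors
    w_in = rng.random((n_abn, n_mc)) / n_mc            # broad, random MC -> abN connectivity
    w_out = rng.random((n_mc, n_abn)) / n_abn          # abN -> MC inhibitory weights
    responses = []
    for o in odors:
        drive = np.exp(-(pref - o) ** 2 / (2 * 0.15 ** 2))   # feedforward odor drive
        r = drive.copy()
        if add_abns:
            for _ in range(100):                             # damped relaxation to a fixed point
                abn = np.maximum(gain * w_in @ r, 0.0)       # higher excitability = larger gain
                r = 0.5 * r + 0.5 * np.maximum(drive - w_out @ abn, 0.0)
        responses.append(r)
    return np.array(responses)

for flag in (False, True):
    r = mc_tuning(add_abns=flag)
    frac = (r > 0.5 * r.max(axis=0)).mean()                  # crude tuning-width proxy
    print(f"abNs added: {flag} -> fraction of (odor, MC) pairs above half-max = {frac:.3f}")
```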


Subject(s)
Neurons/physiology , Odorants , Olfactory Bulb/cytology , Age Factors , Animals , Calcium/metabolism , Evoked Potentials/physiology , Female , Mice, Inbred C57BL , Mice, Transgenic , Models, Biological , Neurogenesis/physiology , Olfactory Bulb/drug effects , Olfactory Bulb/physiology , Tamoxifen/pharmacology
2.
Article in English | MEDLINE | ID: mdl-25615132

ABSTRACT

We construct and analyze a rate-based neural network model in which self-interacting units represent clusters of neurons with strong local connectivity and random interunit connections reflect long-range interactions. When sufficiently strong, the self-interactions make the individual units bistable. Simulation results, mean-field calculations, and stability analysis reveal the different dynamic regimes of this network and identify the locations in parameter space of its phase transitions. We identify an interesting dynamical regime exhibiting transient but long-lived chaotic activity that combines features of chaotic and multiple fixed-point attractors.
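
A minimal numerical sketch of this class of model is given below, assuming a tanh rate nonlinearity, a self-coupling strength s that makes isolated units bistable for s > 1, and Gaussian random couplings with variance 1/N; the paper's actual model and parameter ranges may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
N, s, g = 200, 1.5, 0.5          # units, self-coupling (bistable for s > 1), random-coupling gain
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)         # self-interaction enters only through the explicit s*phi(x_i) term

x = rng.normal(0.0, 0.5, size=N)
dt, steps = 0.1, 2000
for _ in range(steps):
    phi = np.tanh(x)
    x += dt * (-x + s * phi + g * J @ phi)   # rate dynamics of self-interacting units

up, down = int((x > 0).sum()), int((x < 0).sum())
print(f"after relaxation: {up} units in the 'up' state, {down} in the 'down' state")
```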


Subject(s)
Models, Neurological , Nerve Net , Nerve Net/cytology , Neurons/cytology , Stochastic Processes , Time Factors
3.
Phys Rev E Stat Nonlin Soft Matter Phys ; 64(5 Pt 1): 051904, 2001 Nov.
Article in English | MEDLINE | ID: mdl-11735965

ABSTRACT

Neuronal representations of external events are often distributed across large populations of cells. We study the effect of correlated noise on the accuracy of these neuronal population codes. Our main question is whether the inherent error in the population code can be suppressed by increasing the population size N in the presence of correlated noise. We address this issue using a model of a population of neurons that are broadly tuned to an angular variable in two dimensions. The fluctuations in the neuronal activities are modeled as Gaussian noise with pairwise correlations that decay exponentially with the difference between the preferred angles of the correlated cells. We assume that the system is broadly tuned, meaning that both the correlation length and the width of the tuning curves of the mean responses span a substantial fraction of the entire system length. The performance of the system is measured by the Fisher information (FI), which bounds its estimation error. By calculating the FI in the limit of large N, we show that positive correlations decrease the estimation capability of the network relative to the uncorrelated population. The information capacity saturates to a finite value as the number of cells in the population grows. In contrast, negative correlations substantially increase the information capacity of the neuronal population. We supplement these results with an analysis of the effect of correlations on the mutual information of the system. Our analysis provides an estimate of the effective number of statistically independent degrees of freedom, denoted N(eff), that a large correlated system can have. According to our theory, N(eff) remains finite in the limit of large N. Estimating the parameters of the correlations and tuning curves from experimental data in some cortical areas that code for angles, we predict that the number of effective degrees of freedom embedded in localized populations in these areas is less than or of the order of 10^2.
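
The central quantity can be illustrated with a short sketch that evaluates the Fisher information FI = f'(theta)^T C^{-1} f'(theta) for Gaussian noise with a stimulus-independent covariance C, comparing independent noise with exponentially decaying positive correlations. The Gaussian tuning shape and all parameter values are assumptions of the sketch, chosen only to show the saturation of the FI with N under positive correlations.

```python
import numpy as np

def fisher_info(N, corr_len, sigma=0.3, width=1.0):
    """FI = f'(theta)^T C^{-1} f'(theta), Gaussian noise, stimulus-independent covariance C."""
    prefs = np.linspace(-np.pi, np.pi, N, endpoint=False)          # preferred angles
    d = np.angle(np.exp(1j * prefs))                               # wrapped distance to a stimulus at 0
    fprime = (d / width**2) * np.exp(-d**2 / (2 * width**2))       # derivative of Gaussian tuning curve
    dij = np.abs(np.angle(np.exp(1j * (prefs[:, None] - prefs))))  # pairwise angular distances
    C = sigma**2 * (np.exp(-dij / corr_len) if corr_len > 0 else np.eye(N))
    return float(fprime @ np.linalg.solve(C, fprime))

for N in (50, 200, 800):
    print(f"N={N:4d}  independent noise: {fisher_info(N, 0.0):9.1f}   "
          f"positively correlated (len = 1 rad): {fisher_info(N, 1.0):7.1f}")
```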


Subject(s)
Models, Neurological , Neurons/physiology , Animals , Biophysical Phenomena , Biophysics , Haplorhini , Visual Cortex/cytology , Visual Cortex/physiology
4.
Proc Natl Acad Sci U S A ; 98(14): 8095-100, 2001 Jul 03.
Article in English | MEDLINE | ID: mdl-11427705

ABSTRACT

In several biological systems, the electrical coupling of nonoscillating cells generates synchronized membrane potential oscillations. Because the isolated cell is nonoscillating and electrical coupling tends to equalize the membrane potentials of the coupled cells, the mechanism underlying these oscillations is unclear. Here we present a dynamic mechanism by which the electrical coupling of identical nonoscillating cells can generate synchronous membrane potential oscillations. We demonstrate this mechanism by constructing a biologically feasible model of electrically coupled cells, characterized by an excitable membrane and calcium dynamics. We show that strong electrical coupling in this network generates multiple oscillatory states with different spatio-temporal patterns and discuss their possible role in the cooperative computations performed by the system.
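
The sketch below shows only how electrical (gap-junction) coupling enters such a model, using a FitzHugh-Nagumo cell as a stand-in excitable membrane; the paper's model additionally includes calcium dynamics and a specific parameter regime, which this toy does not attempt to reproduce.

```python
import numpy as np

# Two identical excitable (non-oscillating) cells coupled by a gap junction.
# FitzHugh-Nagumo is only a stand-in excitable membrane here; the paper's model
# includes calcium dynamics, which this toy omits.
a, b, eps, I_ext = 0.7, 0.8, 0.08, 0.0   # classic excitable, non-oscillatory FHN regime
g_gap = 0.5                              # electrical (gap-junction) coupling strength

v = np.array([-1.2, 1.0])                # start the two cells in different states
w = np.array([-0.6, -0.6])
dt = 0.05
for _ in range(4000):
    i_gap = g_gap * (v[::-1] - v)        # I_gap,i = g * (V_j - V_i): pulls the potentials together
    dv = v - v**3 / 3.0 - w + I_ext + i_gap
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw

print("membrane variables after relaxation:", np.round(v, 3))
```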


Subject(s)
Membrane Potentials/physiology , Models, Biological , Nerve Net/physiology , Animals , Calcium/physiology , Electrophysiology , Humans
5.
Phys Rev Lett ; 86(21): 4958-61, 2001 May 21.
Article in English | MEDLINE | ID: mdl-11384391

ABSTRACT

We study the mutual information between a stimulus and a system consisting of stochastic, statistically independent elements that respond to the stimulus. Using statistical mechanical methods, we calculate the properties of the mutual information (MI) in the limit of a large system size N. For continuous-valued stimuli, the MI increases logarithmically with N and is related to the log of the Fisher information of the system. For discrete stimuli, the MI saturates exponentially with N. We find that the exponent of saturation of the MI is the Chernoff distance between the response probabilities induced by different stimuli.
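
Schematically, and in commonly used notation that is assumed here rather than quoted from the paper, the two regimes can be written as:

```latex
% Continuous stimulus: the MI grows as (1/2) log N and is tied to the Fisher
% information J_N(\theta) of the N-element system (which is extensive, J_N \propto N):
I(\theta;\mathbf{r}) \;\simeq\; H[\theta] \;-\; \tfrac{1}{2}
\Big\langle \log \frac{2\pi e}{J_N(\theta)} \Big\rangle_{\theta}
\;\sim\; \tfrac{1}{2}\log N + \text{const.}

% Discrete stimuli: the MI approaches the stimulus entropy exponentially in N,
% with a rate set by the Chernoff distance C between the response distributions
% p_1, p_2 induced by different stimuli:
H[s] - I(s;\mathbf{r}) \;\sim\; e^{-N C},
\qquad
C \;=\; -\min_{0 \le \lambda \le 1} \log \sum_{r} p_1(r)^{\lambda}\, p_2(r)^{1-\lambda}.
```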


Subject(s)
Models, Neurological , Probability , Brain/physiology , Data Interpretation, Statistical , Electronic Data Processing , Mental Processes/physiology , Neurons/physiology , Stochastic Processes
6.
Phys Rev Lett ; 86(2): 364-7, 2001 Jan 08.
Article in English | MEDLINE | ID: mdl-11177832

ABSTRACT

A theory of temporally asymmetric Hebb rules, which depress or potentiate synapses depending upon whether the postsynaptic cell fires before or after the presynaptic one, is presented. Using the Fokker-Planck formalism, we show that the equilibrium synaptic distribution induced by such rules is highly sensitive to the manner in which bounds on the allowed range of synaptic values are imposed. In a biologically plausible multiplicative model, the synapses in asynchronous networks reach a distribution that is invariant to the firing rates of either the presynaptic or postsynaptic cells. When these cells are temporally correlated, the synaptic strength varies smoothly with the degree and phase of their synchrony.
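
A pair-based simulation sketch of such a multiplicative rule is shown below, with potentiation scaled by (w_max - w) and depression scaled by w. The spike-pairing scheme, rates, and amplitudes are illustrative assumptions rather than the paper's Fokker-Planck formulation; the point is only that the weights settle into a smooth equilibrium distribution away from the bounds.

```python
import numpy as np

rng = np.random.default_rng(2)
# Temporally asymmetric rule with multiplicative (soft) bounds:
# potentiation scales with (w_max - w), depression scales with w.
# Rates, time constants, and amplitudes below are illustrative assumptions.
w_max, A_plus, A_minus, tau = 1.0, 0.005, 0.00525, 20.0   # tau in ms
rate = 0.02                         # 20 Hz pre- and postsynaptic Poisson firing, uncorrelated
n_syn, T, dt = 1000, 100_000, 1.0   # synapses, total time (ms), step (ms)

w = np.full(n_syn, 0.5 * w_max)
x_pre = np.zeros(n_syn)             # exponentially decaying presynaptic trace per synapse
x_post = 0.0                        # postsynaptic trace (one postsynaptic cell)
decay = np.exp(-dt / tau)
for _ in range(int(T / dt)):
    pre = rng.random(n_syn) < rate * dt
    post = rng.random() < rate * dt
    x_pre = x_pre * decay + pre
    x_post = x_post * decay + post
    if post:
        w += A_plus * (w_max - w) * x_pre      # pre-before-post pairings: potentiation
    w -= A_minus * w * x_post * pre            # post-before-pre pairings: depression
    np.clip(w, 0.0, w_max, out=w)

print(f"equilibrium weights: mean = {w.mean():.3f}, std = {w.std():.3f}")
```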


Subject(s)
Models, Neurological , Neuronal Plasticity/physiology , Neurons/physiology , Synapses/physiology , Animals , Cerebral Cortex/physiology , Mathematics
7.
Article in English | MEDLINE | ID: mdl-11046469

ABSTRACT

A previous derivation of the Thouless-Anderson-Palmer (TAP) equations for the Hopfield model by the cavity method yielded results that were inconsistent with those of perturbation theory, as well as with the results derived by the replica theory of the model. Here we present a derivation of the TAP equations for the Hopfield model by the cavity method and show that it agrees with the form derived by perturbation theory. We also use the cavity method to derive TAP equations for the pseudoinverse neural network model. These equations are consistent with the results of the replica theory of these models.

8.
Neural Comput ; 10(6): 1321-71, 1998 Aug 15.
Article in English | MEDLINE | ID: mdl-9698348

ABSTRACT

The nature and origin of the temporal irregularity in the electrical activity of cortical neurons in vivo are not well understood. We consider the hypothesis that this irregularity is due to a balance of excitatory and inhibitory currents into the cortical cells. We study a network model with excitatory and inhibitory populations of simple binary units. The internal feedback is mediated by relatively large synaptic strengths, so that the magnitude of the total excitatory and inhibitory feedback is much larger than the neuronal threshold. The connectivity is random and sparse. The mean number of connections per unit is large, though small compared to the total number of cells in the network. The network also receives a large, temporally regular input from external sources. We present an analytical solution of the mean-field theory of this model, which is exact in the limit of large network size. This theory reveals a new cooperative stationary state of large networks, which we term a balanced state. In this state, a balance between the excitatory and inhibitory inputs emerges dynamically for a wide range of parameters, resulting in a net input whose temporal fluctuations are of the same order as its mean. The internal synaptic inputs act as a strong negative feedback, which linearizes the population responses to the external drive despite the strong nonlinearity of the individual cells. This feedback also greatly stabilizes the system's state and enables it to track a time-dependent input on time scales much shorter than the time constant of a single cell. The spatiotemporal statistics of the balanced state are calculated. It is shown that the autocorrelations decay on a short time scale, yielding approximately Poissonian temporal statistics. The activity levels of single cells are broadly distributed, and their distribution exhibits a skewed shape with a long power-law tail. The chaotic nature of the balanced state is revealed by showing that the evolution of the microscopic state of the network is extremely sensitive to small deviations in its initial conditions. The balanced state generated by the sparse, strong connections is an asynchronous chaotic state. It is accompanied by weak spatial cross-correlations, the strength of which vanishes in the limit of large network size. This is in contrast to the synchronized chaotic states exhibited by more conventional network models with high connectivity of weak synapses.
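
A minimal simulation sketch of this architecture is given below: sparse random connectivity with mean in-degree K, synapses of order 1/sqrt(K), and a strong external drive of order sqrt(K). The couplings are chosen to satisfy the balance inequalities, but the sizes, parameter values, and crude update scheme are assumptions of the sketch rather than the paper's mean-field construction.

```python
import numpy as np

rng = np.random.default_rng(3)
# Sparse binary E/I network with strong, O(1/sqrt(K)) synapses and an O(sqrt(K)) external drive.
# Sizes, couplings, and the crude update scheme are assumptions of this sketch, not the paper's.
N_E, N_I, K, theta = 1600, 400, 100, 1.0
J = {"EE": 1.0, "EI": -2.0, "IE": 1.0, "II": -1.8}      # chosen to satisfy the balance inequalities
E_ext = np.concatenate([np.full(N_E, 0.30), np.full(N_I, 0.24)])

def block(n_post, n_pre, j):
    """Random connectivity with mean in-degree K and synaptic strength j / sqrt(K)."""
    return (rng.random((n_post, n_pre)) < K / n_pre) * (j / np.sqrt(K))

W = np.block([[block(N_E, N_E, J["EE"]), block(N_E, N_I, J["EI"])],
              [block(N_I, N_E, J["IE"]), block(N_I, N_I, J["II"])]])

sigma = (rng.random(N_E + N_I) < 0.3).astype(float)
for _ in range(200):                                    # crude stand-in for asynchronous updating
    idx = rng.choice(N_E + N_I, size=(N_E + N_I) // 10, replace=False)
    sigma[idx] = (W[idx] @ sigma + np.sqrt(K) * E_ext[idx] > theta).astype(float)

# In the balanced regime the mean activities settle near the values fixed by the
# leading-order balance equations (about 0.3 for these couplings), up to finite-K corrections.
print(f"mean activity  E: {sigma[:N_E].mean():.3f}   I: {sigma[N_E:].mean():.3f}")
```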


Subject(s)
Cerebral Cortex/physiology , Models, Neurological , Neurons/physiology , Nonlinear Dynamics , Animals , Cerebral Cortex/cytology , Sensory Thresholds/physiology , Synapses/physiology , Time Factors
9.
Curr Opin Neurobiol ; 7(4): 514-22, 1997 Aug.
Article in English | MEDLINE | ID: mdl-9287203

ABSTRACT

Since the discovery of orientation selectivity by Hubel and Wiesel, the mechanisms responsible for this remarkable operation in the visual cortex have been controversial. Experimental studies over the past year have highlighted the contribution of feedforward thalamo-cortical afferents, as proposed originally by Hubel and Wiesel, but they have also indicated that this contribution alone is insufficient to account for the sharp orientation tuning observed in the visual cortex. Recent advances in understanding the functional architecture of local cortical circuitry have led to new proposals for the involvement of intracortical recurrent excitation and inhibition in orientation selectivity. Establishing how these two mechanisms work together remains an important experimental and theoretical challenge.


Subject(s)
Orientation/physiology , Space Perception/physiology , Animals , Geniculate Bodies/physiology , Humans , Models, Neurological , Visual Cortex/physiology
10.
J Comput Neurosci ; 4(1): 57-77, 1997 Jan.
Article in English | MEDLINE | ID: mdl-9046452

ABSTRACT

Recent studies have shown that local cortical feedback can have an important effect on the response of neurons in primary visual cortex to the orientation of visual stimuli. In this work, we study the role of cortical feedback in shaping the spatiotemporal patterns of activity in cortex. Two questions are addressed: first, what are the limitations on the ability of cortical neurons to lock their activity to rotating oriented stimuli within a single receptive field? Second, can the local architecture of visual cortex lead to the generation of spontaneous traveling pulses of activity? We study these issues analytically with a population-dynamic model of a hypercolumn in visual cortex. The order parameter that describes the macroscopic behavior of the network is the time-dependent population vector of the network. We first study the network dynamics under the influence of a weakly tuned input that slowly rotates within the receptive field. We show that if the cortical interactions have strong spatial modulation, the network generates a sharply tuned activity profile that propagates across the hypercolumn in a path that is completely locked to the stimulus rotation. The resultant rotating population vector maintains a constant angular lag relative to the stimulus, the magnitude of which grows with the stimulus rotation frequency. Beyond a critical frequency the population vector does not lock to the stimulus but executes a quasi-periodic motion with an average frequency that is smaller than that of the stimulus. In the second part we consider the stable intrinsic state of the cortex under the influence of isotropic stimulation. We show that if the local inhibitory feedback is sufficiently strong, the network does not settle into a stationary state but develops spontaneous traveling pulses of activity. Unlike recent models of wave propagation in cortical networks, the connectivity pattern in our model is spatially symmetric; hence the direction of propagation of these waves is arbitrary. The interaction of these waves with an external oriented stimulus is studied. It is shown that the system can lock to a weakly tuned rotating stimulus if the stimulus frequency is close to the frequency of the intrinsic wave.
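
The locking behavior can be illustrated with a short ring-model sketch in which a weakly tuned input rotates slowly and the population-vector angle is read out; the threshold-linear dynamics, couplings, and rotation rate are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Ring-model hypercolumn driven by a weakly tuned, slowly rotating stimulus.
# Couplings, contrast, and rotation rate are illustrative, not the paper's values.
N, tau, dt = 180, 10.0, 0.5                      # units, time constant (ms), Euler step (ms)
theta = np.linspace(0.0, np.pi, N, endpoint=False)
J0, J2 = -0.5, 1.8                               # uniform inhibition + orientation-modulated excitation
c, eps = 1.0, 0.2                                # stimulus contrast and weak anisotropy
J = (J0 + J2 * np.cos(2.0 * (theta[:, None] - theta))) / N

def population_angle(r):
    """Orientation of the population vector (the network's order parameter)."""
    return 0.5 * np.angle(np.sum(r * np.exp(2j * theta))) % np.pi

r = np.zeros(N)
omega = np.pi / 2000.0                           # rotation speed (rad of orientation per ms)
for t in range(1, 20001):
    theta0 = (omega * t * dt) % np.pi
    h = c * (1.0 - eps + eps * np.cos(2.0 * (theta - theta0)))
    r += dt / tau * (-r + np.maximum(h + J @ r, 0.0))   # threshold-linear rate dynamics
    if t % 4000 == 0:
        lag = (theta0 - population_angle(r)) % np.pi
        print(f"stimulus at {np.degrees(theta0):6.1f} deg, population vector lag = {np.degrees(lag):5.2f} deg")
```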


Subject(s)
Models, Neurological , Nerve Net/physiology , Neurons/physiology , Visual Cortex/physiology , Animals , Computer Simulation , Feedback , Models, Theoretical , Orientation
11.
Science ; 274(5293): 1724-6, 1996 Dec 06.
Article in English | MEDLINE | ID: mdl-8939866

ABSTRACT

Neurons in the cortex of behaving animals show temporally irregular spiking patterns. The origin of this irregularity and its implications for neural processing are unknown. The hypothesis that the temporal variability in the firing of a neuron results from an approximate balance between its excitatory and inhibitory inputs was investigated theoretically. Such a balance emerges naturally in large networks of excitatory and inhibitory neuronal populations that are sparsely connected by relatively strong synapses. The resulting state is characterized by strongly chaotic dynamics, even when the external inputs to the network are constant in time. Such a network exhibits a linear response, despite the highly nonlinear dynamics of single neurons, and reacts to changing external stimuli on time scales much smaller than the integration time constant of a single neuron.


Subject(s)
Cerebral Cortex/physiology , Nerve Net/physiology , Neurons/physiology , Nonlinear Dynamics , Synapses/physiology , Animals , Cerebral Cortex/cytology , Haplorhini , Models, Neurological , Prefrontal Cortex/physiology
12.
Phys Rev Lett ; 76(16): 3021-3024, 1996 Apr 15.
Article in English | MEDLINE | ID: mdl-10060850
13.
J Comput Neurosci ; 3(1): 7-34, 1996 Mar.
Article in English | MEDLINE | ID: mdl-8717487

ABSTRACT

Neurons in cortical slices emit spikes or bursts of spikes regularly in response to a suprathreshold current injection. This behavior is in marked contrast to the behavior of cortical neurons in vivo, whose response to electrical or sensory input displays a strong degree of irregularity. Correlation measurements show a significant degree of synchrony in the temporal fluctuations of neuronal activities in cortex. We explore the hypothesis that these phenomena are the result of the synchronized chaos generated by the deterministic dynamics of local cortical networks. A model of a "hypercolumn" in the visual cortex is studied. It consists of two populations of neurons, one inhibitory and one excitatory. The dynamics of the neurons is based on a Hodgkin-Huxley type model of excitable voltage-clamped cells with several cellular and synaptic conductances. A slow potassium current is included in the dynamics of the excitatory population to reproduce the observed adaptation of the spike trains emitted by these neurons. The pattern of connectivity has a spatial structure which is correlated with the internal organization of hypercolumns into orientation columns. Numerical simulations of the model show that in an appropriate parameter range, the network settles into a synchronous chaotic state, characterized by a strong temporal variability of the neural activity which is correlated across the hypercolumn. Strong inhibitory feedback is essential for the stabilization of this state. These results show that the cooperative dynamics of large neuronal networks are capable of generating variability and synchrony similar to those observed in cortex. Auto-correlation and cross-correlation functions of neuronal spike trains are computed, and their temporal and spatial features are analyzed. In other parameter regimes, the network exhibits two additional states: synchronized oscillations and an asynchronous state. We use our model to study cortical mechanisms for orientation selectivity. It is shown that in a suitable parameter regime, when the input is not oriented, the network has a continuum of states, each representing an inhomogeneous population activity which is peaked at one of the orientation columns. As a result, when a weakly oriented input stimulates the network, it yields a sharp orientation tuning. The properties of the network in this regime, including the appearance of virtual rotations and broad stimulus-dependent cross-correlations, are investigated. The results agree with the predictions of the mean-field theory which was previously derived for a simplified model of stochastic, two-state neurons. The relation between the results of the model and experiments in visual cortex is discussed.


Subject(s)
Models, Neurological , Visual Cortex/physiology , Neurons/physiology
14.
Neural Comput ; 8(2): 270-99, 1996 Feb 15.
Article in English | MEDLINE | ID: mdl-8581884

ABSTRACT

We study neural network models of discriminating between stimuli with two similar angles, using the two-alternative forced choice (2AFC) paradigm. Two network architectures are investigated: a two-layer perceptron network and a gating network. In the two-layer network all hidden units contribute to the decision at all angles, while in the other architecture the gating units select, for each stimulus, the appropriate hidden units that will dominate the decision. We find that both architectures can perform the task reasonably well for all angles. Perceptual learning has been modeled by training the networks to perform the task, using unsupervised Hebb learning algorithms with pairs of stimuli at fixed angles θ and θ + δθ. Perceptual transfer is studied by measuring the performance of the network on stimuli with θ′ ≠ θ. The two-layer perceptron shows a partial transfer for angles that are within a distance a from θ, where a is the angular width of the input tuning curves. The change in performance due to learning is positive for angles close to θ, but for |θ − θ′| ≈ a it is negative, i.e., its performance after training is worse than before. In contrast, negative transfer can be avoided in the gating network by limiting the effects of learning to hidden units that are optimized for angles close to the trained angle.
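
The train-and-probe protocol can be sketched as follows. Note that, for brevity, this toy uses a single supervised perceptron readout rather than the paper's unsupervised Hebbian two-layer and gating architectures, so it only illustrates how training at θ and probing transfer at θ′ is set up on tuned population inputs.

```python
import numpy as np

rng = np.random.default_rng(4)
# 2AFC discrimination/transfer protocol on tuned population inputs.
# NOTE: for brevity this uses a single supervised perceptron readout, not the paper's
# unsupervised Hebbian two-layer or gating architectures.
N, width, noise, dtheta = 60, 0.4, 0.3, 0.1
prefs = np.linspace(-np.pi, np.pi, N, endpoint=False)

def response(angle):
    clean = np.exp(-0.5 * (np.angle(np.exp(1j * (prefs - angle))) / width) ** 2)
    return clean + noise * rng.normal(size=N)

def trial(angle):
    """One 2AFC trial: report whether the second presentation was at the larger angle."""
    first, second = (angle, angle + dtheta) if rng.random() < 0.5 else (angle + dtheta, angle)
    x = response(second) - response(first)
    return x, (1.0 if second > first else -1.0)

def accuracy(w, angle, trials=2000):
    return np.mean([np.sign(w @ x) == y for x, y in (trial(angle) for _ in range(trials))])

w = np.zeros(N)
for _ in range(5000):                      # train at theta = 0
    x, y = trial(0.0)
    if np.sign(w @ x) != y:
        w += y * x                         # perceptron update

for probe in (0.0, 0.3, 0.8, 1.5):         # probe transfer to novel angles theta'
    print(f"theta' = {probe:3.1f} rad : 2AFC accuracy = {accuracy(w, probe):.3f}")
```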


Subject(s)
Discrimination, Psychological/physiology , Learning/physiology , Neural Networks, Computer , Perception/physiology , Likelihood Functions
15.
Phys Rev Lett ; 75(7): 1415-1418, 1995 Aug 14.
Article in English | MEDLINE | ID: mdl-10060287
16.
Proc Natl Acad Sci U S A ; 92(9): 3844-8, 1995 Apr 25.
Article in English | MEDLINE | ID: mdl-7731993

ABSTRACT

The role of intrinsic cortical connections in processing sensory input and in generating behavioral output is poorly understood. We have examined this issue in the context of the tuning of neuronal responses in cortex to the orientation of a visual stimulus. We analytically study a simple network model that incorporates both orientation-selective input from the lateral geniculate nucleus and orientation-specific cortical interactions. Depending on the model parameters, the network exhibits orientation selectivity that originates from within the cortex, by a symmetry-breaking mechanism. In this case, the width of the orientation tuning can be sharp even if the lateral geniculate nucleus inputs are only weakly anisotropic. Using our model, we derive several experimental consequences of this cortical mechanism of orientation tuning. The tuning width is relatively independent of the contrast and angular anisotropy of the visual stimulus. The transient population response to a change in the stimulus orientation exhibits a slow "virtual rotation." Neuronal cross-correlations exhibit long time tails, the sign of which depends on the preferred orientations of the cells and the stimulus orientation.
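
In commonly used ring-model notation, assumed here rather than taken from the paper, the steady state of such a network can be written as follows, with the symmetry-breaking regime arising when the modulated cortical coupling J_2 exceeds a critical strength:

```latex
% Schematic steady state of a ring model of this type (conventions assumed here,
% not quoted from the paper): weakly anisotropic afferent input plus modulated
% cortical coupling, with [.]_+ denoting rectification.
m(\theta) \;=\; \Big[\, h(\theta) \;+\; \int_{-\pi/2}^{\pi/2} \frac{d\theta'}{\pi}\,
              J(\theta-\theta')\, m(\theta') \,\Big]_{+},
\qquad
h(\theta) \;=\; c\,\big[\,1-\epsilon+\epsilon\cos 2(\theta-\theta_{0})\,\big],
\qquad
J(\theta-\theta') \;=\; J_{0} + J_{2}\cos 2(\theta-\theta') .

% When the modulated coupling J_2 exceeds a critical strength, the homogeneous state
% is unstable: the network breaks symmetry and produces a sharply tuned activity
% profile even for weak input anisotropy (small \epsilon), with a width set by the
% cortical interactions rather than by the stimulus.
```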


Subject(s)
Models, Neurological , Models, Theoretical , Neurons/physiology , Orientation , Visual Cortex/physiology , Animals , Geniculate Bodies/physiology , Nerve Net/physiology , Vision, Ocular , Visual Perception
19.
Proc Natl Acad Sci U S A ; 90(22): 10749-53, 1993 Nov 15.
Article in English | MEDLINE | ID: mdl-8248166

ABSTRACT

In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal at a particular width, which increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad. For direction discrimination, we consider two models: a perceptron with fully adaptive weights and a network made by adding an adaptive second layer to the population vector network. We calculate the error rates of these networks after exhaustive training to a particular direction. By testing on the full range of possible directions, the extent of transfer of training to novel stimuli can be calculated. It is found that for threshold-linear networks the transfer of perceptual learning is nonmonotonic. Although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle. This nonmonotonicity provides an important psychophysical test of these models.
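
A short sketch comparing the two decoders is given below, assuming von Mises-like tuning with a background rate and independent Poisson noise; the tuning shape and parameters are illustrative, and the maximum-likelihood estimate is obtained by a simple grid search.

```python
import numpy as np

rng = np.random.default_rng(5)
# Population-vector vs. maximum-likelihood decoding of direction from noisy tuned responses.
# Tuning shape, background rate, and Poisson noise are illustrative assumptions.
N = 64
prefs = np.linspace(-np.pi, np.pi, N, endpoint=False)
grid = np.linspace(-np.pi, np.pi, 720, endpoint=False)

def rates(angle, width, peak=20.0, base=2.0):
    return base + peak * np.exp((np.cos(angle - prefs) - 1.0) / width**2)  # von Mises-like tuning

def rms_errors(width, trials=400, true_angle=0.3):
    tuning_on_grid = np.array([rates(g, width) for g in grid])             # for the ML grid search
    pv_err, ml_err = [], []
    for _ in range(trials):
        r = rng.poisson(rates(true_angle, width))
        pv = np.angle(np.sum(r * np.exp(1j * prefs)))                      # population vector estimate
        loglik = (np.log(tuning_on_grid) * r).sum(axis=1) - tuning_on_grid.sum(axis=1)
        ml = grid[np.argmax(loglik)]                                       # Poisson ML estimate
        pv_err.append(np.angle(np.exp(1j * (pv - true_angle))) ** 2)
        ml_err.append(np.angle(np.exp(1j * (ml - true_angle))) ** 2)
    return np.sqrt(np.mean(pv_err)), np.sqrt(np.mean(ml_err))

for width in (0.3, 0.8, 1.5):                                              # narrow -> broad tuning
    pv, ml = rms_errors(width)
    print(f"tuning width {width:3.1f}: RMS error  PV = {np.degrees(pv):5.2f} deg   ML = {np.degrees(ml):5.2f} deg")
```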


Subject(s)
Neurons, Afferent/physiology , Perception/physiology , Animals , Humans , Likelihood Functions , Models, Theoretical , Nerve Net , Orientation/physiology , Stochastic Processes
20.
Phys Rev Lett ; 71(17): 2710-2713, 1993 Oct 25.
Article in English | MEDLINE | ID: mdl-10054756