Results 1 - 3 of 3
1.
J Comput Neurosci ; 34(2): 273-83, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22907135

ABSTRACT

Neural populations across cortical layers perform different computational tasks. However, it is not known whether information in different layers is encoded using a common neural code or whether it depends on the specific layer. Here we studied the laminar distribution of information in a large-scale computational model of cat primary visual cortex. We analyzed the amount of information about the input stimulus conveyed by the different representations of the cortical responses. In particular, we compared the information encoded in four possible neural codes: (1) the information carried by the firing rate of individual neurons; (2) the information carried by spike patterns within a time window; (3) the rate-and-phase information carried by the firing rate labelled by the phase of the Local Field Potentials (LFP); (4) the pattern-and-phase information carried by the spike patterns tagged with the LFP phase. We found that there is substantially more information in the rate-and-phase code compared with the firing rate alone for low LFP frequency bands (less than 30 Hz). When comparing how information is encoded across layers, we found that the extra information contained in a rate-and-phase code may reach 90% in Layer 4, while in other layers it reaches only 60%, compared to the information carried by the firing rate alone. These results suggest that information processing in primary sensory cortices could rely on different coding strategies across different layers.
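The rate-versus-rate-and-phase comparison above can be illustrated with a minimal sketch: estimate the mutual information between a discrete stimulus and a response code, first from spike counts alone, then from counts tagged with an LFP phase quadrant. The data here are entirely synthetic toy assumptions (four stimuli, a phase that tracks the stimulus 75% of the time, stimulus-independent Poisson counts), not the paper's model; the point is only that tagging responses with phase can expose information the rate alone misses.

```python
import numpy as np

def mutual_information(s, r):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples."""
    s_vals, s_idx = np.unique(s, return_inverse=True)
    r_vals, r_idx = np.unique(r, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    np.add.at(joint, (s_idx, r_idx), 1)      # joint histogram of (s, r)
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)    # marginal P(s)
    pr = joint.sum(axis=0, keepdims=True)    # marginal P(r)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

rng = np.random.default_rng(0)
n = 20000
stim = rng.integers(0, 4, size=n)            # four hypothetical stimuli
# Toy assumption: the LFP phase quadrant matches the stimulus 75% of the
# time, while the spike count itself is stimulus-independent.
phase = np.where(rng.random(n) < 0.75, stim, rng.integers(0, 4, size=n))
rate = rng.poisson(3.0, size=n)

i_rate = mutual_information(stim, rate)                    # rate code
i_rate_phase = mutual_information(stim, rate * 4 + phase)  # rate-and-phase code
```

With this construction the rate code carries essentially no stimulus information, while the phase-tagged code recovers roughly one bit; the plug-in estimator is biased upward for small samples, which is why real analyses use bias corrections.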


Subjects
Computer Simulation; Information Theory; Models, Neurological; Neurons/physiology; Visual Cortex/cytology; Visual Pathways/physiology; Action Potentials/physiology; Animals; Cats; Photic Stimulation; Retina/cytology; Retina/physiology
2.
Adv Exp Med Biol ; 718: 33-9, 2011.
Article in English | MEDLINE | ID: mdl-21744208

ABSTRACT

In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
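The pipeline described above (threshold the weights to a binary adjacency matrix, then compute clustering coefficient, characteristic path length, and a small-world index) can be sketched with plain numpy. This is a simplified undirected version, not the paper's directed analysis; the Humphries-Gurney sigma with Erdos-Renyi reference values is one common choice of small-world index, assumed here for illustration.

```python
import numpy as np
from collections import deque

def threshold_adjacency(W, theta):
    """Binarize a synaptic weight matrix (symmetrized here for simplicity;
    the study itself uses a directed connection matrix)."""
    A = W > theta
    A = A | A.T
    np.fill_diagonal(A, False)
    return A

def clustering_coefficient(A):
    """Mean local clustering coefficient of an undirected binary graph."""
    coeffs = []
    for i in range(len(A)):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = A[np.ix_(nbrs, nbrs)].sum() / 2      # edges among neighbours
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs))

def characteristic_path_length(A):
    """Mean shortest-path length over reachable pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in range(len(A)):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def small_world_index(A):
    """Humphries-Gurney sigma = (C/C_rand) / (L/L_rand), with Erdos-Renyi
    expectations C_rand = k/n and L_rand = ln(n)/ln(k)."""
    n = len(A)
    k = A.sum() / n                                  # mean degree
    C = clustering_coefficient(A)
    L = characteristic_path_length(A)
    return (C / (k / n)) / (L / (np.log(n) / np.log(k)))

# Example: a ring lattice, each node linked to its 4 nearest neighbours --
# the classic high-clustering, long-path starting point.
n = 30
A = np.zeros((n, n), dtype=bool)
for i in range(n):
    for d in (1, 2):
        A[i, (i + d) % n] = A[i, (i - d) % n] = True
sigma = small_world_index(A)
```

In a simulation one would apply `threshold_adjacency` to the excitatory weight matrix at each step and track how these three quantities evolve under STDP.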


Subjects
Action Potentials; Models, Theoretical; Neuronal Plasticity; Neurons/physiology
3.
Neural Comput ; 18(6): 1349-79, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16764507

ABSTRACT

Cortical sensory neurons are known to be highly variable, in the sense that responses evoked by identical stimuli often change dramatically from trial to trial. The origin of this variability is uncertain, but it is usually interpreted as detrimental noise that reduces the computational accuracy of neural circuits. Here we investigate the possibility that such response variability might in fact be beneficial, because it may partially compensate for a decrease in accuracy due to stochastic changes in the synaptic strengths of a network. We study the interplay between two kinds of noise, response (or neuronal) noise and synaptic noise, by analyzing their joint influence on the accuracy of neural networks trained to perform various tasks. We find an interesting, generic interaction: when fluctuations in the synaptic connections are proportional to their strengths (multiplicative noise), a certain amount of response noise in the input neurons can significantly improve network performance, compared to the same network without response noise. Performance is enhanced because response noise and multiplicative synaptic noise are in some ways equivalent. So if the algorithm used to find the optimal synaptic weights can take into account the variability of the model neurons, it can also take into account the variability of the synapses. Thus, the connection patterns generated with response noise are typically more resistant to synaptic degradation than those obtained without response noise. As a consequence of this interplay, if multiplicative synaptic noise is present, it is better to have response noise in the network than not to have it. 
These results are demonstrated analytically for the most basic network consisting of two input neurons and one output neuron performing a simple classification task, but computer simulations show that the phenomenon persists in a wide range of architectures, including recurrent (attractor) networks and sensorimotor networks that perform coordinate transformations. The results suggest that response variability could play an important dynamic role in networks that continuously learn.
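The equivalence claimed above can be made concrete in the two-input, one-output linear case. The sketch below uses made-up weights and data, not the paper's networks: first it shows that multiplying each weight by (1 + noise) produces exactly the same output as multiplying the corresponding input response, and then that averaging a squared-error objective over input noise acts like a ridge penalty, shrinking the weights and hence the output variance induced by multiplicative synaptic noise (which scales with the sum of w_j^2 x_j^2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal case from the analysis: two input neurons, one output neuron.
w = np.array([1.5, -0.7])                 # hypothetical synaptic weights
x = np.array([0.8, 1.2])                  # one response of the input neurons
xi = 0.2 * rng.normal(size=(10000, 2))    # 20% multiplicative fluctuations

# Multiplicative synaptic noise perturbs each weight in proportion to its
# strength; response noise of the same relative size perturbs each input.
out_synaptic_noise = ((1 + xi) * w) @ x
out_response_noise = ((1 + xi) * x) @ w
# Both equal sum_j w_j * x_j * (1 + xi_j): at the output the two noise
# sources are interchangeable.

# Learning consequence: the expected squared error under additive input
# noise of variance sigma^2 equals the clean error plus sigma^2 * ||w||^2,
# so noise-aware training is ridge regression and yields smaller weights.
n = 500
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=n)
w_clean = np.linalg.lstsq(X, y, rcond=None)[0]
sigma = 0.3                               # assumed response-noise std
w_noisy = np.linalg.solve(X.T @ X + n * sigma**2 * np.eye(2), X.T @ y)
```

The shrunken `w_noisy` pays a small price on clean data but its output fluctuates less when the synapses themselves are noisy, which is the robustness effect the abstract describes.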


Subjects
Models, Neurological; Nerve Net/physiology; Neurons, Afferent/physiology; Synapses/physiology; Algorithms; Animals; Artifacts; Cerebral Cortex/cytology; Cerebral Cortex/physiology; Humans; Motor Neurons/physiology