1.
Neural Netw; 14(6-7): 825-34, 2001.
Article in English | MEDLINE | ID: mdl-11665774

ABSTRACT

Here, we develop and investigate a computational model of a network of cortical neurons on the basis of the biophysically well-constrained and tested two-compartment neurons developed by Pinsky and Rinzel [Pinsky, P. F., & Rinzel, J. (1994). Intrinsic and network rhythmogenesis in a reduced Traub model for CA3 neurons. Journal of Computational Neuroscience, 1, 39-60]. To study associative memory, we connect a pool of cells by a structured connectivity matrix. The connection weights are shaped by simple Hebbian coincidence learning using a set of spatially sparse patterns. We study the neuronal activity processes that follow external stimulation of a stored memory. In two series of simulation experiments, we explore the effect of two classes of external input: tonic and flashed stimulation. With tonic stimulation, the addressed memory is an attractor of the network dynamics. The memory is displayed rhythmically, coded by phase-locked bursts or regular spikes. The participating neurons have rhythmic activity in the gamma-frequency range (30-80 Hz). If the input is switched from one memory to another, the network activity can follow this change within one or two gamma cycles. Unlike similar models in the literature, we studied the range of high memory capacity (on the order of 0.1 bit/synapse), comparable to optimally tuned formal associative networks. We explored the robustness of efficient retrieval while varying the memory load, the excitation/inhibition parameters, and the background activity. A stimulation pulse applied to the same simulation network can push aside ongoing network activity and trigger a phase-locked association event within one gamma period. Unlike under tonic stimulation, the memories are not attractors. After one association process, the network activity moves on to other states. By applying pulses that address different memories in close succession, one can switch through the space of memory patterns. The readout speed can be increased up to the point where a different pattern is displayed in every gamma cycle. With pulsed stimulation, bursts become relevant for coding; their occurrence can be used to discriminate relevant processes from background activity.
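
The abstract does not spell out the storage rule beyond "simple Hebbian coincidence learning" on sparse patterns. The sketch below is a minimal binary (Willshaw-style) reading of that rule with one-step threshold retrieval on abstract 0/1 units, not the Pinsky-Rinzel spiking network of the paper; pool size, pattern sparsity, number of memories, cue degradation, and the retrieval threshold are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N, K, M = 1000, 50, 100      # pool size, active cells per pattern, stored memories

# Sparse binary memory patterns: each pattern activates K of the N cells.
patterns = np.zeros((M, N), dtype=int)
for p in patterns:
    p[rng.choice(N, K, replace=False)] = 1

# Hebbian coincidence learning: a synapse is switched on (clipped to 0/1)
# whenever its pre- and postsynaptic cells are coactive in some stored pattern.
W = (patterns.T @ patterns > 0).astype(int)
np.fill_diagonal(W, 0)

# Address memory 0 with a degraded cue (about half of its cells) and complete
# the pattern by a single thresholded update of the whole pool.
target = patterns[0]
cue = target * (rng.random(N) < 0.5)
dendritic_sum = W @ cue
recalled = (dendritic_sum >= 0.9 * cue.sum()).astype(int)

overlap = recalled @ target / target.sum()
spurious = recalled.sum() - recalled @ target
print(f"overlap with the addressed memory: {overlap:.2f}, spurious active cells: {spurious}")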


Subjects
Action Potentials/physiology , Cerebral Cortex/physiology , Memory/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Synaptic Transmission/physiology , Animals , Biological Clocks/physiology , Cortical Synchronization , Humans
2.
Neural Comput; 13(8): 1721-47, 2001 Aug.
Article in English | MEDLINE | ID: mdl-11506668

ABSTRACT

We present a general approximation method for the mathematical analysis of spatially localized steady-state solutions in nonlinear neural field models. These models comprise several layers of excitatory and inhibitory cells. Coupling kernels between and inside layers are assumed to be gaussian shaped. In response to spatially localized (i.e., tuned) inputs, such networks typically reveal stationary localized activity profiles in the different layers. Qualitative properties of these solutions, such as response amplitudes and tuning widths, are approximated for a whole class of nonlinear rate functions that obey a power law above some threshold and are zero below it. A special case of these functions is the semilinear function, which is commonly used in neural field models. The method is then applied to models for orientation tuning in cortical simple cells: first, to the one-layer model with "difference of gaussians" connectivity kernel developed by Carandini and Ringach (1997) as an abstraction of the biologically detailed simulations of Somers, Nelson, and Sur (1995); second, to a two-field model comprising excitatory and inhibitory cells in two separate layers. Under certain conditions, both models have the same steady states. Comparing simulations of the field models with results derived from the approximation method, we find that the approximation predicts the tuning behavior of the full model well. Moreover, explicit formulas for approximate amplitudes and tuning widths in response to changing input strength are given and checked numerically. Comparing the network behavior for different nonlinearities, we find that the only rate function (from the class of functions under study) that leads to constant tuning widths and a linear increase of firing rates with increasing input is the semilinear function. For other nonlinearities, the qualitative network response depends on whether the model neurons operate in a convex (e.g., x^2) or concave (e.g., sqrt(x)) regime of their rate function. In the first case, tuning gradually changes from input-driven at low input strength (broad tuning that depends strongly on the input, and roughly linear amplitudes as a function of input strength) to recurrently driven at moderate input strength (sharp tuning, supralinear increase of amplitudes with input strength). For concave rate functions, the network reveals stable hysteresis between a state at low firing rates and a tuned state at high rates. This means that the network can "memorize" tuning properties of a previously shown stimulus. Sigmoid rate functions can combine both effects. In contrast to the Carandini-Ringach model, the two-field model further reveals oscillations with typical frequencies in the beta and gamma range when the excitatory and inhibitory connections are relatively strong. This suggests a rhythmic modulation of tuning properties during cortical oscillations.
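
The abstract describes the model class but not the equations in closed form. The following is a minimal one-layer sketch in the spirit of the Carandini-Ringach ring model, tau du/dt = -u + W f(u) + I, with a difference-of-gaussians kernel and the threshold power-law rate function f(u) = max(u - theta, 0)^alpha. The coupling is kept deliberately weak (input-driven regime) so the Euler relaxation provably converges; all parameter values are illustrative, not the paper's.

import numpy as np

n = 180
theta = np.arange(n) - 90.0                        # preferred orientations, deg

def ring_gauss(sigma):
    """Row-normalized gaussian of circular orientation distance."""
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 180.0 - d)
    g = np.exp(-d**2 / (2.0 * sigma**2))
    return g / g.sum(axis=1, keepdims=True)

W = 0.5 * ring_gauss(7.5) - 0.4 * ring_gauss(60.0)  # difference-of-gaussians kernel

def rate(u, threshold=0.0, alpha=1.0):
    """Threshold power-law rate function, max(u - threshold, 0) ** alpha."""
    return np.maximum(u - threshold, 0.0) ** alpha

def half_width(r):
    """Half-width at half-height of a tuning curve on the ring, in degrees."""
    return np.ptp(theta[r >= 0.5 * r.max()]) / 2.0

# Tuned feedforward input centered on 0 degrees.
d0 = np.minimum(np.abs(theta), 180.0 - np.abs(theta))
stim = 10.0 * np.exp(-d0**2 / (2.0 * 25.0**2))

u, dt = np.zeros(n), 0.1
for _ in range(2000):                               # Euler relaxation to steady state
    u += dt * (-u + W @ rate(u) + stim)

print(f"input  half-width at half-height: {half_width(stim):.1f} deg")
print(f"output half-width at half-height: {half_width(rate(u)):.1f} deg")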


Subjects
Cerebral Cortex/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Orientation/physiology , Memory , Reaction Time , Space Perception
3.
Neural Comput; 13(1): 139-59, 2001 Jan.
Article in English | MEDLINE | ID: mdl-11177431

ABSTRACT

Receptive fields (RFs) in the visual cortex can change their size depending on the state of the individual. This reflects a visual resolution that changes with the demands on information processing during drowsiness. So far, however, the possible mechanisms that underlie these size changes have not been tested rigorously. It has only been suggested qualitatively that state-dependent lateral geniculate nucleus (LGN) firing patterns (burst versus tonic firing) are mainly responsible for the observed cortical receptive field restructuring. Here, we employ a neural field approach to describe the changes of cortical RF properties analytically. Expressions describing the spatiotemporal receptive fields are given for pure feedforward networks. The model predicts that visual latencies increase nonlinearly with the distance of the stimulus location from the RF center. RF restructuring effects are faithfully reproduced. Despite the changing RF sizes, the model demonstrates that the width of the spatial membrane potential profile (as measured by the width σ of a gaussian) remains constant in cortex. In contrast, it is shown for recurrent networks that both the RF width and the width of the membrane potential profile generically depend on time and can even increase if lateral cortical excitatory connections extend further than the fibers from LGN to cortex. In order to differentiate between a feedforward and a recurrent mechanism as the cause of the experimental RF changes, we fitted the data to the analytically derived point-spread functions. The resulting fits provide estimates for model parameters that are consistent with literature data and support the hypothesis that the observed RF sharpening is indeed driven mainly by input from the LGN, not by recurrent intracortical connections.
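
The abstract gives no equations, so the following toy (not the paper's field model) only illustrates two of the stated effects under explicit assumptions: a purely feedforward Gaussian membrane-potential profile of fixed width sigma, charged through a passive membrane with time constant tau, yields a measured (suprathreshold) RF radius that changes with drive strength even though sigma stays constant, and a spike latency that grows nonlinearly with distance from the RF center. All numbers are made up.

import numpy as np

tau = 10.0        # membrane time constant in ms (illustrative)
sigma = 0.5       # fixed width of the Gaussian membrane-potential profile, deg
v_thresh = 1.0    # spike threshold (arbitrary units)

d = np.linspace(0.0, 2.0, 201)             # stimulus distance from the RF center, deg

def latency(drive):
    """Time for V(t, d) = drive * exp(-d^2 / (2 sigma^2)) * (1 - exp(-t / tau))
    to reach threshold; inf where the steady-state potential stays subthreshold."""
    v_inf = drive * np.exp(-d**2 / (2.0 * sigma**2))
    with np.errstate(divide="ignore", invalid="ignore"):
        t = -tau * np.log(1.0 - v_thresh / v_inf)
    return np.where(v_inf > v_thresh, t, np.inf)

for label, drive in [("weak (tonic-like) drive  ", 2.0), ("strong (burst-like) drive", 6.0)]:
    lat = latency(drive)
    rf_radius = d[np.isfinite(lat)].max()  # largest distance that still evokes a spike
    print(f"{label}: measured RF radius {rf_radius:.2f} deg, "
          f"latency {lat[0]:.1f} ms at the center, {lat[50]:.1f} ms at {d[50]:.1f} deg")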


Subjects
Models, Neurological , Visual Cortex/physiology , Visual Perception/physiology , Electroencephalography , Membrane Potentials/physiology , Nonlinear Dynamics , Reaction Time/physiology
4.
Network; 11(1): 41-61, 2000 Feb.
Article in English | MEDLINE | ID: mdl-10735528

ABSTRACT

Synchronization of neural signals has been proposed as a temporal coding scheme representing cooperative computation in distributed cortical networks. Previous theoretical studies in that direction have mainly focused on the synchronization of coupled oscillatory subsystems and neglected more complex dynamical modes that already exist at the single-unit level. In this paper we study the parametrized time-discrete dynamics of two coupled recurrent networks of graded neurons. Conditions for the existence of partially synchronized dynamics of these systems are derived, referring to a situation where only subsets of neurons in each sub-network are synchronous. The coupled networks can have different architectures and even a different number of neurons. Periodic as well as quasiperiodic and chaotic attractors constrained to a manifold M of synchronized components are observed. Examples are discussed for coupled 3-neuron networks having different architectures, and for coupled 2-neuron and 3-neuron networks. Partial synchronization of different degrees is demonstrated by numerical results for selected sets of parameters. In conclusion, the results show that synchronization phenomena far beyond completely synchronized oscillations can occur even in simple coupled networks. The type of synchronization depends in an intricate way on stimuli, history, and connectivity, as well as on other parameters of the network. Specific inputs can furthermore switch the system between different operational modes in a complex way, suggesting a similarly rich spatio-temporal behaviour in real neural systems.
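
The abstract does not give the update rule or the weights. The script below simulates one plausible reading of the setup: two time-discrete tanh ("graded") networks of different sizes with a diffusive coupling between one unit of each, and a measurement of how close the coupled pair stays after a transient. Whether this particular pair synchronizes, and whether the remaining units stay unsynchronized, depends on the (here arbitrary) weights, gain, and coupling strength, which is exactly the parameter dependence the paper analyzes.

import numpy as np

rng = np.random.default_rng(1)

def step(x, W, gain, ext):
    """One time-discrete update of a graded-response (tanh) network."""
    return np.tanh(gain * (W @ x) + ext)

# Two small recurrent networks with different sizes and architectures.
# Weights, gain, and coupling strength are arbitrary illustrative choices.
WA = np.array([[ 0.0,  1.2, -0.8],
               [-1.0,  0.0,  0.9],
               [ 0.7, -1.1,  0.0]])
WB = np.array([[ 0.0,  1.3],
               [-0.9,  0.0]])
gain, c = 2.0, 0.8            # neuronal gain and cross-network coupling strength

a = rng.uniform(-1.0, 1.0, 3)
b = rng.uniform(-1.0, 1.0, 2)

diffs = []
for t in range(5000):
    # Diffusive coupling between unit 0 of network A and unit 0 of network B.
    ext_a = np.array([c * (b[0] - a[0]), 0.0, 0.0])
    ext_b = np.array([c * (a[0] - b[0]), 0.0])
    a, b = step(a, WA, gain, ext_a), step(b, WB, gain, ext_b)
    if t >= 4000:             # measure synchrony only after a transient
        diffs.append(abs(a[0] - b[0]))

print(f"mean |a0 - b0| after the transient: {np.mean(diffs):.3e}")
print("final state of A:", np.round(a, 3), " final state of B:", np.round(b, 3))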


Subjects
Neural Networks, Computer , Neurons/physiology , Nonlinear Dynamics
5.
J Physiol Paris; 94(5-6): 473-88, 2000.
Article in English | MEDLINE | ID: mdl-11165914

ABSTRACT

The interplay between modelling and experimental studies can support the exploration of the function of neuronal circuits in the cortex. We exemplify such an approach with a study on the role of spike timing and gamma oscillations in associative memory in strongly connected circuits of cortical neurones. It is demonstrated how associative memory studies at different levels of abstraction can specify the functionality to be expected in real cortical neuronal circuits. In our model, overlapping random configurations of sparse cell populations correspond to memory items that are stored by simple Hebbian coincidence learning. This associative memory task is implemented with the biophysically well-tested compartmental neurones developed by Pinsky and Rinzel. We ran simulation experiments to study memory recall in two network architectures: one interconnected pool of cells, and two reciprocally connected pools. When a memory is recalled by stimulating a spatially overlapping set of cells, the completed pattern is coded by an event of synchronized single spikes occurring after 25-60 ms. These fast associations are performed even at a memory load corresponding to the memory capacity of optimally tuned formal associative networks (>0.1 bit/synapse). With tonic stimulation or feedback loops in the network, the neurones fire periodically in the gamma-frequency range (20-80 Hz). With fast-changing inputs, memory recall can be switched between items within a single gamma cycle. Thus, oscillation is not a primary coding feature necessary for associative memory. However, it accompanies reverberatory feedback, which provides an improved iterative memory recall that is completed after a few gamma cycles (60-260 ms). In the bidirectional architecture, the reverberations are not expressed as a rigid phase locking between the pools. For small stimulation sets, bursting occurred in these cells, acting as a supportive mechanism for associative memory.
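
The figure quoted above (>0.1 bit/synapse) can be put into perspective with a generic back-of-envelope load estimate for clipped Hebbian storage of sparse patterns. The sketch below is such an estimate under illustrative numbers, not the paper's measurement; it ignores retrieval errors and therefore overstates the usable capacity.

from math import comb, log2

# Each sparse pattern with K of N cells active carries about log2(C(N, K))
# bits of pattern information, and a fully connected pool of N cells has
# N^2 synapses. Numbers below are illustrative assumptions.
N, K = 1000, 50
bits_per_pattern = log2(comb(N, K))        # bits to specify which K cells are active

for M in (100, 250, 500):                  # number of stored memories
    load = M * bits_per_pattern / N**2     # stored bits per synapse
    p1 = 1.0 - (1.0 - (K / N) ** 2) ** M   # fraction of potentiated (0/1) synapses
    print(f"M = {M:3d}: {load:.3f} bit/synapse, {p1:.2f} of the synapses potentiated")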


Subjects
Cerebral Cortex/physiology , Memory/physiology , Models, Neurological , Neurons/physiology , Animals , Association Learning/physiology , Humans , Nerve Net/physiology , Reaction Time
...