Results 1 - 8 of 8
1.
IEEE Trans Haptics ; 5(3): 196-207, 2012.
Article in English | MEDLINE | ID: mdl-26964106

ABSTRACT

In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are indistinguishable from those of a human. We developed an analogous Turing-like handshake test to determine whether a machine can produce similarly indistinguishable movements. The test is administered through a telerobotic system in which an interrogator holds a robotic stylus and interacts with another party, either artificial or a human whose movements carry varying levels of noise. The interrogator is asked which party seems more human. Here, we compare the human-likeness of three handshake models: (1) a Tit-for-Tat model, (2) a λ model, and (3) a Machine Learning model. The Tit-for-Tat and Machine Learning models generated handshakes that were perceived as the most human-like of the three models tested. Combining the best aspects of each of the three models into a single robotic handshake algorithm might advance our understanding of how the nervous system controls sensorimotor interactions and further improve the human-likeness of robotic handshakes.
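The Tit-for-Tat idea can be illustrated with a minimal sketch: the robot replays a smoothed copy of the partner's movement trace from the previous handshake cycle. All class and parameter names here are illustrative, not taken from the paper, and the smoothing choice is an assumption.

```python
# Sketch of a Tit-for-Tat handshake controller: the robot responds with a
# smoothed replay of the partner's trace from the previous shake cycle.
# Names and parameters are illustrative, not from the original paper.

def smooth(signal, alpha=0.3):
    """Exponentially smooth a 1-D movement trace."""
    out, s = [], signal[0]
    for x in signal:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

class TitForTatHandshake:
    def __init__(self):
        self.previous_cycle = None  # partner's last recorded trace

    def respond(self, partner_trace):
        # First encounter: no history yet, so remain passive (zero motion).
        if self.previous_cycle is None:
            response = [0.0] * len(partner_trace)
        else:
            response = smooth(self.previous_cycle)
        self.previous_cycle = list(partner_trace)
        return response
```

In a real telerobotic setup the traces would be force or position samples from the stylus; here they are plain lists.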

2.
Neural Comput ; 13(12): 2823-49, 2001 Dec.
Article in English | MEDLINE | ID: mdl-11705412

ABSTRACT

Neurons in mammalian cerebral cortex combine specific responses to some stimulus features with invariant responses to other stimulus features. For example, in primary visual cortex, complex cells code for the orientation of a contour but ignore its position to a certain degree. In higher areas, such as the inferotemporal cortex, translation-invariant, rotation-invariant, and even viewpoint-invariant responses can be observed. Such properties are of obvious interest to artificial systems performing tasks such as pattern recognition. How such response properties develop in biological systems remains to be resolved. Here we present an unsupervised learning rule that addresses this problem. It is based on a neuron model with two sites of synaptic integration, allowing qualitatively different effects of input to the basal and apical dendritic trees, respectively. Without supervision, the system learns to extract invariance properties using temporal or spatial continuity of stimuli. Furthermore, top-down information can be smoothly integrated in the same framework. Thus, this model provides a physiological implementation for approaches to unsupervised learning of invariant response properties.


Subjects
Learning/physiology , Models, Neurological , Neurons/physiology , Visual Cortex/physiology , Algorithms , Animals , Calcium Signaling/physiology , Dendrites/physiology , Dendrites/ultrastructure , Mammals/physiology , Neocortex/ultrastructure , Pyramidal Cells/physiology , Pyramidal Cells/ultrastructure , Synapses/physiology , Temporal Lobe/physiology , Visual Cortex/cytology
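The temporal-continuity mechanism in the abstract above can be illustrated with a trace learning rule, in the spirit of Földiák's classic trace rule rather than the paper's specific two-integration-site model: the Hebbian update uses a slowly decaying trace of postsynaptic activity, so inputs that follow each other in time (e.g. transformed views of one object) are bound onto the same output unit. All names and constants are illustrative.

```python
import numpy as np

# Illustrative trace learning rule for temporal-continuity invariance.
# A sketch only: the paper's actual rule involves two dendritic sites.

def trace_learning(stimulus_sequence, n_out, eta=0.05, delta=0.2, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(stimulus_sequence, dtype=float)
    W = rng.normal(scale=0.1, size=(n_out, X.shape[1]))
    trace = np.zeros(n_out)
    for x in X:
        y = W @ x                                  # postsynaptic activity
        trace = (1 - delta) * trace + delta * y    # slow activity trace
        W += eta * np.outer(trace, x)              # Hebb with traced activity
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W
```

Because the trace outlives a single stimulus frame, features presented in close temporal succession strengthen the same rows of `W`, which is the essence of invariance learning from temporal continuity.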
3.
J Neurosci Methods ; 110(1-2): 103-11, 2001 Sep 30.
Article in English | MEDLINE | ID: mdl-11564530

ABSTRACT

The objective of visual systems neuroscience has shifted over the past few years from determining the receptive fields of cells toward understanding higher-level cognition in awake animals viewing natural stimuli. In experiments with awake animals it is important to control the relevant aspects of behavior; most important for vision science is control of the direction of gaze. Here we present dual-Purkinje eye tracking in cats, which, as a non-contact method, brings a number of advantages. Together with the presented methods for calibration and for synchronization with off-the-shelf video presentation hardware, this method allows high-precision experiments to be performed on cats freely viewing videos of natural scenes.


Subjects
Cats/physiology , Electronic Data Processing/methods , Eye Movements/physiology , Image Processing, Computer-Assisted/methods , Neurophysiology/methods , Animals , Electrodes, Implanted , Neurophysiology/instrumentation , Photic Stimulation , Video Recording/methods
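The calibration step mentioned above can be sketched generically: fit an affine map from raw tracker coordinates to screen coordinates using a set of known fixation targets, by least squares. The paper's actual calibration procedure may differ; this shows only the standard approach, and the function names are illustrative.

```python
import numpy as np

# Generic eye-tracker calibration sketch: least-squares affine fit from
# raw tracker output to screen coordinates, given known fixation targets.

def fit_calibration(raw_points, screen_points):
    """Return a 2x3 affine matrix A such that screen ~= A @ [x, y, 1]."""
    raw = np.asarray(raw_points, dtype=float)
    scr = np.asarray(screen_points, dtype=float)
    design = np.hstack([raw, np.ones((len(raw), 1))])  # homogeneous coords
    coeffs, *_ = np.linalg.lstsq(design, scr, rcond=None)
    return coeffs.T  # shape (2, 3)

def apply_calibration(A, raw_xy):
    x, y = raw_xy
    return A @ np.array([x, y, 1.0])
```

With at least three non-collinear targets the affine map is fully determined; extra targets average out measurement noise.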
4.
J Comput Neurosci ; 11(3): 207-15, 2001.
Article in English | MEDLINE | ID: mdl-11796938

ABSTRACT

Many learning rules for neural networks derive from abstract objective functions. The weights in those networks are typically optimized by gradient ascent on the objective function. In such networks each neuron needs to store two variables. One variable, called activity, contains the bottom-up, sensory-fugal information involved in the core signal processing. The other variable typically describes the derivative of the objective function with respect to the cell's activity and is used exclusively for learning. This variable allows the derivative of the objective function with respect to each weight, and thus the weight update, to be calculated. Although this approach is widely used, the mapping of these two variables onto physiology is unclear, and such learning algorithms are often considered biologically unrealistic. However, recent research on the properties of cortical pyramidal neurons shows that these cells have at least two sites of synaptic integration, the basal and the apical dendrite, and are thus appropriately described by at least two variables. Here we discuss whether these results could constitute a physiological basis for the described abstract learning rules. As examples we demonstrate an implementation of the backpropagation-of-error algorithm and a specific self-supervised learning algorithm using these principles. Thus, compared with standard one-integration-site neurons, it is possible to incorporate interesting physiologically inspired properties into neural networks at a modest increase in complexity.


Subjects
Cerebral Cortex/physiology , Dendrites/physiology , Learning/physiology , Nerve Net/physiology , Pyramidal Cells/physiology , Synapses/physiology , Synaptic Transmission/physiology , Action Potentials/physiology , Algorithms , Animals , Calcium Signaling/physiology , Humans , Interneurons/physiology , Models, Neurological , Neural Inhibition/physiology , Neural Networks, Computer , Neural Pathways/physiology , Neuronal Plasticity/physiology , Stochastic Processes
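The two-variable mapping described above can be sketched as a tiny network layer in which each unit stores exactly the two quantities the abstract names: `basal` for the feedforward activity and `apical` for the error signal fed back from above. This is ordinary backpropagation rearranged to make the two-site reading explicit; the class layout and learning rate are illustrative, not the paper's exact model.

```python
import numpy as np

# Sketch of backprop with two-variable ("two-site") neurons:
# `basal` holds activity, `apical` holds the error used only for learning.

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

class TwoSiteLayer:
    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.5, size=(n_out, n_in))
        self.basal = None   # activity variable (core signal processing)
        self.apical = None  # derivative variable (used only for learning)

    def forward(self, x):
        self.x = np.asarray(x, dtype=float)
        self.basal = sigmoid(self.W @ self.x)
        return self.basal

    def backward(self, error_from_above, eta=0.5):
        # Apical site: error times local slope of the activation function.
        self.apical = error_from_above * self.basal * (1 - self.basal)
        error_below = self.W.T @ self.apical  # propagate before updating
        self.W += eta * np.outer(self.apical, self.x)
        return error_below
```

Stacking two such layers and feeding the output error into `backward` recovers standard gradient learning while every cell keeps only its own two local variables.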
5.
Neural Netw ; 13(1): 1-9, 2000 Jan.
Article in English | MEDLINE | ID: mdl-10935452

ABSTRACT

Interest in neuronal networks stems in good part from the option not to construct them but to train them. The mechanisms governing synaptic modification during such training are assumed to depend on signals locally available at the synapses. In contrast, the performance of a network is suitably measured on a global scale. Here we propose a learning rule that addresses this conflict. It is inspired by recent physiological experiments and exploits the interaction of inhibitory input and backpropagating action potentials in pyramidal neurons. This mechanism makes information on the global scale available as a local signal. As a result, several desirable features can be combined: the learning rule allows fast synaptic modifications, approaching one-shot learning, yet leads to stable representations during ongoing learning. Furthermore, the response properties of the neurons are not globally correlated but cover the whole stimulus space.


Subjects
Learning/physiology , Nerve Net/physiology , Neurons/physiology , Pyramidal Cells/physiology , Synapses/physiology , Animals , Humans
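One way to picture the mechanism above: a global signal (here, the maximum network response, standing in for pooled inhibition) vetoes plasticity once some cell already responds strongly, so a novel stimulus is captured almost in one shot by an uncommitted cell, while learned representations stay stable. This is an illustrative mechanism only, not the paper's exact rule.

```python
import numpy as np

# Sketch of fast, globally gated learning: a network-wide signal vetoes
# plasticity for familiar stimuli, approximating one-shot learning for
# novel ones. Names, threshold, and gating rule are illustrative.

class GatedNetwork:
    def __init__(self, n_cells, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(n_cells, n_inputs))

    def present(self, x, eta=1.0, threshold=0.5):
        x = np.asarray(x, dtype=float)
        y = self.W @ x
        global_signal = y.max()          # global scale, made locally available
        if global_signal < threshold:    # novel stimulus: plasticity allowed
            winner = int(np.argmax(y))
            # near one-shot move of the winner's weights toward the stimulus
            self.W[winner] += eta * (x - self.W[winner])
        return y
```

After a single exposure the stimulus is recognized, and the veto then keeps its representation stable under repeated presentation.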
6.
J Comput Neurosci ; 8(2): 161-73, 2000.
Article in English | MEDLINE | ID: mdl-10798600

ABSTRACT

The classical view of cortical information processing is that of a bottom-up process in a feedforward hierarchy. However, psychophysical, anatomical, and physiological evidence suggests that top-down effects play a crucial role in the processing of input stimuli, yet little is known about the neural mechanisms underlying these effects. Here we investigate a physiologically inspired model of two reciprocally connected cortical areas. Each area receives bottom-up as well as top-down information, which is integrated by a mechanism that exploits recent findings on somato-dendritic interactions. (1) This results in a burst signal that is robust to noise in the bottom-up signals. (2) Investigating the influence of additional top-down information demonstrates priming-like effects on the processing of bottom-up input. (3) In accordance with recent physiological findings, interareal coupling in low-frequency ranges is characteristically enhanced by top-down mechanisms. The proposed scheme combines a qualitative influence of top-down signals on the temporal dynamics of neuronal activity with a limited effect on the mean firing rate of the targeted neurons. Because it accounts for the system's properties at the cellular level, several experimentally testable predictions can be derived.


Subjects
Cerebral Cortex/physiology , Dendrites/physiology , Nerve Net/physiology , Sensation/physiology , Action Potentials/physiology , Animals , Cerebral Cortex/cytology , Dendrites/ultrastructure , Humans , Information Theory , Models, Neurological , Signal Transduction/physiology , Synaptic Transmission/physiology , Time Factors
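The somato-dendritic interaction underlying the burst signal can be caricatured as coincidence detection: bottom-up (basal) drive alone produces regular spiking, but coincident top-down (apical) input switches the cell into burst firing, in the spirit of BAC-firing results on pyramidal neurons. Thresholds and gains below are illustrative, not fitted to the model in the paper.

```python
# Sketch of a somato-dendritic coincidence detector: single spikes from
# basal drive alone, bursts when apical input coincides with a somatic
# spike. All thresholds and gains are illustrative.

def cell_output(basal, apical, spike_threshold=1.0, burst_gain=3.0):
    """Return firing rate: zero, regular-spiking, or burst-level."""
    if basal < spike_threshold:
        return 0.0              # subthreshold: no somatic spike at all
    rate = basal                # suprathreshold: regular spiking
    if apical > 0.5:            # coincident top-down input detected
        rate *= burst_gain      # BAC-like burst firing
    return rate
```

Because a burst requires both inputs to be present, it is a more noise-robust signature of a bottom-up/top-down match than either input alone.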
7.
Network ; 11(1): 25-39, 2000 Feb.
Article in English | MEDLINE | ID: mdl-10735527

ABSTRACT

Since the classical work of D O Hebb (1949 The Organization of Behaviour (New York: Wiley)) it has been assumed that synaptic plasticity depends solely on the activity of the pre- and postsynaptic cells, so that synapses influence the plasticity of other synapses exclusively via the postsynaptic activity. This confounds effects on synaptic plasticity with effects on neuronal activation and thus makes it difficult to implement networks that optimize global measures of performance. Exploring solutions to this problem, inspired by recent research on the properties of apical dendrites, we examine a network of neurons with two sites of synaptic integration. These communicate in such a way that one set of synapses mainly influences the neurons' activity while the other set gates synaptic plasticity. Analysing the system with a constant set of parameters reveals: (1) The afferents that gate plasticity act as supervisors, individual to every cell. (2) While the neurons acquire specific receptive fields, the net activity remains constant for different stimuli. This ensures that all stimuli are represented and thus contributes to information maximization. (3) Mechanisms for the maximization of coherent information can easily be implemented: neurons with non-overlapping receptive fields learn to fire in a correlated manner and preferentially transmit information that is correlated over space. (4) We demonstrate how a new measure of performance can be implemented: cells learn to represent only the part of the input that is relevant to processing at higher stages. This criterion is termed 'relevant infomax'.


Subjects
Learning/physiology , Neural Networks, Computer , Pyramidal Cells/physiology , Synapses/physiology , Cell Communication/physiology , Cerebral Cortex/cytology , Cerebral Cortex/physiology
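The separation described above, one input set driving activity and a second set gating plasticity, can be written as a one-line modification of the Hebb rule: the weight change of the driving synapses is multiplied by a per-cell gate signal delivered through the second site. The function below is an illustrative sketch of that idea, not the paper's exact update.

```python
import numpy as np

# Sketch of a two-site rule: basal (driving) synapses set activity; an
# apical gate signal, individual to each cell, switches their plasticity
# on or off like a supervisor. The exact rule here is illustrative.

def gated_update(W_drive, x, gate, eta=0.1):
    """One plasticity step: Hebbian change on driving weights, but only
    for cells whose apical gate signal is on."""
    x = np.asarray(x, dtype=float)
    y = W_drive @ x                      # activity comes from basal drive only
    dW = eta * np.outer(gate * y, x)     # plasticity gated cell by cell
    return W_drive + dW, y
```

Because the gate enters the weight change but not the activity, effects on plasticity and on activation are no longer confounded, which is exactly the problem the abstract raises.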
8.
J Physiol Paris ; 94(5-6): 539-48, 2000.
Article in English | MEDLINE | ID: mdl-11165918

ABSTRACT

For biological realism, models of learning in neuronal networks often assume that synaptic plasticity depends solely on locally available signals, in particular only on the activity of the pre- and postsynaptic cells. As a consequence, synapses influence the plasticity of other synapses exclusively via the postsynaptic activity. Inspired by recent research on the properties of apical dendrites, it has been suggested that a second integration site in the apical dendrite may mediate specific global information. Here we explore this issue for the example of learning invariant responses, examining a network of spiking neurones with two sites of synaptic integration. We demonstrate that results obtained in networks of units with continuous outputs transfer to the more realistic neuronal model. This allows a number of more specific experimental predictions and is a necessary step toward a unified description of learning rules that exploit the timing of action potentials.


Subjects
Brain/physiology , Learning/physiology , Models, Neurological , Neurons/physiology , Action Potentials/physiology , Animals , Nerve Net/physiology , Neuronal Plasticity/physiology , Reaction Time , Synapses/physiology
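The step from continuous-output units to spiking neurones can be illustrated with the simplest spiking model, a leaky integrate-and-fire soma driven by basal current. The paper's two-site spiking model is richer; this sketch only shows the basic spike-generation machinery that replaces a continuous output, with illustrative parameters.

```python
# Minimal leaky integrate-and-fire sketch of the spiking analogue:
# the soma integrates basal input current and emits spikes, replacing
# the continuous output of rate-based units. Parameters are illustrative.

def simulate_lif(basal_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Return the time indices at which the soma spikes."""
    v, spikes = 0.0, []
    for t, i_basal in enumerate(basal_current):
        v += dt / tau * (-v + i_basal)   # leaky integration toward the input
        if v >= v_thresh:
            spikes.append(t)             # spike and reset
            v = v_reset
    return spikes
```

With a constant suprathreshold input the model fires at regular intervals; the spike times are what a timing-based learning rule would operate on.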