Results 1 - 4 of 4
1.
Top Cogn Sci; 9(2): 343-373, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28176449

ABSTRACT

In line with Allen Newell's challenge to develop complete cognitive architectures, and motivated by a recent proposal for a unifying subsymbolic computational theory of cognition, we introduce the cognitive control architecture SEMLINCS. SEMLINCS models the development of an embodied cognitive agent that learns discrete production-rule-like structures from its own, autonomously gathered, continuous sensorimotor experiences. Moreover, the agent uses the developing knowledge to plan and control environmental interactions in a versatile, goal-directed, and self-motivated manner. Thus, in contrast to several well-known symbolic cognitive architectures, SEMLINCS is not provided with production rules and the involved symbols, but learns them. In this paper, the implemented SEMLINCS architecture drives the learning and self-motivated, autonomous behavioral control of the game character Mario in a clone of the computer game Super Mario Bros. Our evaluations highlight the successful development of behavioral versatility as well as the learning of suitable production rules and the involved symbols from sensorimotor experiences. Moreover, knowledge- and motivation-dependent individualizations of the agents' behavioral tendencies are shown. Finally, interaction sequences can be planned on the sensorimotor-grounded production rule level. Current limitations point directly toward several further enhancements, which may be integrated into SEMLINCS in the near future. Overall, SEMLINCS may be viewed as an architecture that allows the functional and computational modeling of embodied cognitive development, with its current main focus on the development of production rules from sensorimotor experiences.


Subjects
Cognition, Learning, Humans, Motivation
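
As a rough sketch of the kind of structure SEMLINCS acquires, the following Python snippet learns condition-action-effect rules from a stream of discretized interaction events by counting which effect reliably follows each condition-action pair. The event symbols, thresholds, and function names are hypothetical illustrations, not taken from the actual SEMLINCS implementation.

    # Hypothetical sketch: extracting production-rule-like structures from
    # discretized sensorimotor episodes; not the SEMLINCS implementation.
    from collections import Counter, defaultdict

    def learn_rules(episodes, min_support=5, min_confidence=0.8):
        """episodes: (condition, action, effect) symbol triples obtained by
        discretizing continuous sensorimotor experience."""
        stats = defaultdict(Counter)   # (condition, action) -> effect counts
        for condition, action, effect in episodes:
            stats[(condition, action)][effect] += 1
        rules = []
        for (condition, action), effects in stats.items():
            total = sum(effects.values())
            effect, count = effects.most_common(1)[0]
            if total >= min_support and count / total >= min_confidence:
                rules.append((condition, action, effect, count / total))
        return rules

    # Mario-like interaction events: jumping from the ground usually succeeds
    episodes = [("on_ground", "jump", "airborne")] * 9 \
             + [("on_ground", "jump", "blocked")]
    print(learn_rules(episodes))   # [('on_ground', 'jump', 'airborne', 0.9)]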
2.
Front Comput Neurosci; 7: 148, 2013.
Article in English | MEDLINE | ID: mdl-24191151

ABSTRACT

This paper addresses, from a modeling perspective, the question of how the brain maintains a probabilistic body state estimate over time. The neural Modular Modality Frame (nMMF) model simulates such a body state estimation process by continuously integrating redundant, multimodal body state information sources. The body state estimate itself is distributed over separate, but bidirectionally interacting, modules. nMMF compares the incoming sensory and present body state information across the interacting modules and fuses the information sources accordingly. At the same time, nMMF enforces body state estimation consistency across the modules. nMMF is able to detect conflicting sensory information and, consequently, to decrease the influence of implausible sensor sources on the fly. In contrast to the previously published Modular Modality Frame (MMF) model, nMMF offers a biologically plausible neural implementation based on distributed, probabilistic population codes. Besides its biological plausibility, the neural encoding has the advantage of enabling (a) additional probabilistic information flow across the separate body state estimation modules and (b) the representation of arbitrary probability distributions of a body state. The results show that the neural estimates can detect and decrease the impact of false sensory information, can propagate conflicting information across modules, and can improve overall estimation accuracy through the additional module interactions. Even bodily illusions, such as the rubber hand illusion, can be simulated with nMMF. We conclude with an outlook on the model's potential for capturing human data and for invoking goal-directed behavioral control.
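
To make the core representational idea concrete, the following Python sketch stores distributions over a joint angle on a discrete grid (a simple stand-in for a population code, which can likewise represent arbitrary distributions) and fuses a proprioceptive estimate with a conflicting visual one, down-weighting the implausible cue by their overlap. The weighting scheme is an illustrative assumption, not the nMMF update equations.

    # Illustrative sketch only; grid-based stand-in for a population code.
    import numpy as np

    angles = np.linspace(-np.pi, np.pi, 181)        # discretized state space

    def gaussian(mu, sigma):
        p = np.exp(-0.5 * ((angles - mu) / sigma) ** 2)
        return p / p.sum()

    proprio = gaussian(0.2, 0.15)                   # proprioceptive estimate
    vision  = gaussian(0.9, 0.15)                   # conflicting visual estimate

    overlap = np.sum(np.sqrt(proprio * vision))     # Bhattacharyya coefficient
    fused = proprio * vision ** overlap             # tempered product: low
    fused /= fused.sum()                            # overlap -> weak influence

    print(f"overlap {overlap:.3f}, fused mean {np.sum(angles * fused):.3f}")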

3.
Biol Cybern; 107(1): 61-82, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23090574

ABSTRACT

Humans show admirable capabilities in movement planning and execution. They can perform complex tasks in various contexts, using the available sensory information very effectively. Body models and continuous body state estimations appear necessary to realize such capabilities. We introduce the Modular Modality Frame (MMF) model, which maintains a highly distributed, modularized body model and continuously updates modularized, probabilistic body state estimations over time. Modularization is realized with respect to modality frames, that is, sensory modalities in particular frames of reference, and with respect to particular body parts. We evaluate MMF performance on a simulated, nine-degree-of-freedom arm in 3D space. The results show that MMF is able to maintain accurate body state estimations despite high sensor and motor noise. Moreover, by comparing the sensory information available in different modality frames, MMF can identify faulty sensory measurements on the fly. In the near future, applications to lightweight robot control should be pursued. Moreover, MMF may be enhanced with neural encodings by introducing neural population codes and learning techniques. Finally, more dexterous goal-directed behavior should be realized by exploiting the available redundant state representations.


Subjects
Theoretical Models, Movement, Humans, Probability
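
The fault-detection idea can be illustrated in one dimension: each modality's estimate is compared against the precision-weighted fusion of the remaining modalities, and estimates that deviate by too many standard deviations are treated as implausible. The modality names, numbers, and 3-sigma threshold below are illustrative assumptions, not MMF's actual equations.

    # Illustrative sketch of cross-modality consistency checking.
    import numpy as np

    # (mean, variance) of the hand position along one axis, as estimated in
    # three modality frames after mapping into a common frame of reference
    estimates = {"proprioception": (0.50, 0.02),
                 "vision":         (0.52, 0.01),
                 "touch":          (1.20, 0.02)}    # faulty on this time step

    def fuse(est):
        """Precision-weighted fusion of independent Gaussian estimates."""
        prec = {k: 1.0 / v for k, (_, v) in est.items()}
        total = sum(prec.values())
        mean = sum(prec[k] * est[k][0] for k in est) / total
        return mean, 1.0 / total

    for name in estimates:
        others = {k: v for k, v in estimates.items() if k != name}
        mean, var = fuse(others)
        m, v = estimates[name]
        z = abs(m - mean) / np.sqrt(v + var)        # normalized residual
        status = "implausible -> down-weight" if z > 3.0 else "consistent"
        print(f"{name}: {z:.1f} sigma from the other modalities ({status})")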
4.
Cogn Process; 13 Suppl 1: S113-6, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22806661

ABSTRACT

The brain often integrates multisensory sources of information in a way that is close to optimal according to Bayesian principles. Since sensory modalities are grounded in different, body-relative frames of reference, multisensory integration requires accurate transformations of information. We have shown experimentally, for example, that a rotating tactile stimulus on the palm of the right hand can influence the judgment of ambiguously rotating visual displays. Most significantly, this influence depended on the palm orientation: when the palm faced upwards, a clockwise rotation on the palm yielded a clockwise visual judgment bias; when it faced downwards, the same clockwise rotation yielded a counterclockwise bias. Thus, tactile rotation cues biased visual rotation judgment in a head-centered reference frame. Recently, we have developed a modular, multimodal arm model that is able to mimic aspects of such experiments. The model co-represents the state of an arm in several modalities, including a proprioceptive joint-angle modality as well as head-centered orientation and location modalities. Each modality represents each limb or joint separately. Sensory information from the different modalities is exchanged via local forward and inverse kinematic mappings. Also, re-afferent sensory feedback is anticipated and integrated via Kalman filtering. Information across modalities is integrated probabilistically via Bayesian-based plausibility estimates, continuously maintaining a consistent global arm state estimation. This architecture is thus able to model the described effect of posture-dependent motion cue integration: tactile and proprioceptive sensory information may yield top-down biases on visual processing. Equally, such information may influence top-down visual attention by inducing expectations of particular arm-dependent motion patterns. Current research implements such effects on visual processing and attention.


Subjects
Judgment/physiology, Biological Models, Motion Perception/physiology, Proprioception, Touch/physiology, Attention/physiology, Humans, Metacarpus/innervation, Orientation, Photic Stimulation, Posture, Probability, Rotation, Time Factors
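
For two Gaussian cues, near-optimal Bayesian integration reduces to precision-weighted averaging, which suffices to reproduce the qualitative effect described above: an ambiguous (high-variance) visual rotation signal is pulled toward a reliable tactile cue. The numbers below are chosen purely for illustration, not taken from the experiments.

    # Precision-weighted (Bayes-optimal) integration of two Gaussian cues.
    def integrate(mu_v, var_v, mu_t, var_t):
        w_v = (1 / var_v) / (1 / var_v + 1 / var_t)   # visual precision weight
        mu = w_v * mu_v + (1 - w_v) * mu_t
        var = 1 / (1 / var_v + 1 / var_t)
        return mu, var

    # ambiguous visual rotation (mean 0) nudged by a clockwise tactile cue
    mu, var = integrate(mu_v=0.0, var_v=4.0, mu_t=1.0, var_t=1.0)
    print(mu, var)   # 0.8, 0.8 -> judgment biased toward the tactile cue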