Results 1 - 19 of 19
1.
Front Robot AI ; 8: 703811, 2021.
Article in English | MEDLINE | ID: mdl-35187091

ABSTRACT

Autonomous vehicles require precise and reliable self-localization to cope with dynamic environments. The field of visual place recognition (VPR) aims to solve this challenge by relying on the visual modality to recognize a place despite changes in the appearance of the perceived visual scene. In this paper, we propose to tackle the VPR problem following a neuro-cybernetic approach. To this end, the Log-Polar Max-Pi (LPMP) model is introduced. This bio-inspired neural network builds a neural representation of the environment via unsupervised one-shot learning. Inspired by the spatial cognition of mammals, visual information in the LPMP model is processed through two distinct pathways: a "what" pathway that extracts and learns the local visual signatures (landmarks) of a visual scene and a "where" pathway that computes their azimuth. These two pieces of information are then merged to build a visuospatial code that is characteristic of the place where the visual scene was perceived. Three main contributions are presented in this article: 1) the LPMP model is studied and compared with NetVLAD and CoHog, two state-of-the-art VPR models; 2) a test benchmark for the evaluation of VPR models according to the type of environment traveled is proposed based on the Oxford car dataset; and 3) the impact of a novel detector leading to an uneven paving of an environment is evaluated in terms of localization performance and compared to a regular paving. Our experiments show that the LPMP model can achieve comparable or better localization performance than NetVLAD and CoHog.
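The merging of "what" (landmark identity) and "where" (azimuth) into a conjunctive visuospatial code learned in one shot can be sketched as a small Python toy. This is an illustrative assumption, not the LPMP implementation: the one-hot landmark coding, the azimuth binning, and the cosine-similarity recognition are all stand-ins.

```python
import numpy as np

def visuospatial_code(landmarks, azimuths, n_what, n_where):
    """Merge 'what' (landmark identity) and 'where' (azimuth bin) into one
    conjunctive what-by-where code, flattened to a vector."""
    code = np.zeros((n_what, n_where))
    for lm, az in zip(landmarks, azimuths):
        code[lm, int(az * n_where / 360) % n_where] = 1.0
    return code.ravel()

class OneShotPlaceMemory:
    """One-shot learning: store each place code once; recognize by best match."""
    def __init__(self):
        self.places = []
    def learn(self, code):
        self.places.append(code / np.linalg.norm(code))
        return len(self.places) - 1
    def recognize(self, code):
        sims = [p @ (code / np.linalg.norm(code)) for p in self.places]
        return int(np.argmax(sims))

mem = OneShotPlaceMemory()
a = visuospatial_code([0, 3, 7], [10, 95, 200], n_what=16, n_where=8)
b = visuospatial_code([1, 4, 9], [40, 150, 300], n_what=16, n_where=8)
mem.learn(a); mem.learn(b)
# The same landmarks seen at slightly shifted azimuths still match place 0.
probe = visuospatial_code([0, 3, 7], [15, 100, 205], n_what=16, n_where=8)
print(mem.recognize(probe))  # 0
```

Coarse azimuth binning is what gives the code its tolerance to small viewpoint changes while the landmark identities keep it place-specific.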

2.
Prog Neurobiol ; 199: 101920, 2021 04.
Article in English | MEDLINE | ID: mdl-33053416

ABSTRACT

The experiences of animals and human beings are structured by the continuity of space and time, coupled with the uni-directionality of time. In addition to its pivotal position in spatial processing and navigation, the hippocampal system also plays a central, multiform role in several types of temporal processing. These include timing and sequence learning, at scales ranging from meso-scales of seconds to macro-scales of minutes, hours, days and beyond, encompassing the classical functions of short-term memory, working memory, long-term memory, and episodic memories (comprising information about when, what, and where). This review article highlights the principal findings and behavioral contexts of experiments in rats showing: 1) timing: tracking time during delays by hippocampal 'time cells' and during free behavior by hippocampal-afferent lateral entorhinal cortex ramping cells; 2) 'online' sequence processing: activity coding sequences of events during active behavior; 3) 'offline' sequence replay: during quiescence or sleep, orderly reactivation of neuronal assemblies coding awake sequences. Studies in humans show neurophysiological correlates of episodic memory comparable to awake replay. Neural mechanisms are discussed, including ion channel properties, plateau and ramping potentials, oscillations of excitation and inhibition of population activity, bursts of high-amplitude discharges (sharp-wave ripples), as well as short- and long-term synaptic modifications among and within cell assemblies. Specifically conceived neural network models suggest processes supporting the emergence of scalar properties (Weber's law), and include different classes of feedforward and recurrent network models, with intrinsic hippocampal coding for 'transitions' (sequencing of events or places).


Subjects
Hippocampus , Neurons , Animals , Learning , Rats , Sleep , Wakefulness
3.
Biol Cybern ; 114(2): 303-313, 2020 04.
Article in English | MEDLINE | ID: mdl-32306125

ABSTRACT

Inspired by recent biological experiments, we simulate animals moving in different environments (open space, spiral mazes and on a treadmill) to test the performance of a simple model of the retrosplenial cortex (RSC) acting as a path integration (PI) and categorization mechanism. The connection between the hippocampus, RSC and the entorhinal cortex is revealed through a novel perspective. We suppose that path integration is performed from the information coming from the RSC. Grid cells in the entorhinal cortex can then be built as the result of a modulo projection of RSC activity. In our model, PI is performed by a 1D field of neurons acting as a simple low-pass filter of head direction (HD) cells modulated by the linear velocity of the animal. Our paper focuses on the constraints on the HD cell shape for a good approximation of PI. Recordings of neurons in our 1D PI field show that these neurons would not intuitively be interpreted as performing PI. Using inputs coming from a narrow neighbouring projection of our PI field creates place cell-like activities in the RSC when the mouse runs on the treadmill. This can be the result of local self-organizing maps representing blobs of neurons in the RSC (e.g. cortical columns). Other simulations show that accessing the whole PI field would induce place cells whatever the environment. Since this property is not observed, we conclude that the categorization neurons in the RSC should have access to only a small fraction of the PI field.
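The 1D PI field described here, accumulation (a simple low-pass filtering) of HD-cell activity modulated by linear velocity, can be sketched as follows. The Gaussian-bump HD tuning and the parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def hd_activity(theta, n=36, kappa=8.0):
    """Head-direction (HD) cell population: a circular activity bump at theta."""
    prefs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    act = np.exp(kappa * (np.cos(prefs - theta) - 1.0))
    return act / act.max()

def integrate_path(thetas, speeds, n=36, tau=0.05):
    """1D PI field: accumulate velocity-modulated HD activity step by step,
    a simple low-pass filter of the HD signal."""
    field = np.zeros(n)
    for theta, v in zip(thetas, speeds):
        field += tau * v * hd_activity(theta, n)
    return field

# A straight run heading east (theta = 0 rad) at constant speed: the field
# accumulates a bump on the neuron preferring 0 rad.
field = integrate_path([0.0] * 100, [1.0] * 100)
print(int(np.argmax(field)))  # 0
```

Note how a single unit of this field, read in isolation, just looks like a slowly growing trace rather than an explicit position code, consistent with the remark that these neurons would not intuitively be interpreted as performing PI.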


Subjects
Computer Simulation , Cingulate Gyrus/physiology , Spatial Perception/physiology , Spatial Navigation/physiology , Algorithms , Animals , Entorhinal Cortex/physiology , Hippocampus/physiology , Maze Learning/physiology , Mice , Neurons/physiology
4.
Bioinspir Biomim ; 15(2): 025003, 2020 02 14.
Article in English | MEDLINE | ID: mdl-31639780

ABSTRACT

Starting from biological systems, we review the value of active perception for object recognition in an autonomous system. Foveated vision and control of the eye saccade bring strong benefits related to the differentiation of a 'what' pathway recognizing some local parts of the image and a 'where' pathway related to moving the fovea to that part of the image. Experiments on a dataset illustrate the capability of our model to deal with complex visual scenes. The results highlight the value of top-down contextual information to serialize the exploration and to perform a kind of hypothesis test. Moreover, learning to control the ocular saccade from the previous one can help reduce the exploration area and improve recognition performance. Yet our results show that the selection of the next saccade should take into account broader statistical information. This opens new avenues for the control of ocular saccades and the active exploration of complex visual scenes.


Subjects
Ocular Vision/physiology , Humans , Neurological Models , Computer Neural Networks , Saccades , Visual Perception
5.
Front Neurorobot ; 13: 5, 2019.
Article in English | MEDLINE | ID: mdl-30899217

ABSTRACT

Representing objects in space is difficult because sensorimotor events are anchored in different reference frames, which can be either eye-, arm-, or target-centered. In the brain, Gain-Field (GF) neurons in the parietal cortex are involved in computing the necessary spatial transformations for aligning the tactile, visual and proprioceptive signals. In reaching tasks, these GF neurons exploit a mechanism based on multiplicative interaction for binding simultaneously touched events from the hand with visual and proprioceptive information. By doing so, they can infer new reference frames to represent dynamically the location of the body parts in the visual space (i.e., the body schema) and nearby targets (i.e., its peripersonal space). Along these lines, we propose a neural model based on GF neurons for integrating tactile events with arm postures and visual locations to construct hand- and target-centered receptive fields in the visual space. In robotic experiments using an artificial skin, we show how our neural architecture reproduces the behaviors of parietal neurons (1) in encoding dynamically the body schema of our robotic arm without any visual tags on it and (2) in estimating the relative orientation and distance of targets to it. We demonstrate how tactile information facilitates the integration of visual and proprioceptive signals in order to construct the body space.

6.
J Exp Biol ; 222(Pt Suppl 1)2019 02 06.
Article in English | MEDLINE | ID: mdl-30728231

ABSTRACT

Place recognition is a complex process involving idiothetic and allothetic information. In mammals, evidence suggests that visual information stemming from the temporal and parietal cortical areas ('what' and 'where' information) is merged at the level of the entorhinal cortex (EC) to build a compact code of a place. Local views extracted from specific feature points can provide information important for view cells (in primates) and place cells (in rodents) even when the environment changes dramatically. Robotics experiments using conjunctive cells merging 'what' and 'where' information related to different local views show their important role for obtaining place cells with strong generalization capabilities. This convergence of information may also explain the formation of grid cells in the medial EC if we suppose that: (1) path integration information is computed outside the EC, (2) this information is compressed at the level of the EC owing to projection (which follows a modulo principle) of cortical activities associated with discretized vector fields representing angles and/or path integration, and (3) conjunctive cells merge the projections of different modalities to build grid cell activities. Applying modulo projection to visual information allows an interesting compression of information and could explain more recent results on grid cells related to visual exploration. In conclusion, the EC could be dedicated to the build-up of a robust yet compact code of cortical activity whereas the hippocampus proper recognizes these complex codes and learns to predict the transition from one state to another.
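The modulo-style compression hypothesized in point (2) can be illustrated with a toy projection: pooling every path-integration unit whose index is congruent modulo m yields output units with periodic, grid-like firing along a track. The field sizes below are arbitrary assumptions for the sketch.

```python
import numpy as np

def modulo_projection(pi_field, m):
    """Compress a discretized PI field of size N onto m units: output unit i
    pools every PI unit whose index is congruent to i modulo m."""
    out = np.zeros(m)
    for i, a in enumerate(pi_field):
        out[i % m] += a
    return out

# Sweep a position bump along a 60-unit PI field and record output unit 0.
n, m = 60, 6
responses = []
for pos in range(n):
    pi = np.zeros(n)
    pi[pos] = 1.0
    responses.append(modulo_projection(pi, m)[0])

# Unit 0 fires at every m-th position: periodic, grid-like firing fields.
print([p for p, r in enumerate(responses) if r > 0])
```

The same projection applied to a non-spatial discretized input would make an output unit respond to non-contiguous input configurations, which is the prediction stated at the end of the abstract.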


Subjects
Entorhinal Cortex/physiology , Primates/physiology , Robotics , Rodents/physiology , Animals , Neurological Models
7.
PLoS One ; 12(9): e0184960, 2017.
Article in English | MEDLINE | ID: mdl-28934291

ABSTRACT

Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher-order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Finally, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot's visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot's performance and fosters exploratory behavior to avoid deadlocks.


Subjects
Attention/physiology , Behavior Control/psychology , Cognition/physiology , Emotions/physiology , Robotics , Visual Perception/physiology , Psychological Discrimination , Humans , Visual Pattern Recognition , Self-Assessment (Psychology)
8.
PLoS One ; 12(3): e0173684, 2017.
Article in English | MEDLINE | ID: mdl-28282439

ABSTRACT

The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generation of long neuronal chains. Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network toward a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or replaced to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle.
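The evaluate-then-strengthen-or-replace search over input vectors can be sketched as plain stochastic hill climbing on a toy reward. The quadratic fitness function, the hidden target vector and the step size are assumptions standing in for the dopaminergic reinforcement signal and the desired network activity, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def hill_climb(evaluate, dim, steps=3000, sigma=0.1):
    """Keep a perturbed input vector when it improves the reinforcement signal
    (hill climbing); otherwise discard the perturbation and try another."""
    x = rng.normal(size=dim)
    best = evaluate(x)
    for _ in range(steps):
        candidate = x + sigma * rng.normal(size=dim)
        reward = evaluate(candidate)
        if reward > best:  # reward improved: strengthen this direction
            x, best = candidate, reward
    return x, best

# Toy reward: closeness to a hidden target vector (a hypothetical stand-in for
# the reward a desired spiking sequence would produce).
target = np.ones(5)
x, best = hill_climb(lambda v: -float(np.sum((v - target) ** 2)), dim=5)
print(best > -1.0)
```

In the paper's dual system the converged vector would then be stored by an associative memory (the basal-ganglia model) rather than kept as a raw variable, so it can be recalled to drive the recurrent network.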


Subjects
Short-Term Memory/physiology , Neurological Models , Nerve Net , Algorithms , Arm , Artificial Limbs , Brain/physiology , Humans , Computer Neural Networks , Psychological Reinforcement , Robotics , Stochastic Processes
9.
Sci Rep ; 7: 41056, 2017 01 20.
Article in English | MEDLINE | ID: mdl-28106139

ABSTRACT

Perceptual illusions across multiple modalities, such as the rubber-hand illusion, show how dynamic the brain is in adapting its body image and in determining what is part of it (the self) and what is not (others). Several research studies have shown that redundancy and contingency among sensory signals are essential for perception of the illusion, and that a lag of 200-300 ms is the critical limit for the brain to represent one's own body. In an experimental setup with an artificial skin, we replicate the visuo-tactile illusion within artificial neural networks. Our model is composed of an associative map and a recurrent map of spiking neurons that learn to predict the contingent activity across the visuo-tactile signals. Depending on the temporal delay artificially added between the visuo-tactile signals or the spatial distance between two distinct stimuli, the two maps detect contingency differently. Spiking neurons organized into complex networks, together with synchrony detection at different temporal intervals, can explain multisensory integration regarding the self-body.


Subjects
Illusions , Neurological Models , Computer Neural Networks , Neuronal Plasticity , Skin Physiological Phenomena , Action Potentials , Body Image , Humans , Neurons/physiology , Parietal Lobe/physiology
10.
Sci Rep ; 6: 19908, 2016 Feb 04.
Article in English | MEDLINE | ID: mdl-26844862

ABSTRACT

Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture, specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot's motor internal state, (iii) posture recognition, and (iv) novelty detection, is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, the findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning.


Subjects
Imitative Behavior/physiology , Learning , Robotics , Adult , Autism Spectrum Disorder/physiopathology , Child , Cognition/physiology , Female , Humans , Male , Computer Neural Networks , Posture , Young Adult
11.
Front Neurorobot ; 9: 1, 2015.
Article in English | MEDLINE | ID: mdl-25904862

ABSTRACT

In the present study, a new architecture for the generation of grid cells (GC) was implemented on a real robot. In order to test this model, a simple place cell (PC) model merging visual PC activity and GC was developed. GC were first built from a simple "several to one" projection (similar to a modulo operation) performed on a neural field coding for path integration (PI). Robotics experiments raised several practical and theoretical issues. To limit the important angular drift of PI, head direction information was introduced in addition to the robot's proprioceptive signal coming from the wheel rotation. Next, a simple associative learning between visual place cells and the neural field coding for PI was used to recalibrate the PI and to limit its drift. Finally, the parameters controlling the shape of the PC built from the GC were studied. Increasing the number of GC obviously improves the shape of the resulting place field. Yet, other parameters such as the discretization factor of PI or the lateral interactions between GC can have an important impact on place field quality and avoid the need for a very large number of GC. In conclusion, our results show that our GC model based on the compression of PI is congruent with neurobiological studies made on rodents. GC firing patterns can be the result of a modulo transformation of PI information. We argue that such a transformation may be a general property of the connectivity from the cortex to the entorhinal cortex. Our model predicts that the effect of similar transformations on other kinds of sensory information (visual, tactile, auditory, etc.) in the entorhinal cortex should be observed. Consequently, a given EC cell should react to non-contiguous input configurations in non-spatial conditions according to the projections from its different inputs.

12.
Biol Cybern ; 109(2): 255-74, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25576394

ABSTRACT

Imitation and learning from humans require an adequate sensorimotor controller to learn and encode behaviors. We present the Dynamic Muscle Perception-Action (DM-PerAc) model to control a multiple degrees-of-freedom (DOF) robot arm. In the original PerAc model, path-following or place-reaching behaviors correspond to the sensorimotor attractors resulting from the dynamics of learned sensorimotor associations. The DM-PerAc model, inspired by human muscles, makes it possible to combine impedance-like control with the capability of learning sensorimotor attraction basins. We detail a solution to learn the DM-PerAc visuomotor controller incrementally online. Postural attractors are learned by adapting the muscle activations in the model depending on movement errors. Visuomotor categories merging visual and proprioceptive signals are associated with these muscle activations. Thus, the visual and proprioceptive signals activate the motor action generating an attractor which satisfies both visual and proprioceptive constraints. This visuomotor controller can serve as a basis for imitative behaviors. In addition, the muscle activation patterns can define directions of movement instead of postural attractors. Such patterns can be used in state-action couples to generate trajectories as in the PerAc model. We discuss a possible extension of the DM-PerAc controller by adapting Fukuyori's controller based on the Langevin equation. This controller can serve not only to reach attractors which were not explicitly learned, but also to learn the state/action couples that define trajectories.


Subjects
Sensory Feedback/physiology , Neurological Models , Online Systems , Psychomotor Performance/physiology , Robotics , Algorithms , Computer Simulation , Electric Impedance , Humans , Skeletal Muscle/physiology , Proprioception/physiology , Visual Perception/physiology
13.
Neural Netw ; 62: 102-11, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25240580

ABSTRACT

The so-called self-other correspondence problem in imitation requires finding the transformation that maps the motor dynamics of one partner to our own. This calls for a general-purpose sensorimotor mechanism that transforms an external fixation-point (partner's shoulder) reference frame into one's own body-centered reference frame. We propose that the mechanism of gain modulation observed in parietal neurons may generally serve these types of transformations, by binding the sensory signals across modalities with radial basis functions (tensor products) on the one hand, and by permitting the learning of contextual reference frames on the other. In a shoulder-elbow robotic experiment, gain-field (GF) neurons intertwine the visuo-motor variables so that their amplitude depends on them all. In situations where the body-centered reference frame is modified, the error detected in the visuo-motor mapping can then serve to learn the transformation between the robot's current sensorimotor space and the new one. Such situations occur, for instance, when we turn the head on its axis (visual transformation), when we use a tool (body modification), or when we interact with a partner (embodied simulation). Our results defend the idea that the biologically inspired mechanism of gain modulation found in parietal neurons can serve as a basic structure for achieving nonlinear mappings in spatial tasks as well as in cooperative and social functions.
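The tensor-product binding of two signals by gain-field units, and the reference-frame transformation it supports, can be sketched with a classic toy: combining a retinal target position with eye position to read out a head-centered position. The radial-basis encoding, the 1D setup, and the linear voting readout are illustrative assumptions, not the paper's robotic implementation.

```python
import numpy as np

def rbf(x, centers, sigma=1.0):
    """Radial-basis population code for a scalar value."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

# Retinal target position and eye position, each encoded over the same grid.
centers = np.linspace(-10, 10, 21)

def gain_field(retinal, eye):
    """Gain-field layer: multiplicative (tensor-product) binding of the two
    population codes; each unit's amplitude depends on both variables."""
    return np.outer(rbf(retinal, centers), rbf(eye, centers))

def readout_head_centered(gf):
    """Linear readout: GF unit (i, j) votes for centers[i] + centers[j],
    i.e. head-centered position = retinal position + eye position."""
    votes = centers[:, None] + centers[None, :]
    return float((gf * votes).sum() / gf.sum())

# A target 3 deg on the retina while the eye looks 4 deg right reads out as
# roughly 7 deg in head-centered coordinates.
print(round(readout_head_centered(gain_field(3.0, 4.0)), 1))  # 7.0
```

Learning a modified reference frame (tool use, head rotation) would amount to re-learning the readout weights over the same gain-field layer, which is what makes the multiplicative binding a general-purpose substrate.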


Subjects
Motor Cortex/physiology , Motor Neurons/physiology , Parietal Lobe/physiology , Algorithms , Computer Simulation , Elbow/innervation , Elbow/physiology , Humans , Imagination/physiology , Learning/physiology , Neurological Models , Psychomotor Performance/physiology , Robotics , Shoulder/innervation , Shoulder/physiology , Social Perception , Spatial Perception/physiology
14.
Front Neurorobot ; 7: 16, 2013.
Article in English | MEDLINE | ID: mdl-24115931

ABSTRACT

Autonomy and self-improvement capabilities are still challenging in the fields of robotics and machine learning. Allowing a robot to autonomously navigate in wide and unknown environments not only requires a repertoire of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and for monitoring strategies. Monitoring strategies requires feedback on the quality of the behavior, from a given fitness system, in order to make correct decisions. In this work, we focus on how a second-order controller can be used to (1) manage behaviors according to the situation and (2) seek human interaction to improve skills. Following an incremental and constructivist approach, we present a generic neural architecture, based on an online novelty-detection algorithm, that may be able to self-evaluate any sensorimotor strategy. This architecture learns contingencies between sensations and actions, giving the expected sensation from the previous perception. The prediction error arising from surprising events provides a measure of the quality of the underlying sensorimotor contingencies. We show how a simple second-order controller (emotional system) based on prediction progress allows the system to regulate its behavior to solve complex navigation tasks, and also succeeds in asking for help when it detects deadlock situations. We propose that this model could be a key structure toward self-assessment and autonomy. We present several experiments that demonstrate such properties for two different strategies (road following and place-cell-based navigation) in different situations.
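The self-assessment loop described here, predicting the next sensation and using the prediction error as a quality signal, can be sketched as a minimal monitor. The linear delta-rule predictor, the running-average error, and the deadlock threshold are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

class ContingencyMonitor:
    """Learns to predict the next sensation from the current one with a simple
    linear predictor; the running prediction error is a self-assessment signal."""
    def __init__(self, dim, lr=0.1, alpha=0.1):
        self.W = np.zeros((dim, dim))
        self.lr, self.alpha = lr, alpha
        self.avg_error = 0.0

    def step(self, sensation, next_sensation):
        pred = self.W @ sensation
        err = next_sensation - pred
        self.W += self.lr * np.outer(err, sensation)  # delta-rule update
        # Exponential running average of the prediction-error magnitude.
        self.avg_error += self.alpha * (float(np.linalg.norm(err)) - self.avg_error)
        return self.avg_error

    def deadlocked(self, threshold=0.5):
        """Sustained high error: the strategy's learned contingencies fail."""
        return self.avg_error > threshold

# A perfectly predictable loop of sensations: the error drops, no call for help.
mon = ContingencyMonitor(dim=3)
s = np.eye(3)
for _ in range(200):
    for t in range(3):
        mon.step(s[t], s[(t + 1) % 3])
print(mon.deadlocked())  # False
```

A second-order controller would watch `avg_error` over time: decreasing error (prediction progress) keeps the current strategy active, while sustained high error triggers a strategy switch or a request for human help.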

15.
Front Psychol ; 4: 771, 2013.
Article in English | MEDLINE | ID: mdl-24155736

ABSTRACT

During development, infants learn to differentiate their motor behaviors relative to various contexts by exploring and identifying the correct structures of causes and effects that they can perform; these structures of actions are called task sets or internal models. The ability to detect the structure of new actions, to learn them and to select on the fly the proper one given the current task set is one great leap in infants' cognition. This behavior is an important component of the child's capacity for learning-to-learn, a mechanism akin to intrinsic motivation, which is argued to drive cognitive development. Accordingly, we propose to model a dual system based on (1) the learning of new task sets and (2) their evaluation relative to their uncertainty and prediction error. The architecture is designed as a two-level neural system for context-dependent behavior (the first system) and task exploration and exploitation (the second system). In our model, the task sets are learned separately by reinforcement learning in the first network after their evaluation and selection in the second one. We present two different experimental setups to show the sensorimotor mapping and switching between tasks: a first one in a neural simulation for modeling cognitive tasks, and a second one with an arm-robot for motor task learning and switching. We show that the interplay of several intrinsic mechanisms drives the rapid formation of the neural populations with respect to novel task sets.

16.
PLoS One ; 8(7): e69474, 2013.
Article in English | MEDLINE | ID: mdl-23922718

ABSTRACT

The question of whether newborns possess inborn social skills is a long-standing debate in developmental psychology. Fetal behavioral and anatomical observations show evidence of the control of eye movements and facial behaviors during the third trimester of pregnancy, whereas specific sub-cortical areas, like the superior colliculus (SC) and the striatum, appear to be functionally mature enough to support these behaviors. These observations suggest that the newborn is potentially mature enough to develop minimal social skills. In this manuscript, we propose that the mechanism of sensory alignment observed in the SC is particularly important for enabling the social skills observed at birth, such as facial preference and facial mimicry. In a computational simulation of the maturing superior colliculus connected to the simulated facial tissue of a fetus, we model how incoming tactile information is used to direct visual attention toward faces. We suggest that the unisensory superficial visual layer (eye-centered) and the deep somatotopic layer (face-centered) in the SC are combined into an intermediate layer for visuo-tactile integration, and that multimodal alignment in this third layer allows newborns to be sensitive to the configuration of eyes and mouth. We show that the visual and tactile maps align through a Hebbian learning stage and strengthen their synaptic links to the intermediate layer. As a result, the global network produces emergent properties such as sensitivity to the spatial configuration of face-like patterns and the detection of eye and mouth movements.
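The Hebbian alignment stage can be sketched as follows: links from a visual map to an intermediate layer are strengthened whenever a visual unit is co-active with the tactile unit driving the same intermediate unit. The one-hot maps, learning rate, and row normalization are illustrative assumptions, not the simulation's actual parameters.

```python
import numpy as np

def hebbian_align(pairs, n, lr=0.05, epochs=50):
    """Hebbian learning of visuo-tactile alignment: visual-to-intermediate
    weights grow wherever visual and tactile units are co-active."""
    W = np.random.default_rng(0).uniform(0, 0.01, (n, n))  # visual -> intermediate
    for _ in range(epochs):
        for v_idx, t_idx in pairs:
            visual = np.zeros(n); visual[v_idx] = 1.0
            tactile = np.zeros(n); tactile[t_idx] = 1.0
            W += lr * np.outer(tactile, visual)  # co-activity strengthens the link
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W

# Aligned stimulation (touch at site i is always seen at site i) carves a
# diagonal, topographically aligned mapping out of the random initial weights.
n = 8
W = hebbian_align([(i, i) for i in range(n)], n)
print(all(np.argmax(W[i]) == i for i in range(n)))  # True
```

With spatially consistent visuo-tactile co-stimulation, the intermediate layer thus inherits a shared topography from both maps, which is the prerequisite for the face-configuration sensitivity described above.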


Subjects
Neurological Models , Superior Colliculi/physiology , Visual Perception/physiology , Algorithms , Face/anatomy & histology , Face/embryology , Female , Fetus/anatomy & histology , Fetus/physiology , Humans , Newborn Infant , Nerve Net/physiology , Neuronal Plasticity/physiology , Photic Stimulation , Pregnancy , Touch/physiology
17.
J Integr Neurosci ; 6(3): 367-78, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17933017

ABSTRACT

Place cells are hippocampal pyramidal neurons that discharge strongly in relation to the rat's location in the environment. We recently reported that many place cells recorded from rats performing place or cue navigation tasks also discharged when the rats were at the goal location rather than in the primary firing field. Furthermore, subtle differences in discharge timing were found between the two navigation tasks, with activity occurring later in the place task than in the cue task. Here we tested the possibility that such delayed firing in the place task may reflect the differential involvement of time estimation, which would allow the rat to predict forthcoming reward delivery. More specifically, we reasoned that failure to obtain a reward after a fixed 2-s delay in the place task reliably reflected the rat's misplacement relative to the correct location, thus making time a valuable cue to help the rat perform the task. To test this hypothesis, well-trained rats were run on a partial extinction procedure in place and cue navigation tasks so that no feedback signal was provided about their actual accuracy during extinction periods. Although the time estimation hypothesis predicts that only in the place task will the rat make correction movements at the end of goal periods during extinction, we found that such movements occurred in all rats, indicating correct time estimation in both place and cue tasks. We briefly discuss the results in the light of current computational theories of hippocampal function.


Subjects
Action Potentials/physiology , Goals , Neurons/physiology , Time Perception/physiology , Animals , Animal Behavior , Brain Mapping , Operant Conditioning/physiology , Cues (Psychology) , Exploratory Behavior/physiology , Psychological Extinction/physiology , Food Deprivation/physiology , Hippocampus/cytology , Rats
18.
Front Neurorobot ; 1: 3, 2007.
Article in English | MEDLINE | ID: mdl-18958274

ABSTRACT

After a short review of biologically inspired navigation architectures, mainly relying on modeling the hippocampal anatomy, or at least some of its functions, we present a navigation and planning model for mobile robots. This architecture is based on a model of hippocampal and prefrontal interactions. In particular, the system relies on the definition of a new cell type, "transition cells", which encompasses traditional "place cells".

19.
Behav Brain Sci ; 24(6): 1051-1053, 2001 Dec.
Article in English | MEDLINE | ID: mdl-18241362

ABSTRACT

As models of living beings acting in the real world, biorobots undergo an accelerated "phylogenic" complexification. The first efficient robots performed simple animal behaviours (e.g., those of ants or crickets) and, later on, isolated elementary behaviours of complex beings. The increasing complexity of the tasks robots are dedicated to is matched by an increasing complexity and versatility of the architectures now supporting conditioning or even elementary planning.
