Results 1 - 20 of 21
1.
Front Comput Neurosci ; 18: 1360009, 2024.
Article in English | MEDLINE | ID: mdl-38468870

ABSTRACT

Mean-field models have been developed to replicate key features of epileptic seizure dynamics. However, the precise mechanisms and the role of the brain area responsible for seizure onset and propagation remain incompletely understood. In this study, we employ computational methods within The Virtual Brain framework and the Epileptor model to explore how the location and connectivity of an Epileptogenic Zone (EZ) in a mouse brain are related to focal seizures (seizures that start in one brain area and may or may not remain localized), with a specific focus on the hippocampal region known for its association with epileptic seizures. We then devise computational strategies to confine seizures (prevent widespread propagation), simulating medical-like treatments such as tissue resection and the application of anti-seizure drugs or neurostimulation to suppress hyperexcitability. By selectively removing (blocking) specific connections, informed by the structural connectome and graph-network measurements, or by locally reducing the outgoing connection weights of EZ areas, we demonstrate that seizures can be kept constrained around the EZ region. We identified the minimal set of connections whose removal prevents widespread seizures, with a particular focus on minimizing surgical or medical intervention while preserving the original structural connectivity and maximizing brain functionality.
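
For illustration, a minimal sketch of the connection-removal strategies described in this abstract, assuming the connectome is available as a weighted NumPy matrix; the matrix, region count, EZ indices, and scaling factor are hypothetical stand-ins, not the paper's data.

    import numpy as np

    # Hypothetical weighted structural connectome: W[i, j] = weight of the
    # projection from region j to region i (84 regions, as in a typical
    # mouse parcellation).
    rng = np.random.default_rng(0)
    n_regions = 84
    W = rng.random((n_regions, n_regions)) * (rng.random((n_regions, n_regions)) < 0.2)

    ez_nodes = [10, 11]  # hypothetical Epileptogenic Zone regions

    def confine_seizure(W, ez, mode="remove", scale=0.1):
        """Block or down-weight the outgoing connections of EZ regions."""
        W = W.copy()
        if mode == "remove":      # resection-like intervention: cut edges
            W[:, ez] = 0.0
        elif mode == "reduce":    # drug/stimulation-like: damp edge weights
            W[:, ez] *= scale
        return W

    W_resected = confine_seizure(W, ez_nodes, mode="remove")
    W_damped = confine_seizure(W, ez_nodes, mode="reduce", scale=0.1)
    print(W_resected[:, ez_nodes].sum(), W_damped[:, ez_nodes].sum())
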

2.
Sci Rep ; 13(1): 6949, 2023 04 28.
Article in English | MEDLINE | ID: mdl-37117236

ABSTRACT

Brain circuits display modular architecture at different scales of organization. Such neural assemblies are typically associated with functional specialization, but the mechanisms leading to their emergence and consolidation remain elusive. In this paper we investigate the role of inhibition in structuring new neural assemblies driven by the entrainment to various inputs. In particular, we focus on the role of partially synchronized dynamics for the creation and maintenance of structural modules in neural circuits by considering a network of excitatory and inhibitory [Formula: see text]-neurons with plastic Hebbian synapses. The learning process consists of an entrainment to temporally alternating stimuli that are applied to separate regions of the network. This entrainment leads to the emergence of modular structures. Contrary to common practice in artificial neural networks, where the acquired weights are typically frozen after the learning session, we allow for synaptic adaptation even after the learning phase. We find that the presence of inhibitory neurons in the network is crucial for the emergence and the post-learning consolidation of the modular structures. Indeed, networks made of purely excitatory neurons or of neurons that do not respect Dale's principle are unable to form or maintain the modular architecture induced by the stimuli. We also demonstrate that the number of inhibitory neurons in the network is directly related to the maximal number of neural assemblies that can be consolidated, supporting the idea that inhibition has a direct impact on the memory capacity of the neural network.


Subject(s)
Learning , Neurons , Neurons/physiology , Learning/physiology , Neural Networks, Computer , Synapses/physiology , Acclimatization , Models, Neurological
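
A toy sketch of one ingredient named above, Hebbian plasticity under Dale's principle, in a rate-based form; the paper itself uses spiking neurons, and the network size, rates, and learning constants here are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n, frac_inh = 100, 0.2
    # Dale's principle: each presynaptic neuron is either excitatory (+1) or
    # inhibitory (-1), and all of its outgoing synapses share that sign.
    sign = np.where(rng.random(n) < frac_inh, -1.0, 1.0)
    W = np.abs(rng.normal(0.0, 0.05, (n, n))) * sign[None, :]  # W[i, j]: j -> i

    def hebbian_step(W, rate, eta=1e-4):
        # Co-activation changes synaptic magnitudes; clipping at zero keeps
        # every synapse on its own side of zero, so signs never flip.
        mag = np.abs(W) + eta * np.outer(rate, rate)
        return np.clip(mag, 0.0, None) * sign[None, :]

    # Entrainment to temporally alternating stimuli applied to two halves of
    # the network (a crude stand-in for the paper's protocol).
    for t in range(2000):
        stim = np.zeros(n)
        half = (t // 200) % 2
        stim[half * 50:(half + 1) * 50] = 1.0
        rate = np.tanh(W @ stim + stim)
        W = hebbian_step(W, rate)
    # Within-group weights grow more than between-group weights: two modules.
    print(np.abs(W[:50, :50]).mean(), np.abs(W[:50, 50:]).mean())
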
3.
Proc Natl Acad Sci U S A ; 119(33): e2115335119, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35947616

ABSTRACT

We propose that coding and decoding in the brain are achieved through digital computation using three principles: relative ordinal coding of inputs, random connections between neurons, and belief voting. Due to randomization, and despite the coarseness of the relative codes, we show that these principles are sufficient for coding and decoding sequences with error-free reconstruction. In particular, the number of neurons needed grows only linearly while the size of the input repertoire grows exponentially. We illustrate our model by reconstructing sequences with repertoires on the order of a billion items. From this, we derive the Shannon equations for the capacity limit to learn and transfer information in the neural population, which is then generalized to any type of neural network. Following the maximum entropy principle of efficient coding, we show that random connections serve to decorrelate redundant information in incoming signals, creating more compact codes for neurons and therefore conveying a larger amount of information. Hence, despite the unreliability of the relative codes, only a few neurons are necessary to discriminate the original signal without error. Finally, we discuss the significance of this digital computation model with regard to neurobiological findings in the brain and, more generally, to artificial intelligence algorithms, with a view toward a neural information theory and the design of digital neural networks.


Subject(s)
Artificial Intelligence , Brain , Models, Neurological , Algorithms , Brain/physiology , Neural Networks, Computer , Neurons/physiology
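
A minimal sketch of relative ordinal coding with random connections and belief voting, under simplifying assumptions (Gaussian projections, exact rank-tuple matching as the vote); the pool sizes and repertoire are illustrative, not the paper's.

    import numpy as np

    rng = np.random.default_rng(2)
    d, n_units = 32, 8          # input dimension, neurons per projection
    n_pools = 20                # independent random pools that later vote

    # Random, fixed connectivity: each pool projects the input to n_units units.
    projections = [rng.normal(size=(n_units, d)) for _ in range(n_pools)]

    def ordinal_code(x):
        """Relative ordinal code: keep only the rank order of each pool's responses."""
        return [tuple(np.argsort(P @ x)) for P in projections]

    # Build a small repertoire and decode a noisy probe by belief voting.
    repertoire = [rng.normal(size=d) for _ in range(50)]
    codes = [ordinal_code(x) for x in repertoire]

    probe = repertoire[7] + 0.05 * rng.normal(size=d)
    probe_code = ordinal_code(probe)
    votes = [sum(c1 == c2 for c1, c2 in zip(code, probe_code)) for code in codes]
    print("decoded item:", int(np.argmax(votes)))  # expected: 7
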
4.
Front Neurorobot ; 16: 845955, 2022.
Article in English | MEDLINE | ID: mdl-35686118

ABSTRACT

Recurrent neural networks (RNNs) have proved very successful at modeling sequential data such as language or motions. However, these successes rely on the use of the backpropagation through time (BPTT) algorithm, batch training, and the hypothesis that all the training data are available at the same time. In contrast, the field of developmental robotics aims at uncovering lifelong learning mechanisms that could allow embodied machines to learn and stabilize knowledge in continuously evolving environments. In this article, we investigate different RNN designs and learning methods, which we evaluate in a continual learning setting. The generative modeling task consists of learning to generate 20 continuous trajectories that are presented sequentially to the learning algorithms. Each method is evaluated according to the average prediction error over the 20 trajectories obtained after complete training. This study focuses on learning algorithms with low memory requirements that do not need to store past information to update their parameters. Our experiments identify two approaches especially well suited to this task: conceptors and predictive coding. We suggest combining these two mechanisms into a new model, which we label PC-Conceptors, that outperforms the other methods presented in this study.
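
The conceptor half of the proposed combination can be summarized by Jaeger's formula C = R (R + aperture^-2 I)^-1, with R the reservoir-state correlation matrix. A self-contained sketch with an illustrative toy reservoir follows; all parameters are assumed, not taken from the paper.

    import numpy as np

    def conceptor(states, aperture=10.0):
        """Conceptor C = R (R + aperture^-2 I)^-1 from reservoir states.

        states: array of shape (n_neurons, n_timesteps) collected while the
        reservoir is driven by one trajectory (Jaeger's definition).
        """
        n, L = states.shape
        R = states @ states.T / L                  # state correlation matrix
        return R @ np.linalg.inv(R + aperture**-2 * np.eye(n))

    # Toy reservoir driven by a sine input (parameters are illustrative only).
    rng = np.random.default_rng(3)
    n = 50
    W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))
    w_in = rng.normal(0, 1.0, n)
    x, X = np.zeros(n), []
    for t in range(500):
        x = np.tanh(W @ x + w_in * np.sin(0.2 * t))
        X.append(x)
    C = conceptor(np.array(X).T)
    # Applying C to the running state (x <- C @ tanh(...)) biases the
    # reservoir toward the stored trajectory's state subspace.
    print(np.linalg.eigvalsh(C).max())  # eigenvalues lie in [0, 1)
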

5.
Neural Netw ; 143: 638-656, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34343777

ABSTRACT

In this work, we build upon the Active Inference (AIF) and Predictive Coding (PC) frameworks to propose a neural architecture comprising a generative model for sensory prediction and a distinct generative model for motor trajectories. We highlight how sequences of sensory predictions can act as rails guiding learning, control and online adaptation of motor trajectories. We furthermore investigate the effects of bidirectional interactions between the motor and the visual modules. The architecture is tested on the control of a simulated robotic arm learning to reproduce handwritten letters.


Subject(s)
Learning
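
A loose sketch of the kind of predictive-coding inference loop such an architecture builds on: a latent cause is updated by gradient descent on the sensory prediction error. The linear generative model and all constants are assumptions, not the paper's architecture.

    import numpy as np

    rng = np.random.default_rng(4)
    d_latent, d_obs = 4, 16
    G = rng.normal(size=(d_obs, d_latent))   # assumed linear generative weights

    def infer(sensation, mu, lr=0.02, steps=50):
        for _ in range(steps):
            error = sensation - G @ mu       # sensory prediction error
            mu = mu + lr * G.T @ error       # descend the squared error
        return mu

    target = rng.normal(size=d_obs)
    mu = infer(target, np.zeros(d_latent))
    # Remaining error: the part of the sensation the model cannot explain.
    print(np.linalg.norm(target - G @ mu))
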
6.
PLoS Comput Biol ; 17(2): e1008566, 2021 02.
Article in English | MEDLINE | ID: mdl-33600482

ABSTRACT

We propose a developmental model inspired by the cortico-basal system (CX-BG) for vocal learning in babies and for solving the correspondence mismatch problem they face when they hear unfamiliar voices with different tones and pitches. This model is based on the neural architecture INFERNO, standing for Iterative Free-Energy Optimization of Recurrent Neural Networks. Free-energy minimization is used for rapidly exploring, selecting and learning the optimal choices of actions to perform (e.g., sound production) in order to reproduce and control as accurately as possible the spike trains representing desired perceptions (e.g., sound categories). We detail in this paper the CX-BG system responsible for causally linking the sound and motor primitives on the order of a few milliseconds. Two experiments performed with a small and a large audio database show the capabilities of exploration, generalization and robustness to noise of our neural architecture in retrieving audio primitives during vocal learning and during acoustic matching with unheard voices (different genders and tones).


Subject(s)
Brain/physiology , Learning/physiology , Models, Neurological , Verbal Behavior/physiology , Algorithms , Auditory Cortex/physiology , Auditory Perception/physiology , Basal Ganglia/physiology , Child Development/physiology , Computational Biology , Female , Humans , Infant , Language Development , Male , Models, Psychological , Nerve Net/physiology , Neural Networks, Computer , Unsupervised Machine Learning
7.
Prog Neurobiol ; 199: 101920, 2021 04.
Article in English | MEDLINE | ID: mdl-33053416

ABSTRACT

The experiences of animals and human beings are structured by the continuity of space and time, coupled with the uni-directionality of time. In addition to its pivotal position in spatial processing and navigation, the hippocampal system also plays a central, multiform role in several types of temporal processing. These include timing and sequence learning, at scales ranging from meso-scales of seconds to macro-scales of minutes, hours, days and beyond, encompassing the classical functions of short term memory, working memory, long term memory, and episodic memories (comprising information about when, what, and where). This review article highlights the principal findings and behavioral contexts of experiments in rats showing: 1) timing: tracking time during delays by hippocampal 'time cells' and during free behavior by hippocampal-afferent lateral entorhinal cortex ramping cells; 2) 'online' sequence processing: activity coding sequences of events during active behavior; 3) 'off-line' sequence replay: during quiescence or sleep, orderly reactivation of neuronal assemblies coding awake sequences. Studies in humans show neurophysiological correlates of episodic memory comparable to awake replay. Neural mechanisms are discussed, including ion channel properties, plateau and ramping potentials, oscillations of excitation and inhibition of population activity, bursts of high amplitude discharges (sharp wave ripples), as well as short and long term synaptic modifications among and within cell assemblies. Specifically conceived neural network models suggest processes supporting the emergence of scalar properties (Weber's law), and include different classes of feedforward and recurrent network models, with intrinsic hippocampal coding for 'transitions' (sequencing of events or places).


Subject(s)
Hippocampus , Neurons , Animals , Learning , Rats , Sleep , Wakefulness
8.
Neural Netw ; 121: 242-258, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31581065

ABSTRACT

We present a framework based on iterative free-energy optimization with spiking neural networks for modeling the fronto-striatal system (PFC-BG) for the generation and recall of audio memory sequences. In line with neuroimaging studies carried out in the PFC, we propose a genuine coding strategy using the gain-modulation mechanism to represent abstract sequences based solely on the rank and location of items within them. Based on this mechanism, we show that we can construct a repertoire of neurons sensitive to the temporal structure in sequences, from which we can represent any novel sequence. Free-energy optimization is then used to explore and retrieve the missing indices of the items in the correct order for executive control and compositionality. We show that the gain-modulation mechanism permits the network to be robust to variabilities and to handle long-term dependencies, as it implements a gated recurrent neural network. This model, called Inferno Gate, is an extension of the neural architecture Inferno, standing for Iterative Free-Energy Optimization of Recurrent Neural Networks with Gating or Gain-modulation. In experiments performed with an audio database of ten thousand MFCC vectors, Inferno Gate is capable of efficiently encoding and retrieving chunks of fifty items in length. We then discuss the potential of our network to model the features of working memory in the PFC-BG loop for structural learning, goal-direction and hierarchical reinforcement learning.


Subject(s)
Action Potentials/physiology , Learning/physiology , Memory, Short-Term/physiology , Neural Networks, Computer , Prefrontal Cortex/physiology , Humans , Mental Recall/physiology , Neurons/physiology , Reinforcement, Psychology
9.
J Exp Biol ; 222(Pt Suppl 1)2019 02 06.
Article in English | MEDLINE | ID: mdl-30728231

ABSTRACT

Place recognition is a complex process involving idiothetic and allothetic information. In mammals, evidence suggests that visual information stemming from the temporal and parietal cortical areas ('what' and 'where' information) is merged at the level of the entorhinal cortex (EC) to build a compact code of a place. Local views extracted from specific feature points can provide information important for view cells (in primates) and place cells (in rodents) even when the environment changes dramatically. Robotics experiments using conjunctive cells merging 'what' and 'where' information related to different local views show their important role for obtaining place cells with strong generalization capabilities. This convergence of information may also explain the formation of grid cells in the medial EC if we suppose that: (1) path integration information is computed outside the EC, (2) this information is compressed at the level of the EC owing to projection (which follows a modulo principle) of cortical activities associated with discretized vector fields representing angles and/or path integration, and (3) conjunctive cells merge the projections of different modalities to build grid cell activities. Applying modulo projection to visual information allows an interesting compression of information and could explain more recent results on grid cells related to visual exploration. In conclusion, the EC could be dedicated to the build-up of a robust yet compact code of cortical activity whereas the hippocampus proper recognizes these complex codes and learns to predict the transition from one state to another.


Subject(s)
Entorhinal Cortex/physiology , Primates/physiology , Robotics , Rodentia/physiology , Animals , Models, Neurological
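
A sketch of the modulo-projection idea for grid-like codes, in one dimension: position is projected modulo several spacings, and phase tuning yields periodic responses. Spacings, phase counts, and tuning widths are illustrative assumptions.

    import numpy as np

    spacings = np.array([30.0, 42.0, 58.8])       # hypothetical grid modules (cm)

    def grid_code(position, n_phases=6, width=0.1):
        """Activity of n_phases cells per module, tuned to position mod spacing."""
        phases = np.arange(n_phases) / n_phases
        codes = []
        for s in spacings:
            frac = (position % s) / s                        # phase in [0, 1)
            dist = np.minimum(np.abs(frac - phases), 1 - np.abs(frac - phases))
            codes.append(np.exp(-(dist / width) ** 2))       # circular tuning
        return np.concatenate(codes)

    # Two distant positions get distinct combined codes even though each
    # module alone is ambiguous (periodic): a compact yet robust code.
    print(np.round(grid_code(12.0), 2))
    print(np.round(grid_code(12.0 + 30.0), 2))    # same phase in module 1 only
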
10.
PLoS One ; 12(3): e0173684, 2017.
Article in English | MEDLINE | ID: mdl-28282439

ABSTRACT

The intra-parietal lobe coupled with the Basal Ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem on the neurons' sub-threshold activity for the generation of long neuronal chains. Using a stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is either strengthened to hill-climb the gradient or perturbed to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieving demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle.


Subject(s)
Memory, Short-Term/physiology , Models, Neurological , Nerve Net , Algorithms , Arm , Artificial Limbs , Brain/physiology , Humans , Neural Networks, Computer , Reinforcement, Psychology , Robotics , Stochastic Processes
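
A minimal sketch of the search scheme described above, reduced to stochastic hill-climbing of an input vector under a scalar reward; the quadratic reward stands in for the quality of the network's spike-train match and is purely illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    target = rng.normal(size=32)            # desired activity (hypothetical)

    def reward(v):
        # Stand-in for the dopaminergic evaluation signal.
        return -np.sum((v - target) ** 2)

    v = np.zeros(32)
    best = reward(v)
    for step in range(2000):
        candidate = v + 0.1 * rng.normal(size=32)  # stochastic perturbation
        r = reward(candidate)
        if r > best:                               # improvement: keep the climb
            v, best = candidate, r                 # else: search elsewhere
    print(round(best, 4))
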
11.
Neural Netw ; 62: 102-11, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25240580

ABSTRACT

The so-called self-other correspondence problem in imitation requires finding the transformation that maps the motor dynamics of one partner to our own. This requires a general-purpose sensorimotor mechanism that transforms an external fixation-point (partner's shoulder) reference frame into one's own body-centered reference frame. We propose that the mechanism of gain modulation observed in parietal neurons may generally serve these types of transformations, on the one hand by binding the sensory signals across modalities with radial basis functions (tensor products), and on the other hand by permitting the learning of contextual reference frames. In a shoulder-elbow robotic experiment, gain-field (GF) neurons intertwine the visuo-motor variables so that their amplitude depends on them all. In situations where the body-centered reference frame is modified, the error detected in the visuo-motor mapping can then serve to learn the transformation between the robot's current sensorimotor space and the new one. These situations occur, for instance, when we turn the head on its axis (visual transformation), when we use a tool (body modification), or when we interact with a partner (embodied simulation). Our results defend the idea that the biologically inspired mechanism of gain modulation found in parietal neurons can serve as a basic structure for achieving nonlinear mapping in spatial tasks as well as in cooperative and social functions.


Subject(s)
Motor Cortex/physiology , Motor Neurons/physiology , Parietal Lobe/physiology , Algorithms , Computer Simulation , Elbow/innervation , Elbow/physiology , Humans , Imagination/physiology , Learning/physiology , Models, Neurological , Psychomotor Performance/physiology , Robotics , Shoulder/innervation , Shoulder/physiology , Social Perception , Space Perception/physiology
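
A sketch of gain-field coding in the basis-function style the abstract refers to: Gaussian retinal tuning multiplicatively modulated by a postural (gaze) signal, so the population implicitly carries the body-centered sum. All tuning parameters are illustrative assumptions.

    import numpy as np

    retinal_prefs = np.linspace(-40, 40, 21)   # preferred retinal positions (deg)
    gaze_prefs = np.linspace(-40, 40, 21)      # preferred gaze angles (deg)

    def gain_field(retinal, gaze, sigma=10.0, slope=0.1):
        # Gaussian tuning to retinal position, sigmoid gain for gaze angle;
        # their product is the multiplicative gain field.
        r = np.exp(-((retinal - retinal_prefs[:, None]) ** 2) / (2 * sigma**2))
        g = 1.0 / (1.0 + np.exp(-slope * (gaze - gaze_prefs[None, :])))
        return r * g                           # (21, 21) population response

    # Two stimuli with the same body-centered position (retinal + gaze = 5 deg)
    # give different population patterns, from which a downstream linear
    # readout can recover the common body-centered coordinate.
    A = gain_field(retinal=10.0, gaze=-5.0)
    B = gain_field(retinal=15.0, gaze=-10.0)
    print(A.shape, float(np.abs(A - B).mean()))
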
12.
Front Psychol ; 4: 771, 2013.
Article in English | MEDLINE | ID: mdl-24155736

ABSTRACT

During development, infants learn to differentiate their motor behaviors relative to various contexts by exploring and identifying the correct structures of causes and effects that they can perform; these structures of actions are called task sets or internal models. The ability to detect the structure of new actions, to learn them, and to select on the fly the proper one given the current task set is one great leap in infant cognition. This behavior is an important component of the child's ability to learn-to-learn, a mechanism akin to intrinsic motivation that is argued to drive cognitive development. Accordingly, we propose to model a dual system based on (1) the learning of new task sets and (2) their evaluation relative to their uncertainty and prediction error. The architecture is designed as a two-level neural system for context-dependent behavior (the first system) and task exploration and exploitation (the second system). In our model, the task sets are learned separately by reinforcement learning in the first network after their evaluation and selection in the second one. We perform two different experiments to show the sensorimotor mapping and switching between tasks: a first one in a neural simulation for modeling cognitive tasks, and a second one with an arm-robot for motor task learning and switching. We show that the interplay of several intrinsic mechanisms drives the rapid formation of the neural populations with respect to novel task sets.

13.
PLoS One ; 8(7): e69474, 2013.
Article in English | MEDLINE | ID: mdl-23922718

ABSTRACT

The question of whether newborns possess inborn social skills is a long-standing debate in developmental psychology. Fetal behavioral and anatomical observations show evidence for the control of eye movements and facial behaviors during the third trimester of pregnancy, whereas specific sub-cortical areas, like the superior colliculus (SC) and the striatum, appear to be functionally mature enough to support these behaviors. These observations suggest that the newborn is potentially mature for developing minimal social skills. In this manuscript, we propose that the mechanism of sensory alignment observed in the SC is particularly important for enabling the social skills observed at birth, such as facial preference and facial mimicry. In a computational simulation of the maturing superior colliculus connected to a simulated facial tissue of a fetus, we model how the incoming tactile information is used to direct visual attention toward faces. We suggest that the unisensory superficial visual layer (eye-centered) and the deep somatopic layer (face-centered) in the SC are combined into an intermediate layer for visuo-tactile integration, and that multimodal alignment in this third layer allows newborns to have a sensitivity to the configuration of eyes and mouth. We show that the visual and tactile maps align through a Hebbian learning stage and strengthen their synaptic links to the intermediate layer. As a result, the global network produces emergent properties such as sensitivity to the spatial configuration of face-like patterns and the detection of eye and mouth movements.


Subject(s)
Models, Neurological , Superior Colliculi/physiology , Visual Perception/physiology , Algorithms , Face/anatomy & histology , Face/embryology , Female , Fetus/anatomy & histology , Fetus/physiology , Humans , Infant, Newborn , Nerve Net/physiology , Neuronal Plasticity/physiology , Photic Stimulation , Pregnancy , Touch/physiology
14.
J Acoust Soc Am ; 134(1): 813-21, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23862887

ABSTRACT

In order to minimize the duration of acoustic measurements and to characterize areas that are homogeneous from a temporal point of view, measurements at six locations were carried out continuously over three months in Paris. Around fifty thousand samples of 5-min, 10-min, 15-min, 20-min, 30-min, and 1-h duration were extracted for each location. Each sample is characterized by eleven energy indicators and ten event descriptors. In this paper, the analysis of a crossroad location is detailed. Through hierarchical ascendant classification and artificial neural network classification, it is shown that four homogeneous periods can be detected: two during the night, one during the day, and one transition corresponding either to the awakening of the city or to the moment when it falls asleep. Measurements of 10 min are necessary to discriminate these time periods at the crossroad location. At the end of the paper, a comparison with the other locations shows that the minimum duration lies between 10 and 20 min. The homogeneous periods are connected to human activities and depend on the location. Energy indicators such as LAeq, LA10, or LA90, together with event indicators, are necessary to characterize the different clusters.
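
For reference, the standard definitions of the indicators named above (LAeq as an energy mean, LA10/LA90 as percentile levels), applied here to synthetic sample data.

    import numpy as np

    def laeq(levels_db):
        """Equivalent continuous level: energy mean of A-weighted levels (dB)."""
        return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(levels_db) / 10.0)))

    def la_n(levels_db, n):
        """LA_N: level exceeded N% of the time (LA10 ~ traffic peaks, LA90 ~ background)."""
        return float(np.percentile(levels_db, 100 - n))

    rng = np.random.default_rng(6)
    levels = 55 + 8 * rng.random(600)   # ten minutes of 1-s levels (synthetic)
    print(round(laeq(levels), 1), round(la_n(levels, 10), 1), round(la_n(levels, 90), 1))
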

15.
Front Neural Circuits ; 4: 122, 2010.
Article in English | MEDLINE | ID: mdl-21151359

ABSTRACT

Hippocampal "place cells" and the precession of their extracellularly recorded spiking during traversal of a "place field" are well-established phenomena. More recent experiments describe associated entorhinal "grid cell" firing, but to date only conceptual models have been offered to explain the potential interactions between the entorhinal cortex (EC) and hippocampus. To better understand not only spatial navigation, but mechanisms of episodic and semantic memory consolidation and reconsolidation, more detailed physiological models are needed to guide confirmatory experiments. Here, we report the results of a putative entorhinal-hippocampal circuit level model that incorporates recurrent asynchronous-irregular non-linear (RAIN) dynamics, in the context of recent in vivo findings showing specific intracellular-extracellular precession disparities and place field destabilization by entorhinal lesioning. In particular, during computer-simulated rodent maze navigation, our model demonstrates asymmetric ramp-like depolarization and increased theta power and frequency (which can explain the phase precession disparity), as well as a role for STDP and K(AHP) channels. Additionally, we propose distinct roles for two entorhinal cell populations projecting to hippocampus. Grid cell populations transiently trigger place field activity, while tonic "suppression-generating cell" populations minimize aberrant place cell activation and limit the number of active place cells during traversal of a given field. Applied to place-cell RAIN networks, this tonic suppression explains an otherwise seemingly discordant association with overall increased firing. The findings of this circuit level model suggest in vivo and in vitro experiments that could refute or support the proposed mechanisms of place cell dynamics and modulating influences of EC.

16.
Neural Comput ; 20(12): 2937-66, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18624656

ABSTRACT

We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.


Subject(s)
Learning/physiology , Mathematics , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Nonlinear Dynamics , Action Potentials/physiology , Animals , Entropy , Feedback , Humans , Synapses/physiology , Time Factors
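
A sketch of the Jacobian-based analysis for a discrete-time rate network x_{t+1} = tanh(W x_t), whose Jacobian is J_t = diag(1 - x_{t+1}^2) W; the largest Lyapunov exponent is estimated as the average log growth of a tangent vector. Network size and gain are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    n, g = 100, 1.5                       # g > 1: chaotic regime for this scaling
    W = rng.normal(0, g / np.sqrt(n), (n, n))

    x = rng.normal(size=n)
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    lyap, T = 0.0, 2000
    for _ in range(T):
        x = np.tanh(W @ x)
        v = (1 - x**2) * (W @ v)          # tangent dynamics: v <- J_t v
        norm = np.linalg.norm(v)
        lyap += np.log(norm)
        v /= norm                         # renormalize to avoid overflow
    print("largest Lyapunov exponent ~", round(lyap / T, 3))
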
17.
J Physiol Paris ; 101(1-3): 136-48, 2007.
Article in English | MEDLINE | ID: mdl-18042357

ABSTRACT

The aim of the present paper is to study the effects of Hebbian learning in random recurrent neural networks with biological connectivity, i.e., sparse connections and separate populations of excitatory and inhibitory neurons. We furthermore consider that the neuron dynamics may occur at a shorter time scale than synaptic plasticity, and consider the possibility of learning rules with passive forgetting. We show that the application of such Hebbian learning leads to drastic changes in the network dynamics and structure. In particular, the learning rule contracts the norm of the weight matrix and yields a rapid decay of the dynamics' complexity and entropy. In other words, the network is rewired by Hebbian learning into a new synaptic structure that emerges with learning on the basis of the correlations that progressively build up between neurons. We also observe that, within this emerging structure, the strongest synapses organize as a small-world network. The second effect of the decay of the weight matrix spectral radius is a rapid contraction of the spectral radius of the Jacobian matrix. This drives the system through the "edge of chaos", where sensitivity to the input pattern is maximal. Taken together, this scenario is remarkably well predicted by theoretical arguments derived from dynamical systems and graph theory.


Subject(s)
Neural Networks, Computer , Neurons/physiology , Nonlinear Dynamics , Animals , Cerebral Cortex/physiology , Computer Simulation , Humans , Neuronal Plasticity , Synapses/physiology
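
A sketch of a Hebbian rule with passive forgetting, W <- (1 - lambda) W + eta * x x^T, showing the contraction of the weight-matrix spectral radius described above; sizes and constants are illustrative, and the paper's sparse excitatory/inhibitory structure is omitted.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 100
    W = rng.normal(0, 1.5 / np.sqrt(n), (n, n))   # initial radius ~ 1.5 (chaotic)
    x = rng.normal(size=n)

    def spectral_radius(M):
        return np.abs(np.linalg.eigvals(M)).max()

    print("before:", round(spectral_radius(W), 3))
    for t in range(500):
        x = np.tanh(W @ x)
        # Passive forgetting (rate 0.01) plus a Hebbian outer-product term.
        W = (1 - 0.01) * W + 1e-4 * np.outer(x, x)
    print("after :", round(spectral_radius(W), 3))  # radius has contracted
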
18.
Front Neurorobot ; 1: 3, 2007.
Article in English | MEDLINE | ID: mdl-18958274

ABSTRACT

After a short review of biologically inspired navigation architectures, mainly relying on modeling the hippocampal anatomy, or at least some of its functions, we present a navigation and planning model for mobile robots. This architecture is based on a model of the hippocampal and prefrontal interactions. In particular, the system relies on the definition of a new cell type, "transition cells", which encompasses traditional "place cells".

19.
Biol Cybern ; 87(3): 185-98, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12200614

ABSTRACT

Taking a global analogy with the structure of perceptual biological systems, we present a system composed of two layers of real-valued sigmoidal neurons. The primary layer receives stimulating spatiotemporal signals, and the secondary layer is a fully connected random recurrent network. This secondary layer spontaneously displays complex chaotic dynamics. All connections have a constant time delay. We use for our experiments a Hebbian (covariance) learning rule. This rule slowly modifies the weights under the influence of a periodic stimulus. The effect of learning is twofold: (i) it simplifies the secondary-layer dynamics, which eventually stabilizes to a periodic orbit; and (ii) it connects the secondary layer to the primary layer, and realizes a feedback from the secondary to the primary layer. This feedback signal is added to the incoming signal, and matches it (i.e., the secondary layer performs a one-step prediction of the forthcoming stimulus). After learning, a resonant behavior can be observed: the system resonates with familiar stimuli, which activates a feedback signal. In particular, this resonance allows the recognition and retrieval of partial signals, and dynamic maintenance of the memory of past stimuli. This resonance is highly sensitive to the temporal relationships and to the periodicity of the presented stimuli. When we present stimuli which do not match in time or space, the feedback remains silent. The number of different stimuli for which resonant behavior can be learned is analyzed. As with Hopfield networks, the capacity is proportional to the size of the second, recurrent layer. Moreover, the high capacity displayed allows the implementation of our model on real-time systems interacting with their environment. Such an implementation is reported in the case of a simple behavior-based recognition task on a mobile robot. Finally, we present some functional analogies with biological systems in terms of autonomy and dynamic binding, and present some hypotheses on the computational role of feedback connections.


Subject(s)
Brain/physiology , Neural Networks, Computer , Psychomotor Performance/physiology , Robotics , Conditioning, Psychological/physiology , Feedback, Physiological/physiology , Motor Neurons/physiology , Neurons, Afferent/physiology , Perception/physiology , Recognition, Psychology/physiology
20.
Behav Brain Sci ; 24(6): 1051-1053, 2001 Dec.
Article in English | MEDLINE | ID: mdl-18241362

ABSTRACT

As models of living beings acting in the real world, biorobots undergo an accelerated "phylogenic" complexification. The first efficient robots performed simple animal behaviours (e.g., those of ants or crickets) and, later on, isolated elementary behaviours of complex beings. The increasing complexity of the tasks robots are dedicated to is matched by an increasing complexity and versatility of the architectures, which now support conditioning or even elementary planning.
