Results 1 - 20 of 31
1.
Artif Life ; : 1-14, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38805661

ABSTRACT

Several simulation models have demonstrated how flocking behavior emerges from the interaction among individuals that react to the relative orientation of their neighbors based on simple rules. However, the precise nature of these rules and the relationship between the characteristics of the rules and the efficacy of the resulting collective behavior are unknown. In this article, we analyze the effect of the strength with which individuals react to the orientation of neighbors located in different sectors of their visual fields and the benefit that could be obtained by using control rules that are more elaborate than those normally used. Our results demonstrate that considering only neighbors located on the frontal side of the visual field permits an increase in the aggregation level of the swarm. Using more complex rules and/or additional sensory information does not lead to better performance.
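For readers who want to experiment with the kind of rule discussed here, the following is a minimal sketch (not the authors' model) of a boid-style alignment update in which each agent reacts only to neighbors located in the frontal sector of its visual field; all parameter values (sector half-angle, interaction radius, turn gain) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' model): an alignment-only flocking update in
# which each agent reacts only to neighbors located in the frontal sector of
# its visual field. All parameter values are illustrative assumptions.
N = 50                                   # number of agents
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(N, 2))
heading = rng.uniform(0.0, 2 * np.pi, size=N)
SPEED, TURN_GAIN, RADIUS, HALF_ANGLE = 0.05, 0.2, 2.0, np.pi / 2

def step(pos, heading):
    new_heading = heading.copy()
    for i in range(N):
        rel = pos - pos[i]                                  # vectors toward the other agents
        dist = np.linalg.norm(rel, axis=1)
        bearing = np.arctan2(rel[:, 1], rel[:, 0]) - heading[i]
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
        # consider only neighbors that are close enough and within the frontal sector
        frontal = (dist > 0) & (dist < RADIUS) & (np.abs(bearing) < HALF_ANGLE)
        if frontal.any():
            # turn toward the mean heading of the frontal neighbors
            mean_dir = np.arctan2(np.sin(heading[frontal]).mean(),
                                  np.cos(heading[frontal]).mean())
            diff = (mean_dir - heading[i] + np.pi) % (2 * np.pi) - np.pi
            new_heading[i] += TURN_GAIN * diff
    new_pos = pos + SPEED * np.stack([np.cos(new_heading), np.sin(new_heading)], axis=1)
    return new_pos, new_heading

for _ in range(1000):
    pos, heading = step(pos, heading)
```

Restricting HALF_ANGLE so that only frontal neighbors are considered is the kind of manipulation whose effect on swarm aggregation the abstract refers to.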

2.
Evol Comput ; : 1-18, 2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37390220

ABSTRACT

Exposing an evolutionary algorithm that is used to evolve robot controllers to variable conditions is necessary to obtain solutions which are robust and can cross the reality gap. However, we do not yet have methods for analyzing and understanding the impact of the varying morphological conditions on the evolutionary process, and therefore for choosing suitable variation ranges. By morphological conditions, we refer to the starting state of the robot and to variations in its sensor readings during operation due to noise. In this article, we introduce a method that permits us to measure the impact of these morphological variations, and we analyze the relation between the amplitude of the variations, the modality with which they are introduced, and the performance and robustness of the evolving agents. Our results demonstrate that (i) the evolutionary algorithm can tolerate morphological variations which have a very high impact, (ii) variations affecting the actions of the agent are tolerated much better than variations affecting the initial state of the agent or of the environment, and (iii) improving the accuracy of the fitness measure through multiple evaluations is not always useful. Moreover, our results show that morphological variations permit the generation of solutions which perform better both in varying and non-varying conditions.

3.
Front Robot AI ; 9: 1020462, 2022.
Article in English | MEDLINE | ID: mdl-36353578
4.
Front Robot AI ; 9: 994485, 2022.
Article in English | MEDLINE | ID: mdl-36267423

ABSTRACT

The propensity of evolutionary algorithms to generate compact solutions has advantages and disadvantages. On the one hand, compact solutions can be cheaper, lighter, and faster than less compact ones. On the other hand, compact solutions might lack evolvability, i.e., they might have a lower probability of improving as a result of genetic variations. In this work we study the relation between phenotypic complexity and evolvability in the case of soft robots with varying morphology. We demonstrate a correlation between phenotypic complexity and evolvability. We demonstrate that the tendency to select compact solutions originates from the fact that the fittest robots often correspond to phenotypically simple robots which are robust to genetic variations but lack evolvability. Finally, we demonstrate that the efficacy of the evolutionary process can be improved by increasing the probability of genetic variations which produce a complexification of the agents' phenotype or by using absolute mutation rates.
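As a rough illustration of biasing genetic variation toward complexification, the sketch below mutates a variable-length genotype with a higher probability of adding elements than of removing them; the genotype encoding and the probabilities are assumptions made for the example, not the paper's implementation.

```python
import random

# Hedged sketch: a variable-length genotype (e.g., a list of "voxel" genes for a
# soft robot) is mutated with a higher probability of adding material than of
# removing it, biasing variation toward complexification. Probabilities are
# illustrative assumptions.
P_ADD, P_REMOVE = 0.3, 0.1          # remaining probability mass perturbs existing genes

def mutate(genotype):
    child = list(genotype)
    r = random.random()
    if r < P_ADD:                                        # complexifying variation
        child.insert(random.randrange(len(child) + 1), random.gauss(0.0, 1.0))
    elif r < P_ADD + P_REMOVE and len(child) > 1:        # simplifying variation
        del child[random.randrange(len(child))]
    else:                                                # size-preserving perturbation
        i = random.randrange(len(child))
        child[i] += random.gauss(0.0, 0.1)
    return child

parent = [random.gauss(0.0, 1.0) for _ in range(8)]
offspring = mutate(parent)
```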

5.
PLoS One ; 16(4): e0250040, 2021.
Article in English | MEDLINE | ID: mdl-33857220

ABSTRACT

The efficacy of evolutionary or reinforcement learning algorithms for continuous control optimization can be enhanced by including an additional neural network dedicated to feature extraction, trained through self-supervision. In this paper we introduce a method that permits the training of the feature-extraction network to continue during the training of the control network. We demonstrate that the parallel training of the two networks is crucial for agents that operate on the basis of egocentric observations, and that the extraction of features provides an advantage also in problems that do not benefit from dimensionality reduction. Finally, we compare different feature-extraction methods and show that sequence-to-sequence learning outperforms the alternative methods considered in previous studies.
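A minimal sketch of the general scheme described here, assuming linear layers and a toy prediction objective for brevity: the feature-extraction network keeps being updated through a self-supervised loss (here, next-observation prediction) while the control network is optimized on the extracted features, with a simple stochastic hill-climbing step standing in for the actual learning algorithm.

```python
import numpy as np

# Sketch only: parallel training of a feature extractor (self-supervised) and a
# controller (reward-driven). Layers are linear/tanh and the environment is a
# stand-in; none of this reproduces the paper's networks or benchmarks.
rng = np.random.default_rng(0)
OBS, FEAT, ACT = 16, 4, 2
W_feat = rng.normal(0.0, 0.1, (FEAT, OBS))   # feature-extraction network
W_pred = rng.normal(0.0, 0.1, (OBS, FEAT))   # self-supervised head: predict the next observation
W_ctrl = rng.normal(0.0, 0.1, (ACT, FEAT))   # control network acting on the features
LR = 1e-2

def extract(obs):
    return np.tanh(W_feat @ obs)

def reward(W, feat):
    action = np.tanh(W @ feat)
    return -np.sum((action - 0.5) ** 2)      # toy task: drive all actions toward 0.5

for iteration in range(1000):
    obs = rng.normal(size=OBS)
    next_obs = obs + 0.1 * rng.normal(size=OBS)   # stand-in for environment dynamics
    feat = extract(obs)
    # (1) self-supervised update of the extractor: minimize next-observation prediction error
    err = W_pred @ feat - next_obs
    W_pred -= LR * np.outer(err, feat)
    W_feat -= LR * np.outer((W_pred.T @ err) * (1.0 - feat ** 2), obs)
    # (2) controller update from the task reward (toy hill-climbing step)
    trial = W_ctrl + 0.05 * rng.normal(size=W_ctrl.shape)
    if reward(trial, feat) > reward(W_ctrl, feat):
        W_ctrl = trial
```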


Subject(s)
Learning , Neural Networks, Computer , Algorithms , Humans
6.
Sci Rep ; 11(1): 8985, 2021 04 26.
Article in English | MEDLINE | ID: mdl-33903698

ABSTRACT

We demonstrate how the evolutionary training of embodied agents can be extended with a curriculum learning algorithm that automatically selects the environmental conditions in which the evolving agents are evaluated. The environmental conditions are selected so as to adjust the level of difficulty to the ability of the current evolving agents and to challenge their weaknesses. The method does not require domain knowledge and does not introduce additional hyperparameters. The results collected on two benchmark problems, which require solving a task in significantly varying environmental conditions, demonstrate that the proposed method outperforms conventional learning methods and generates solutions which are robust to variations and able to cope with different environmental conditions.
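The following is a hedged sketch of the kind of automatic curriculum hinted at here (the selection criterion, the success-rate estimator, and the condition encoding are assumptions for illustration): conditions are ranked by how close the current agents' success rate is to an intermediate target, so evaluation concentrates on conditions that are neither trivially easy nor hopelessly hard.

```python
import random

# Illustrative sketch, not the paper's algorithm: pick the environmental
# conditions for the next evaluations according to how well the current agent
# copes with them, preferring conditions of intermediate difficulty.
def estimated_success(condition, agent):
    """Hypothetical placeholder: fraction of episodes the agent solves under
    this condition (in practice this would come from actual evaluations)."""
    return random.random()

def select_conditions(pool, agent, n, target=0.5):
    # Rank conditions by the distance of the agent's success rate from the
    # target; conditions solved trivially or failed completely are deprioritized.
    scored = sorted(pool, key=lambda c: abs(estimated_success(c, agent) - target))
    return scored[:n]

condition_pool = [{"pole_length": l} for l in (0.3, 0.5, 0.7, 0.9, 1.1)]
current_agent = object()                 # stand-in for the current evolving agent
training_conditions = select_conditions(condition_pool, current_agent, n=3)
```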

7.
Artif Life ; 26(4): 409-430, 2020.
Article in English | MEDLINE | ID: mdl-33284663

ABSTRACT

The possibility of using competitive evolutionary algorithms to generate long-term progress is normally prevented by convergence on limit-cycle dynamics, in which the evolving agents keep progressing against their current competitors by periodically rediscovering solutions adopted previously. This leads to local but not to global progress (i.e., progress against all possible competitors). We propose a new competitive algorithm that produces long-term global progress by identifying and filtering out opportunistic variations, that is, variations leading to progress against current competitors and retrogression against other competitors. The efficacy of the method is validated on the coevolution of predator and prey robots, a classic problem that has been used in related research. The accumulation of global progress over many generations leads to effective solutions that involve the production of articulated behaviors. The complexity of the behavior displayed by the evolving robots increases across generations, although progress in performance is not always accompanied by behavioral complexification.
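As a schematic illustration of the filtering idea (function names, the scoring placeholder, and the archive policy are assumptions, not the published algorithm), a candidate variation could be retained only when it improves performance against the current competitors without losing ground against an archive of earlier competitors:

```python
import random

# Sketch only: filter out "opportunistic" variations, i.e., variations that
# gain against the current competitors but regress against earlier ones.
def mean_outcome(individual, opponents):
    """Placeholder for the mean competition outcome of individual vs. opponents."""
    return sum(random.random() for _ in opponents) / max(len(opponents), 1)

def accept_variation(parent, child, current_opponents, archive):
    improves_now = mean_outcome(child, current_opponents) > mean_outcome(parent, current_opponents)
    no_regression = mean_outcome(child, archive) >= mean_outcome(parent, archive)
    return improves_now and no_regression      # reject opportunistic progress

archive = []   # would grow with representative competitors from past generations
```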


Subject(s)
Algorithms , Biological Evolution , Robotics
8.
Front Robot AI ; 7: 98, 2020.
Article in English | MEDLINE | ID: mdl-33501265

ABSTRACT

We analyze the efficacy of modern neuro-evolutionary strategies for continuous control optimization. Overall, the results collected on a wide variety of qualitatively different benchmark problems indicate that these methods are generally effective and scale well with respect to the number of parameters and the complexity of the problem. Moreover, they are relatively robust with respect to the setting of hyper-parameters. The comparison of the most promising methods indicates that the OpenAI-ES algorithm outperforms or equals the other algorithms on all considered problems. Moreover, we demonstrate that the reward functions optimized for reinforcement learning methods are not necessarily effective for evolutionary strategies, and vice versa. This finding can lead to a reconsideration of the relative efficacy of the two classes of algorithms, since it implies that the comparisons performed to date are biased toward one or the other class.
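Since the abstract singles out OpenAI-ES, a compact sketch of its update rule on a toy objective may help; the fitness function, population size, and learning rate below are illustrative, and details used in practice (weight decay, observation normalization, parallel evaluation) are omitted.

```python
import numpy as np

# Minimal sketch of the OpenAI-ES update (Salimans et al., 2017) on a toy
# objective: sample Gaussian perturbations, evaluate mirrored pairs, and move
# the parameters along a rank-weighted estimate of the fitness gradient.
rng = np.random.default_rng(0)
DIM, POP, SIGMA, LR = 20, 40, 0.1, 0.03
theta = np.zeros(DIM)

def fitness(params):
    return -np.sum((params - 1.0) ** 2)      # toy objective: reach the all-ones vector

for generation in range(500):
    eps = rng.normal(size=(POP, DIM))
    # mirrored sampling: each perturbation is evaluated in both directions
    rewards = np.array([fitness(theta + SIGMA * e) - fitness(theta - SIGMA * e)
                        for e in eps])
    # rank-based normalization keeps the gradient estimate scale-free
    ranks = rewards.argsort().argsort().astype(float)
    weights = ranks / (POP - 1) - 0.5
    grad = (weights[:, None] * eps).sum(axis=0) / (POP * SIGMA)
    theta += LR * grad
```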

9.
PLoS One ; 14(3): e0213193, 2019.
Article in English | MEDLINE | ID: mdl-30822316

ABSTRACT

We propose a method for evolving neural network controllers that are robust with respect to variations of the environmental conditions (i.e., that can operate effectively in new conditions immediately, without the need to adapt to variations). The method specifies how the fitness of candidate solutions can be evaluated, how the environmental conditions should vary during the course of the evolutionary process, which algorithm can be used, and how the best solution can be identified. The obtained results show that the proposed method is effective and computationally tractable. It allows us to improve performance on an extended version of the double-pole balancing problem, to outperform the best available human-designed controllers on a car racing problem, and to generate effective solutions for a swarm robotics problem. The comparison of different algorithms indicates that the CMA-ES and xNES methods, which operate by optimizing a distribution of parameters, represent the best options for the evolution of robust neural network controllers.
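A hedged sketch of the evaluation scheme described here, assuming the third-party `cma` package for CMA-ES and a stand-in episode function: each candidate controller is scored over several episodes whose environmental conditions are re-sampled, so the fitness rewards robustness rather than specialization to a single condition.

```python
import numpy as np
import cma   # third-party CMA-ES package (pip install cma); an assumption of this sketch

# Sketch only: fitness is the mean return over episodes with varied conditions.
rng = np.random.default_rng(0)
EPISODES_PER_EVAL = 5

def episode_return(params, pole_length):
    """Hypothetical placeholder for running one double-pole-balancing episode."""
    return -np.sum(params ** 2) - abs(pole_length - 0.5)

def robust_fitness(params):
    conditions = rng.uniform(0.3, 0.9, size=EPISODES_PER_EVAL)   # varied conditions
    returns = [episode_return(params, c) for c in conditions]
    return -np.mean(returns)          # cma minimizes, so negate the mean return

es = cma.CMAEvolutionStrategy(10 * [0.0], 0.5)
for _ in range(200):                  # fixed evaluation budget for the sketch
    candidates = es.ask()
    es.tell(candidates, [robust_fitness(np.asarray(c)) for c in candidates])
```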


Subject(s)
Evolution, Molecular , Models, Theoretical , Neural Networks, Computer , Gene-Environment Interaction
10.
Entropy (Basel) ; 21(4)2019 Mar 30.
Article in English | MEDLINE | ID: mdl-33267064

ABSTRACT

How do living organisms decide and act with limited and uncertain information? Here, we discuss two computational approaches to these challenging problems: a "cognitive" and a "sensorimotor" enrichment of stimuli. In both approaches, the key notion is that agents can strategically modulate their behavior in informative ways, e.g., to disambiguate amongst alternative hypotheses or to favor the perception of stimuli providing the information necessary to act appropriately later. We discuss how, despite their differences, both approaches appeal to the notion that actions must obey and balance both epistemic (i.e., information-gathering or uncertainty-reducing) and pragmatic (i.e., goal- or reward-maximizing) imperatives. Our computationally guided analysis reveals that epistemic behavior is fundamental to understanding several facets of cognitive processing, including perception, decision making, and social interaction.

11.
PLoS One ; 13(7): e0198788, 2018.
Article in English | MEDLINE | ID: mdl-30020942

ABSTRACT

In this paper we systematically compare the most promising neuroevolutionary methods and two new original methods on the double-pole balancing problem with respect to: the ability to discover solutions that are robust to variations of the environment, the speed with which such solutions are found, and the ability to scale up to more complex versions of the problem. The results indicate that the two original methods introduced in this paper and the Exponential Natural Evolutionary Strategy method largely outperform the other methods with respect to all considered criteria. The results collected in different experimental conditions also reveal the importance of regulating the selective pressure and of exposing evolving agents to variable environmental conditions. The data collected and the results of the comparisons are used to identify the most effective methods and the most promising research directions.


Subject(s)
Biological Evolution , Central Nervous System , Motor Neurons/physiology , Nerve Net/physiology , Algorithms , Animals , Humans
12.
Artif Life ; 24(4): 277-295, 2018.
Article in English | MEDLINE | ID: mdl-30681913

ABSTRACT

Previous evolutionary studies demonstrated how robust solutions can be obtained by evaluating agents multiple times in variable environmental conditions. Here we demonstrate how agents evolved in environments that vary across generations outperform agents evolved in environments that remain fixed. Moreover, we demonstrate that the best performance is obtained when the environment varies at a moderate rate across generations, that is, when the environment does not vary every generation but every N generations. The advantage of exposing evolving agents to environments that vary across generations at a moderate rate is due, at least in part, to the fact that this condition maximizes the retention of changes that alter the behavior of the agents, which in turn facilitates the discovery of better solutions. Finally, we demonstrate that moderate environmental variations are also advantageous from an evolutionary computation perspective, that is, from the perspective of maximizing the performance that can be achieved within a limited computational budget.
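A toy sketch of the regime being compared (the genotype encoding, the evaluation function, and the GA operators are all assumptions): the environmental conditions used for fitness evaluation are re-sampled only every N generations rather than at every generation.

```python
import random

# Minimal sketch: a generational loop in which the environment used for
# evaluation changes every N generations (a "moderate" rate of variation).
N = 10                                   # re-sample the environment every N generations
POP_SIZE, GENERATIONS = 20, 200

def evaluate(genotype, environment):
    """Placeholder for running the agent in the given environment."""
    return -sum((g - environment["target"]) ** 2 for g in genotype)

population = [[random.gauss(0, 1) for _ in range(5)] for _ in range(POP_SIZE)]
environment = {"target": random.uniform(-1, 1)}

for gen in range(GENERATIONS):
    if gen % N == 0:                                     # moderate rate of variation
        environment = {"target": random.uniform(-1, 1)}
    ranked = sorted(population, key=lambda g: evaluate(g, environment), reverse=True)
    parents = ranked[:POP_SIZE // 2]
    population = parents + [[x + random.gauss(0, 0.1) for x in random.choice(parents)]
                            for _ in range(POP_SIZE - len(parents))]
```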


Subject(s)
Biological Evolution , Environment , Models, Biological , Computational Biology , Computer Simulation
13.
PLoS One ; 11(11): e0166174, 2016.
Article in English | MEDLINE | ID: mdl-27846301

ABSTRACT

In this paper we show how a multilayer neural network trained to master a context-dependent task, in which the action co-varies with one stimulus in a first context and with a second stimulus in an alternative context, exhibits selective attention, i.e., the filtering out of irrelevant information. This effect is rather robust and is observed in several variations of the experiment in which the characteristics of the network, as well as of the training procedure, have been varied. Our result demonstrates how the filtering out of irrelevant information can originate spontaneously as a consequence of the regularities present in the context-dependent training set and therefore does not necessarily depend on specific architectural constraints. The post-evaluation of the network in an instructed-delay experimental scenario shows that the behaviour of the network is consistent with the data collected in neuropsychological studies. The analysis of the network at the end of the training process indicates that selective attention originates as a result of the effects caused by relevant and irrelevant stimuli, mediated by context-dependent and context-independent bidirectional associations between stimuli and actions that are extracted by the network during learning.
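A small sketch of the kind of context-dependent training set described here (the binary encoding and trial structure are assumptions made for illustration): in one context the correct action is determined by the first stimulus, in the other context by the second stimulus, so mastering the task requires ignoring the stimulus that is irrelevant in the current context.

```python
import numpy as np

# Sketch of a context-dependent training set: the correct action co-varies with
# stimulus A in context 0 and with stimulus B in context 1.
rng = np.random.default_rng(0)

def make_trial():
    stim_a = rng.integers(0, 2)           # two possible values per stimulus
    stim_b = rng.integers(0, 2)
    context = rng.integers(0, 2)
    action = stim_a if context == 0 else stim_b
    x = np.array([stim_a, stim_b, context], dtype=float)
    return x, action

dataset = [make_trial() for _ in range(1000)]
```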


Subject(s)
Attention/physiology , Cognition/physiology , Models, Biological , Neurons/physiology , Animals , Color Vision/physiology , Discrimination Learning/physiology , Haplorhini/physiology , Humans , Neural Networks, Computer , Reaction Time/physiology , Vision, Ocular/physiology
14.
Sci Rep ; 6: 32785, 2016 09 12.
Article in English | MEDLINE | ID: mdl-27616139

ABSTRACT

The relative rarity of reciprocity in nature, contrary to theoretical predictions that it should be widespread, is currently one of the major puzzles in social evolution theory. Here we use evolutionary robotics to solve this puzzle. We show that models based on game theory are misleading because they neglect the mechanics of behavior. In a series of experiments with simulated robots controlled by artificial neural networks, we find that reciprocity does not evolve, and show that this results from a general constraint that likely also prevents it from evolving in the wild. Reciprocity can evolve if it requires very few mutations, as is usually assumed in evolutionary game theoretic models, but not if, more realistically, it requires the accumulation of many adaptive mutations.

15.
PLoS One ; 11(8): e0160679, 2016.
Article in English | MEDLINE | ID: mdl-27505162

ABSTRACT

We investigate the relation between the development of reactive and cognitive capabilities. In particular, we investigate whether the development of reactive capabilities prevents or promotes the development of cognitive capabilities in a population of evolving robots that have to solve a time-delay navigation task in a double T-maze environment. Analysis of the experiments reveals that the evolving robots always select reactive strategies that rely on cognitive offloading, i.e., acting so as to encode, in the relation between the agent and the environment, states that can later be used to regulate the agent's behavior. The discovery of these strategies does not prevent, but rather facilitates, the development of cognitive strategies that also rely on the extraction and use of internal states. Detailed analysis of the results obtained in the different experimental conditions provides evidence that helps clarify why, contrary to expectations, reactive and cognitive strategies tend to have synergetic relationships.


Subject(s)
Cognition , Neural Networks, Computer , Robotics
16.
PLoS One ; 11(7): e0158627, 2016.
Article in English | MEDLINE | ID: mdl-27409589

ABSTRACT

We demonstrate how the need to cope with operational faults enables evolving circuits to find fitter solutions. The analysis of the results obtained in different experimental conditions indicates that, in the absence of faults, evolution tends to select circuits that are small and have low phenotypic variability and evolvability. The need to face operational faults, instead, drives evolution toward the selection of larger circuits that are truly robust with respect to genetic variations and that have a greater level of phenotypic variability and evolvability. Overall, our results indicate that the need to cope with operational faults leads to the selection of circuits that have a greater probability of generating better circuits as a result of genetic variation, with respect to a control condition in which circuits are not subjected to faults.
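To make the evaluation-under-faults idea concrete, here is a hedged sketch (the circuit representation, fault model, and fault probability are assumptions, not the paper's setup) in which each fitness evaluation randomly forces some gates to a stuck output, so selection favors circuits whose function survives such faults.

```python
import operator
import random

# Sketch only: evaluate a feed-forward gate list under randomly injected faults;
# a faulty gate is stuck at output 0 for the whole evaluation.
FAULT_PROB = 0.05

def run_circuit(gates, inputs, faults=frozenset()):
    """Evaluate gates of the form (op, a, b), where a and b index earlier values."""
    values = list(inputs)
    for i, (op, a, b) in enumerate(gates):
        out = 0 if i in faults else op(values[a], values[b])
        values.append(out)
    return values[-1]

def fitness_with_faults(gates, test_cases, trials=10):
    total = 0
    for _ in range(trials):
        faults = {i for i in range(len(gates)) if random.random() < FAULT_PROB}
        total += sum(run_circuit(gates, x, faults) == y for x, y in test_cases)
    return total / (trials * len(test_cases))

# Example: a two-gate circuit computing NAND of the two inputs.
gates = [(operator.and_, 0, 1), (lambda a, b: 1 - a, 2, 2)]
tests = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(fitness_with_faults(gates, tests))
```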


Subject(s)
Biological Evolution , Computational Biology/instrumentation , Computational Biology/methods , Computer Simulation , Genetics, Population , Models, Genetic , Evolution, Molecular , Gene Regulatory Networks , Genetic Variation , Genotype , Phenotype , Selection, Genetic
17.
Theory Biosci ; 135(4): 201-216, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27443311

ABSTRACT

In this paper, we show how the development of plastic behaviours, i.e., behaviours displaying a modular organisation characterised by behavioural subunits that are alternated in a context-dependent manner, can enable evolving robots to solve their adaptive task more efficiently, even when the task does not require the accomplishment of multiple conflicting functions. The comparison of the results obtained in different experimental conditions indicates that the most important prerequisites for the evolution of behavioural plasticity are: the possibility to generate and perceive affordances (i.e., opportunities for behaviour execution), the possibility to rely on flexible regulatory processes that exploit both external and internal cues, and the possibility to realise smooth and effective transitions between behaviours.


Subject(s)
Machine Learning , Neural Networks, Computer , Robotics/methods , Algorithms , Animals , Behavior, Animal , Computer Simulation , Probability , Time Factors
18.
Artif Life ; 22(3): 319-52, 2016.
Article in English | MEDLINE | ID: mdl-27472415

ABSTRACT

Coevolving systems are notoriously difficult to understand. This is largely due to the Red Queen effect that dictates heterospecific fitness interdependence. In simulation studies of coevolving systems, master tournaments are often used to obtain more informed fitness measures by testing evolved individuals against past and future opponents. However, such tournaments still contain certain ambiguities. We introduce the use of a phenotypic cluster analysis to examine the distribution of opponent categories throughout an evolutionary sequence. This analysis, adopted from widespread usage in the bioinformatics community, can be applied to master tournament data. This allows us to construct behavior-based category trees, obtaining a hierarchical classification of phenotypes that are suspected to interleave during cyclic evolution. We use the cluster data to establish the existence of switching-genes that control opponent specialization, suggesting the retention of dormant genetic adaptations, that is, genetic memory. Our overarching goal is to reiterate how computer simulations may have importance to the broader understanding of evolutionary dynamics in general. We emphasize a further shift from a component-driven to an interaction-driven perspective in understanding coevolving systems. As yet, it is unclear how the sudden development of switching-genes relates to the gradual emergence of genetic adaptability. Likely, context genes gradually provide the appropriate genetic environment wherein the switching-gene effect can be exploited.


Subject(s)
Biological Evolution , Computer Simulation , Adaptation, Biological/genetics , Computational Biology , Phenotype
20.
Cogn Process ; 16 Suppl 1: 393-7, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26232191

ABSTRACT

The objects present in our environment evoke multiple conflicting actions at every moment. Thus, a mechanism that resolves this conflict is needed in order to avoid the production of chaotic, ineffective behaviours. A plausible candidate for such a role is selective attention, which is capable of inhibiting the neural representations of the objects that are irrelevant in the ongoing context and, as a consequence, the actions they afford. In this paper, we investigated whether a selective attention mechanism emerges spontaneously during the learning of context-dependent behaviour, in contrast to most neurocomputational models of selective attention and action selection, which imply the presence of architectural constraints. To this aim, we trained a deep neural network to learn context-dependent visual-action associations. Our main result was the spontaneous emergence of an inhibitory mechanism aimed at resolving conflicts between multiple afforded actions by directly suppressing the irrelevant visual stimuli eliciting the incorrect actions for the current context. This suggests that such an inhibitory mechanism emerged as a result of the incorporation of context-independent probabilistic regularities occurring between stimuli and afforded actions.


Subject(s)
Attention/physiology , Models, Neurological , Neural Networks, Computer , Spatial Learning/physiology , Computer Simulation , Humans , Physical Stimulation , Reaction Time/physiology