Results 1 - 5 of 5
1.
Proc Biol Sci ; 291(2025): 20232493, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38889792

ABSTRACT

Direct reciprocity is a mechanism for the evolution of cooperation in repeated social interactions. According to the literature, individuals naturally learn to adopt conditionally cooperative strategies if they have multiple encounters with their partner. Corresponding models have greatly facilitated our understanding of cooperation, yet they often make strong assumptions about how individuals remember and process payoff information. For example, when strategies are updated through social learning, it is commonly assumed that individuals compare their average payoffs. This would require them to compute (or remember) their payoffs against everyone else in the population. To understand how more realistic constraints influence direct reciprocity, we consider the evolution of conditional behaviours when individuals learn based on more recent experiences. Even in the most extreme case, in which they take into account only their very last interaction, we find that cooperation can still evolve. However, such individuals adopt less generous strategies, and they cooperate less often than in the classical setup with average payoffs. Interestingly, once individuals remember the payoffs of two or three recent interactions, cooperation rates quickly approach the classical limit. These findings contribute to a literature that explores what kinds of cognitive capabilities are required for reciprocal cooperation. While our results suggest that some rudimentary form of payoff memory is necessary, it suffices to remember a few interactions.


Subject(s)
Biological Evolution , Cooperative Behavior , Memory , Animals , Humans
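The abstract above contrasts two payoff-memory assumptions: comparing full average payoffs versus remembering only the last few interactions. A minimal sketch of that distinction (the strategies and payoff values below are standard textbook choices, not the paper's model):

```python
# Standard prisoner's dilemma payoffs to the row player: T=5 > R=3 > P=1 > S=0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_repeated(strategy_a, strategy_b, rounds):
    """Play a repeated prisoner's dilemma; return player A's per-round payoffs."""
    history_a, history_b, payoffs = [], [], []
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        payoffs.append(PAYOFF[(move_a, move_b)])
        history_a.append(move_a)
        history_b.append(move_b)
    return payoffs

def tit_for_tat(own, other):
    # Cooperate first, then copy the partner's previous move.
    return "C" if not other else other[-1]

def all_d(own, other):
    return "D"

payoffs = play_repeated(tit_for_tat, all_d, 10)
average_payoff = sum(payoffs) / len(payoffs)  # classical assumption: full payoff memory
k = 1                                         # most extreme case: only the last interaction
recent_payoff = sum(payoffs[-k:]) / k
```

Against an unconditional defector, the two estimates already diverge: the average carries the one-time cost of the opening cooperation, while the last-interaction estimate does not.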
2.
Philos Trans R Soc Lond B Biol Sci ; 378(1876): 20210508, 2023 05 08.
Article in English | MEDLINE | ID: mdl-36934760

ABSTRACT

Evolutionary game theory is a truly interdisciplinary subject that goes well beyond the limits of biology. Mathematical minds get hooked on simple models of evolution and often gradually move into other parts of evolutionary biology or ecology. Social scientists realize how much they can learn from evolutionary thinking and gradually transfer insights originally generated in biology. Computer scientists can use their algorithms to explore a new field where machines learn not only from the environment, but also from each other. The breadth of the field and the focus on a few very popular issues, such as cooperation, come at a price: several insights are rediscovered in different fields under different labels, with different heroes and modelling traditions. For example, reciprocity and spatial structure are treated differently. Will we continue to develop things in parallel? Or can we converge to a single set of ideas, a single tradition and eventually a single software repository? Or will these fields continue to cross-fertilize, learning from one another and engaging in a constructive exchange? Ultimately, the popularity of evolutionary game theory rests not only on its explanatory power, but also on the intuitive character of its models. This article is part of the theme issue 'Half a century of evolutionary games: a synthesis of theory, application and future directions'.


Subject(s)
Biological Evolution , Game Theory , Ecology , Algorithms , Software , Cooperative Behavior , Models, Theoretical
3.
Sci Rep ; 10(1): 17287, 2020 10 14.
Article in English | MEDLINE | ID: mdl-33057134

ABSTRACT

Memory-one strategies are a set of Iterated Prisoner's Dilemma strategies that have been praised for their mathematical tractability and performance against single opponents. This manuscript investigates best-response memory-one strategies with a theory of mind for their opponents. The results add to the literature showing that extortionate play is not always optimal, by demonstrating that optimal play is often not extortionate. They also provide evidence that memory-one strategies suffer from their limited memory in multi-agent interactions and can be outperformed by optimised strategies with longer memory. We have developed a theory that allows us to explore the entire space of memory-one strategies. The framework presented is suitable for studying memory-one strategies not only in the Prisoner's Dilemma, but also in evolutionary processes such as the Moran process. Furthermore, we obtain results on the stability of defection in populations of memory-one strategies.


Subject(s)
Memory , Theory of Mind , Biological Evolution , Humans , Models, Theoretical , Prisoner Dilemma
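The mathematical tractability mentioned in the abstract comes from the fact that a memory-one strategy is just four cooperation probabilities, one per joint outcome of the previous round, so a match between two of them induces a four-state Markov chain. A sketch of that standard construction (standard payoffs R=3, S=0, T=5, P=1 assumed; this is not the paper's code):

```python
def transition_matrix(p, q):
    """Markov chain over joint outcomes (CC, CD, DC, DD), seen from the p-player.
    p and q are each (p_CC, p_CD, p_DC, p_DD): the probability of cooperating
    after each outcome of the previous round, from that player's own view."""
    # The opponent sees each outcome with the roles swapped: CC, DC, CD, DD.
    q_swapped = [q[0], q[2], q[1], q[3]]
    matrix = []
    for i in range(4):
        a, b = p[i], q_swapped[i]
        matrix.append([a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)])
    return matrix

def long_run_payoff(p, q, steps=2000):
    """Long-run average payoff to the p-player, via the time average of the
    chain's occupation distribution (robust to periodic chains)."""
    matrix = transition_matrix(p, q)
    v = [0.25] * 4
    acc = [0.0] * 4
    for _ in range(steps):
        v = [sum(v[i] * matrix[i][j] for i in range(4)) for j in range(4)]
        acc = [a + x for a, x in zip(acc, v)]
    dist = [a / steps for a in acc]
    R, S, T, P = 3, 0, 5, 1
    return sum(d * u for d, u in zip(dist, (R, S, T, P)))

tit_for_tat = (1, 0, 1, 0)
all_c = (1, 1, 1, 1)
all_d = (0, 0, 0, 0)
```

For example, Tit-for-Tat against an unconditional cooperator settles into mutual cooperation (payoff approaching R=3), while an unconditional defector against the same opponent locks into the temptation outcome (payoff T=5).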
4.
PLoS One ; 13(10): e0204981, 2018.
Article in English | MEDLINE | ID: mdl-30359381

ABSTRACT

We present insights and empirical results from an extensive numerical study of the evolutionary dynamics of the iterated prisoner's dilemma. Fixation probabilities for Moran processes are obtained for all pairs of 164 different strategies, including classics such as TitForTat, zero-determinant strategies, and many more sophisticated strategies. Players with long memories and sophisticated behaviours outperform many strategies that perform well in a two-player setting. Moreover, we introduce several strategies trained with evolutionary algorithms to excel at the Moran process. These strategies are excellent invaders and resistors of invasion, and in some cases naturally evolve handshaking mechanisms to resist invasion. The best invaders were those trained to maximize total payoff, while the best resistors invoke handshake mechanisms. This suggests that while maximizing individual payoff can lead to the evolution of cooperation through invasion, payoff-maximizing strategies, with their relatively weak invasion resistance, are not as evolutionarily stable as strategies employing handshake mechanisms.


Subject(s)
Decision Making , Algorithms , Cooperative Behavior , Humans
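The fixation probabilities computed in the study depend on game payoffs between each pair of strategies. As a point of reference only (not the paper's payoff-dependent computation), the classical closed form for a mutant with constant relative fitness in a Moran process is:

```python
def fixation_probability(r, n):
    """Probability that a single mutant with constant relative fitness r
    takes over a Moran population of size n.
    The neutral case r = 1 reduces to 1/n."""
    if r == 1:
        return 1 / n
    return (1 - 1 / r) / (1 - 1 / r ** n)
```

A mutant is a good invader when its fixation probability exceeds the neutral benchmark 1/n; the study's trained strategies are evaluated against exactly that kind of benchmark, but with fitness derived from iterated-game payoffs rather than a constant r.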
5.
PLoS One ; 12(12): e0188046, 2017.
Article in English | MEDLINE | ID: mdl-29228001

ABSTRACT

We present tournament results and several powerful strategies for the Iterated Prisoner's Dilemma created using reinforcement learning techniques (evolutionary and particle swarm algorithms). These strategies are trained to perform well against a corpus of over 170 distinct opponents, including many well-known and classic strategies. All the trained strategies win standard tournaments against the total collection of other opponents. The trained strategies and one particular human-designed strategy are also the top performers in noisy tournaments.


Subject(s)
Learning , Prisoner Dilemma , Algorithms , Game Theory , Humans
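The noisy tournaments described above can be sketched as a round-robin in which each intended move is flipped with a small probability. The three strategies and the parameter values below are illustrative stand-ins, not the paper's 170-opponent corpus:

```python
import random

# Standard prisoner's dilemma payoffs to the row player.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(own, other):
    return "C" if not other else other[-1]

def all_c(own, other):
    return "C"

def all_d(own, other):
    return "D"

def noisy(move, noise, rng):
    """Flip the intended move with probability `noise`."""
    return ("D" if move == "C" else "C") if rng.random() < noise else move

def match(s1, s2, rounds, noise, rng):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1 = noisy(s1(h1, h2), noise, rng)
        m2 = noisy(s2(h2, h1), noise, rng)
        score1 += PAYOFF[(m1, m2)]
        score2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return score1, score2

def tournament(strategies, rounds=200, noise=0.05, seed=0):
    """Round-robin: every pair of strategies meets once; total score per name."""
    rng = random.Random(seed)
    totals = dict.fromkeys(strategies, 0)
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sa, sb = match(strategies[a], strategies[b], rounds, noise, rng)
            totals[a] += sa
            totals[b] += sb
    return totals

scores = tournament({"TitForTat": tit_for_tat, "AllC": all_c, "AllD": all_d})
```

Noise matters because a single accidental defection can trap purely reactive strategies in retaliation cycles, which is why robustness in noisy tournaments is reported separately from the standard ones.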