Results 1 - 3 of 3
1.
Science ; 378(6623): 990-996, 2022 12 02.
Article in English | MEDLINE | ID: mdl-36454847

ABSTRACT

We introduce DeepNash, an autonomous agent that plays the imperfect information game Stratego at a human expert level. Stratego is one of the few iconic board games that artificial intelligence (AI) has not yet mastered. It is a game characterized by a twin challenge: It requires long-term strategic thinking as in chess, but it also requires dealing with imperfect information as in poker. The technique underpinning DeepNash uses a game-theoretic, model-free deep reinforcement learning method, without search, that learns to master Stratego through self-play from scratch. DeepNash beat existing state-of-the-art AI methods in Stratego and achieved a year-to-date (2022) and all-time top-three ranking on the Gravon games platform, competing with human expert players.
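As a rough illustration of the kind of search-free, self-play learning the abstract describes, the sketch below runs a regularised policy-gradient self-play loop on rock-paper-scissors. It is not DeepNash or its R-NaD algorithm (which trains deep networks on Stratego with a KL-based reward transformation); the game, the regularisation scheme, and all constants here are illustrative assumptions.

```python
# Minimal, hedged sketch of self-play learning in a zero-sum game with
# regularisation toward a reference policy. NOT DeepNash itself; only the
# general idea of model-free, search-free self-play is illustrated, on
# rock-paper-scissors. All names and constants are illustrative.

import numpy as np

# Payoff matrix for player 1 in rock-paper-scissors (zero-sum).
PAYOFF = np.array([[ 0.0, -1.0,  1.0],
                   [ 1.0,  0.0, -1.0],
                   [-1.0,  1.0,  0.0]])

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.zeros(3)              # shared policy: the game is symmetric
reference = softmax(np.zeros(3))  # reference policy for regularisation
lr, reg, steps = 0.05, 0.2, 20000

for t in range(steps):
    policy = softmax(logits)
    # One self-play round: both players act from the current policy.
    a1 = rng.choice(3, p=policy)
    a2 = rng.choice(3, p=policy)
    # Regularised reward: payoff plus a pull toward the reference policy
    # (a crude stand-in for the reward transformation used in R-NaD).
    r1 = PAYOFF[a1, a2] - reg * np.log(policy[a1] / reference[a1])
    # REINFORCE-style update for player 1's action (model-free, no search).
    grad = -policy
    grad[a1] += 1.0
    logits += lr * r1 * grad
    # Periodically refresh the reference policy with the current one.
    if (t + 1) % 2000 == 0:
        reference = softmax(logits)

# Expected to stay close to the uniform Nash equilibrium of the game.
print("learned policy:", np.round(softmax(logits), 3))
```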


Subject(s)
Artificial Intelligence , Reinforcement, Psychology , Video Games , Humans
2.
J Theor Biol ; 242(4): 818-31, 2006 Oct 21.
Article in English | MEDLINE | ID: mdl-16843499

ABSTRACT

In this paper we introduce a mathematical model of naming games. Naming games have been widely used in research on the origins and evolution of language, but despite the many interesting empirical results these studies have produced, most of this research lacks a formal elucidating theory. We show how a population of agents can reach linguistic consensus, i.e., learn to use one common language to communicate with one another. Our approach differs from existing formal work in two important ways. First, we relax the overly strong assumption that an agent samples infinitely often during each time interval, an assumption usually made to guarantee convergence of an empirical learning process to a deterministic dynamical system. Second, we prove that under these more realistic conditions our model converges to a common language for the entire population of agents. Finally, the model is validated experimentally.
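For readers unfamiliar with the dynamic being modelled, the sketch below simulates the standard minimal naming game: agents pair up at random, exchange words for a single object, and collapse their inventories to the shared word on success. It follows the widely used minimal protocol rather than the paper's specific sampling model, and the population size and random seed are illustrative assumptions.

```python
# Hedged toy simulation of the minimal naming game: under these simple rules
# the population converges on one shared word (linguistic consensus).

import random

random.seed(1)

N_AGENTS = 100
inventories = [set() for _ in range(N_AGENTS)]   # each agent's known words
next_word = 0                                    # counter for inventing words

def consensus_reached():
    # Consensus: every agent knows exactly one word, and it is the same word.
    first = inventories[0]
    return all(inv == first and len(inv) == 1 for inv in inventories)

interactions = 0
while not consensus_reached():
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not inventories[speaker]:
        # A speaker with an empty inventory invents a brand-new word.
        inventories[speaker].add(next_word)
        next_word += 1
    word = random.choice(tuple(inventories[speaker]))
    if word in inventories[hearer]:
        # Success: both agents drop every other word (alignment).
        inventories[speaker] = {word}
        inventories[hearer] = {word}
    else:
        # Failure: the hearer memorises the new word.
        inventories[hearer].add(word)
    interactions += 1

print(f"consensus after {interactions} interactions "
      f"on word {next(iter(inventories[0]))}")
```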


Subject(s)
Consensus , Linguistics , Models, Psychological , Biological Evolution , Game Theory , Humans , Language , Learning , Social Environment
3.
J Theor Biol ; 235(4): 566-82, 2005 Aug 21.
Article in English | MEDLINE | ID: mdl-15935174

ABSTRACT

Evolutionary game dynamics have been proposed as a mathematical framework for the cultural evolution of language and, more specifically, the evolution of vocabulary. This article discusses a model whose underlying principles are mutually exclusive with those of some previously suggested models. The model describes how individuals in a population culturally acquire a vocabulary by actively participating in the acquisition process rather than passively observing, and by communicating through peer-to-peer interactions rather than vertical parent-offspring transmission. Concretely, a notion of social/cultural learning called the naming game is first abstracted using learning theory. This abstraction defines the cultural transmission mechanism required for an evolutionary process. Second, the derived transmission system is expressed in terms of the well-known selection-mutation model defined in the context of evolutionary dynamics. In this way, the analogy between social learning and evolution at the level of meaning-word associations is made explicit. Although only horizontal and oblique transmission structures are considered, extensions to vertical structures over different genetic generations can easily be incorporated. We provide a number of simplified experiments to clarify our reasoning.
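To make the selection-mutation connection concrete, the sketch below iterates the standard replicator-mutator dynamic over a handful of candidate meaning-word associations ("languages"), with a communicative-payoff matrix F and a transmission-error matrix Q. The payoff values, mutation rate, and initial frequencies are assumptions for demonstration, not taken from the paper.

```python
# Hedged sketch of the selection-mutation (replicator-mutator) dynamic onto
# which the naming game is mapped. Each "language" is one candidate
# meaning-word association; payoff is communicative success; Q models
# transmission errors. All numerical values are illustrative.

import numpy as np

n = 4                                   # number of candidate languages
# Communicative payoff: same language pays 1, different languages pay 0.2.
F = np.full((n, n), 0.2) + 0.8 * np.eye(n)
# Transmission matrix Q[i, j]: probability a learner exposed to language i
# ends up with language j (small uniform error rate u).
u = 0.02
Q = (1 - u) * np.eye(n) + u / (n - 1) * (np.ones((n, n)) - np.eye(n))

x = np.array([0.4, 0.3, 0.2, 0.1])      # initial language frequencies
dt, steps = 0.05, 4000

for _ in range(steps):
    fitness = F @ x                     # average payoff of each language
    phi = x @ fitness                   # mean population fitness
    # Replicator-mutator update: selection by payoff, then mutation via Q.
    dx = (x * fitness) @ Q - phi * x
    x = x + dt * dx
    x = np.clip(x, 0.0, None)
    x /= x.sum()                        # keep x on the simplex numerically

# The initially most common language should take over, up to a small
# mutation load.
print("language frequencies:", np.round(x, 3))
```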


Subject(s)
Biological Evolution , Game Theory , Models, Psychological , Vocabulary , Humans , Learning , Social Environment