Results 1 - 10 of 10
2.
Phys Life Rev; 28: 1-21, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30072239

ABSTRACT

Human communication is a traditional topic of research in many disciplines such as psychology, linguistics and philosophy, all of which have mainly focused on language, gestures and deictics. However, these do not constitute the sole channels of communication, especially during online social interaction, where an additional critical role may be played by sensorimotor communication (SMC). SMC refers here to (often subtle) communicative signals embedded within pragmatic actions: for example, a soccer player carving his body movements in ways that inform a partner about his intention, or to feint an adversary; or the many ways we offer a glass of wine, rudely or politely. SMC is a natural form of communication that does not require any prior convention or any specific code. It amounts to the continuous and flexible exchange of bodily signals, with or without awareness, to enhance coordination success; and it is versatile, as sensorimotor signals can be embedded within every action. SMC is at the center of recent interest in neuroscience, cognitive psychology, human-robot interaction and experimental semiotics; yet, we still lack a coherent and comprehensive synthesis to account for its multifaceted nature. Some fundamental questions remain open, such as which interactive scenarios promote or do not promote SMC, what aspects of social interaction can be properly called communicative and which ones entail a mere transfer of information, and how many forms of SMC exist and what we know (or still don't know) about them from an empirical viewpoint. The present work brings together all these separate strands of research within a unified, overarching, multidisciplinary framework for SMC, which combines evidence from kinematic studies of human-human interaction and computational modeling of social exchanges.


Subject(s)
Brain/physiology , Communication , Gestures , Interpersonal Relations , Models, Theoretical , Somatosensory Cortex/physiology , Biomechanical Phenomena , Humans , Language
3.
Biol Cybern; 111(2): 165-183, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28265753

ABSTRACT

Turn-taking is a preverbal skill whose mastery constitutes an important precondition for many social interactions and joint actions. However, the cognitive mechanisms supporting turn-taking abilities are still poorly understood. Here, we propose a computational analysis of turn-taking in terms of two general mechanisms supporting joint actions: action prediction (e.g., recognizing the interlocutor's message and predicting the end of turn) and signaling (e.g., modifying one's own speech to make it more predictable and discriminable). We test the hypothesis that, in a simulated conversational scenario, dyads using these two mechanisms can recognize the utterances of their co-actors faster, which in turn permits them to give and take turns more efficiently. Furthermore, we discuss how turn-taking dynamics depend on the fact that agents cannot simultaneously use their internal models for both prediction and production of actions (or messages), as these have different requirements; in other words, they cannot speak and listen at the same time with the same level of accuracy. Our results provide a computational-level characterization of turn-taking in terms of cognitive mechanisms of action prediction and signaling that are shared across various interaction and joint action domains.
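The paper's actual model is not reproduced here, but its core claim (signaling makes utterances more discriminable, so the listener can commit earlier and turns can be exchanged sooner) can be sketched with a toy evidence-accumulation simulation. The threshold, noise level, and separation values below are illustrative assumptions, not parameters from the study:

```python
import random

def recognition_time(separation, threshold=5.0, noise=1.0, seed=0, max_steps=1000):
    """Number of evidence-accumulation steps needed to identify the
    speaker's message: each sample carries log-likelihood evidence whose
    mean grows with how discriminable the candidate messages are."""
    rng = random.Random(seed)
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += rng.gauss(separation, noise)
        if evidence >= threshold:
            return step
    return max_steps

# Signaling = producing more discriminable speech (larger separation),
# which lets the listener reach the recognition threshold sooner.
plain = recognition_time(separation=0.5)
signaled = recognition_time(separation=1.5)
```

With a fixed seed the two runs see the same noise, so the signaled condition can never be recognized later than the plain one; this mirrors the hypothesized link between signaling and turn-taking efficiency.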


Subject(s)
Hearing , Speech , Humans , Interpersonal Relations , Models, Statistical
4.
Front Psychol; 8: 237, 2017.
Article in English | MEDLINE | ID: mdl-28280475

ABSTRACT

Humans excel at recognizing (or inferring) another's distal intentions, and recent experiments suggest that this may be possible using only subtle kinematic cues elicited during early phases of movement. Still, the cognitive and computational mechanisms underlying the recognition of intentional (sequential) actions are incompletely known, and it is unclear whether kinematic cues alone are sufficient for this task, or if it instead requires additional mechanisms (e.g., prior information) that may be more difficult to fully characterize in empirical studies. Here we present a computationally guided analysis of the execution and recognition of intentional actions that is rooted in theories of motor control and the coarticulation of sequential actions. In our simulations, when a performer agent coarticulates two successive actions in an action sequence (e.g., "reach-to-grasp" a bottle and "grasp-to-pour"), he automatically produces kinematic cues that an observer agent can reliably use to recognize the performer's intention early on, during the execution of the first part of the sequence. This analysis lends computational-level support to the idea that kinematic cues may be sufficiently informative for early intention recognition. Furthermore, it suggests that the social benefits of coarticulation may be a byproduct of a fundamental imperative to optimize sequential actions. Finally, we discuss possible ways a performer agent may combine automatic (coarticulation) and strategic (signaling) mechanisms to facilitate, or hinder, an observer's action recognition processes.
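The observer side of this idea can be sketched as a Bayesian comparison of two intention hypotheses from a single early kinematic cue. The wrist-rotation feature and all numeric values below are hypothetical stand-ins, not quantities taken from the paper:

```python
import math

def gaussian_pdf(x, mean, sd):
    """Density of a normal distribution; used as a likelihood model."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_pour(wrist_angle, prior_pour=0.5):
    """Observer's belief that the ongoing reach-to-grasp prepares a pour.
    Coarticulation assumption (hypothetical): grasps preparing a pour
    show a larger wrist rotation (mean 30) than grasps preparing a
    simple place (mean 10)."""
    like_pour = gaussian_pdf(wrist_angle, mean=30.0, sd=8.0)
    like_place = gaussian_pdf(wrist_angle, mean=10.0, sd=8.0)
    num = like_pour * prior_pour
    return num / (num + like_place * (1 - prior_pour))

# A cue observed during the first action already discriminates intentions.
belief_pour = posterior_pour(27.0)
belief_place = posterior_pour(10.0)
```

Because coarticulation shifts the cue distribution before the second action begins, the posterior favors the correct intention early, which is the computational point the abstract makes.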

5.
Psychol Sci; 28(3): 338-345, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28103140

ABSTRACT

Using a lifting and balancing task, we contrasted two alternative views of planning joint actions: one postulating that joint action involves distinct predictions for self and other, the other postulating that joint action involves coordinated plans between the coactors and reuse of bimanual models. We compared compensatory movements required to keep a tray balanced when 2 participants lifted glasses from each other's trays at the same time (simultaneous joint action) and when they took turns lifting (sequential joint action). Compared with sequential joint action, simultaneous joint action made it easier to keep the tray balanced. Thus, in keeping with the view that bimanual models are reused for joint action, predicting the timing of their own lifting action helped participants compensate for another person's lifting action. These results raise the possibility that simultaneous joint actions do not necessarily require distinguishing between one's own and the coactor's contributions to the action plan and may afford an agent-neutral stance.


Subject(s)
Cooperative Behavior , Motor Activity/physiology , Psychomotor Performance/physiology , Adult , Humans , Young Adult
6.
Biol Cybern; 109(4-5): 453-67, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26168854

ABSTRACT

Recent theories of mindreading explain the recognition of action, intention, and belief of other agents in terms of generative architectures that model the causal relations between observables (e.g., observed movements) and their hidden causes (e.g., action goals and beliefs). Two kinds of probabilistic generative schemes have been proposed in cognitive science and robotics that link to a "theory theory" and "simulation theory" of mindreading, respectively. The former compares perceived actions to optimal plans derived from rationality principles and conceptual theories of others' minds. The latter reuses one's own internal (inverse and forward) models for action execution to perform a look-ahead mental simulation of perceived actions. Both theories, however, leave one question unanswered: how are the generative models (including task structure and parameters) learned in the first place? We start from Dennett's "intentional stance" proposal and characterize it within generative theories of action and intention recognition. We propose that humans use an intentional stance as a learning bias that sidesteps the (hard) structure learning problem and bootstraps the acquisition of generative models for others' actions. The intentional stance corresponds to a candidate structure in the generative scheme, which encodes a simplified belief-desire folk psychology and a hierarchical intention-to-action organization of behavior. This simple structure can be used as a proxy for the "true" generative structure of others' actions and intentions and is continuously grown and refined, via state and parameter learning, during interactions. In turn, as our computational simulations show, this can help solve mindreading problems and bootstrap the acquisition of useful causal models of both one's own and others' goal-directed actions.
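The gist of the proposal (fix the goal-to-action structure in advance as a learning bias, then learn only the parameters from interaction data) can be illustrated with a minimal model. The goals, actions, and counts below are invented for illustration and do not come from the paper's simulations:

```python
from collections import defaultdict

class IntentionalStanceModel:
    """Toy sketch: the *structure* (goal -> action) is assumed up front,
    so observation only has to fill in P(action | goal); no structure
    search is needed."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, goal, action):
        """Parameter learning: count co-occurrences within the fixed structure."""
        self.counts[goal][action] += 1

    def p_action_given_goal(self, action, goal, smoothing=1.0, n_actions=2):
        total = sum(self.counts[goal].values())
        return (self.counts[goal][action] + smoothing) / (total + smoothing * n_actions)

    def infer_goal(self, action, goals):
        """Mindreading: uniform prior over goals; the fixed structure does the rest."""
        scores = {g: self.p_action_given_goal(action, g) for g in goals}
        return max(scores, key=scores.get)

model = IntentionalStanceModel()
for _ in range(8):
    model.observe("drink", "reach-for-glass")
for _ in range(8):
    model.observe("read", "reach-for-book")
guess = model.infer_goal("reach-for-glass", ["drink", "read"])
```

Even this trivial structure supports goal inference after a handful of observations, which is the bootstrapping role the abstract assigns to the intentional stance.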


Subject(s)
Intention , Learning/physiology , Models, Psychological , Recognition, Psychology , Algorithms , Computer Simulation , Humans
8.
PLoS One; 8(11): e79876, 2013.
Article in English | MEDLINE | ID: mdl-24278201

ABSTRACT

Although the importance of communication is recognized in several disciplines, it is rarely studied in the context of online social interactions and joint actions. During online joint actions, language and gesture are often insufficient, and humans typically use non-verbal, sensorimotor forms of communication to send coordination signals. For example, when playing volleyball, an athlete can exaggerate her movements to signal her intentions to her teammates (say, a pass to the right) or to feint an adversary. Similarly, a person who is transporting a table together with a co-actor can push the table in a certain direction to signal where and when he intends to place it. Other examples of "signaling" are over-articulating in noisy environments and over-emphasizing vowels in child-directed speech. In all these examples, humans intentionally modify their action kinematics to make their goals easier to disambiguate. At present, no formal theory of these forms of sensorimotor communication and signaling exists. We present one such theory, which describes signaling as a combination of a pragmatic and a communicative action and explains how it simplifies coordination in online social interactions. We cast signaling within a "joint action optimization" framework in which co-actors optimize the success of their interaction and joint goals rather than only their part of the joint action. The decision of whether and how much to signal requires solving a trade-off between the costs of modifying one's behavior and the benefits in terms of interaction success. Signaling is thus an intentional strategy that supports social interactions; it acts in concert with automatic mechanisms of resonance, prediction, and imitation, especially when the context makes actions and intentions ambiguous and difficult to read. Our theory suggests that communication dynamics should be studied within theories of coordination and interaction rather than only in terms of the maximization of information transmission.
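The cost-benefit trade-off described in this abstract can be sketched as a small utility maximization. The saturating legibility term and linear effort cost below are hypothetical stand-ins for the paper's formal treatment, chosen only to make the qualitative prediction concrete:

```python
def joint_utility(signal, ambiguity, effort_cost=0.1):
    """Joint-action utility of signaling with a given magnitude: the
    coordination benefit grows with contextual ambiguity and saturates,
    while the cost of deviating from the purely pragmatic movement
    grows linearly."""
    legibility = ambiguity * signal / (1.0 + signal)  # saturating benefit
    return legibility - effort_cost * signal

def best_signal(ambiguity, candidates=(0.0, 0.5, 1.0, 2.0, 4.0)):
    """Pick the signaling magnitude that maximizes joint utility."""
    return max(candidates, key=lambda s: joint_utility(s, ambiguity))

low = best_signal(ambiguity=0.05)  # clear context: signaling is not worth it
high = best_signal(ambiguity=1.0)  # ambiguous context: signal more
```

The qualitative behavior matches the theory's prediction: signaling is selected only when the context makes actions and intentions hard to read, because only then does the benefit outweigh the cost.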


Subject(s)
Communication , Internet , Interpersonal Relations , Models, Theoretical , Motor Activity , Somatosensory Cortex/physiology , Biomechanical Phenomena , Humans
9.
Behav Brain Sci; 36(4): 371-2, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23790004

ABSTRACT

Pickering & Garrod (P&G) explain dialogue dynamics in terms of forward modeling and prediction-by-simulation mechanisms. Their theory dissolves a strict segregation between production and comprehension processes, and it links dialogue to action-based theories of joint action. We propose that the theory can also incorporate intentional strategies that increase communicative success: for example, signaling strategies that help interlocutors remain predictable and form common ground.


Subject(s)
Comprehension/physiology , Models, Theoretical , Speech Perception/physiology , Speech/physiology , Humans
10.
Exp Brain Res; 211(3-4): 613-30, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21559745

ABSTRACT

Studies on how "the social mind" works reveal that cognitive agents engaged in joint actions actively estimate and influence one another's cognitive variables and form shared representations with them. (How) do shared representations enhance coordination? In this paper, we provide a probabilistic model of joint action that emphasizes how shared representations help solve interaction problems. We focus on two aspects of the model. First, we discuss how shared representations support coordination at the level of cognitive variables (beliefs, intentions, and actions) and determine a coherent unfolding of action execution and predictive processes in the brains of two agents. Second, we discuss the importance of signaling actions as part of a strategy for sharing representations and for actively guiding another's actions toward the achievement of a joint goal. Furthermore, we present data from a human-computer experiment (the Tower Game) in which two agents (human and computer) have to jointly build a tower made of colored blocks, but only the human knows the constellation of the tower to be built (e.g., red-blue-red-blue-…). We report evidence that humans use signaling strategies that take another's uncertainty into consideration, and that in turn our model is able to use humans' actions as cues to "align" its representations and to select complementary actions.
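The "alignment" idea in a Tower Game-like setting can be sketched as a Bayesian update over candidate constellations, followed by a complementary action choice. The two candidate patterns and the error rate below are assumptions for illustration; none of these details come from the actual experiment:

```python
def update_belief(belief, observed_color, patterns, step, eps=0.05):
    """Bayesian update over candidate tower constellations after seeing
    which block the human placed at position `step`; `eps` allows for
    occasional human errors."""
    new = {}
    for name, pattern in patterns.items():
        like = 1.0 - eps if pattern[step] == observed_color else eps
        new[name] = belief[name] * like
    z = sum(new.values())
    return {name: p / z for name, p in new.items()}

patterns = {"A": ["red", "blue", "red", "blue"],
            "B": ["blue", "red", "blue", "red"]}
belief = {"A": 0.5, "B": 0.5}

# The human's first placement acts as a cue that aligns the computer's
# representation of the hidden constellation.
belief = update_belief(belief, "red", patterns, step=0)
likely = max(belief, key=belief.get)
next_block = patterns[likely][1]  # complementary action: place the next block
```

A single observed placement already concentrates the belief on one constellation, letting the artificial co-actor select a complementary contribution rather than a redundant one.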


Subject(s)
Cooperative Behavior , Interpersonal Relations , Problem Solving , Cognition , Humans , Models, Psychological , User-Computer Interface