1.
Behav Res Methods; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38782872

ABSTRACT

In the last decade, scientists investigating human social cognition have started bringing traditional laboratory paradigms more "into the wild" to examine how socio-cognitive mechanisms of the human brain work in real-life settings. As this implies transferring 2D observational paradigms to 3D interactive environments, there is a risk of compromising experimental control. In this context, we propose a methodological approach which uses humanoid robots as proxies of social interaction partners and embeds them in experimental protocols that adapt classical paradigms of cognitive psychology to interactive scenarios. This allows for a relatively high degree of "naturalness" of interaction and excellent experimental control at the same time. Here, we present two case studies where our methods and tools were applied and replicated across two different laboratories, namely the Italian Institute of Technology in Genova (Italy) and the Agency for Science, Technology and Research in Singapore. In the first case study, we present a replication of an interactive version of a gaze-cueing paradigm reported in Kompatsiari et al. (J Exp Psychol Gen 151(1):121-136, 2022). The second case study presents a replication of a "shared experience" paradigm reported in Marchesi et al. (Technol Mind Behav 3(3):11, 2022). As both studies replicate results across labs and different cultures, we argue that our methods allow for reliable and replicable setups, even though the protocols are complex and involve social interaction. We conclude that our approach can benefit the research field of social cognition by granting higher replicability, for example in cross-cultural comparisons of social cognition mechanisms.

2.
Cortex; 169: 249-258, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37956508

ABSTRACT

Previous work shows that in some instances artificial agents, such as robots, can elicit higher-order socio-cognitive mechanisms similar to those elicited by humans. This suggests that these socio-cognitive mechanisms, such as mentalizing processes, which originally developed for interaction with other humans, might be flexibly (re-)used, or "hijacked", for approaching this new category of interaction partners (Wykowska, 2020). In this study, we set out to identify neural markers of such flexible reuse of socio-cognitive mechanisms. We focused on fronto-parietal theta synchronization, as it has been proposed to be a substrate of cognitive flexibility in general (Fries, 2005). We analyzed EEG data from two experiments (Bossi et al., 2020; Roselli et al., submitted), in which participants completed the Intentional Stance Test (IST), a test measuring their individual likelihood of adopting the intentional stance towards robots. Our results show that participants with higher scores on the IST, indicating a higher likelihood of adopting the intentional stance towards a robot, had significantly higher theta synchronization values than participants with lower scores. These results suggest that long-range synchronization in the theta band might be a marker of socio-cognitive processes that can be flexibly applied towards non-human agents, such as robots.


Subject(s)
Cognition; Theta Rhythm; Humans; Electroencephalography
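The long-range theta synchronization analyzed above can be quantified with measures such as the phase-locking value (PLV); a minimal toy sketch of one such measure follows, with invented phase data, and without any claim that this is the study's exact metric or pipeline:

```python
# Toy illustration of inter-site phase synchronization (phase-locking
# value, PLV) between two EEG channels. Phases are invented for
# illustration; real analyses work on band-filtered, epoched data.
import cmath

def plv(phases_a, phases_b):
    """PLV between two channels' instantaneous phases (radians).

    1.0 means a perfectly consistent phase difference across samples;
    values near 0 mean no consistent phase relation.
    """
    diffs = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(diffs) / len(diffs))

# A constant frontal-parietal phase lag yields a PLV of 1.0.
frontal = [0.0, 0.5, 1.0, 1.5]
parietal = [p - 0.3 for p in frontal]
print(round(plv(frontal, parietal), 3))  # 1.0
```

The choice of synchronization measure (PLV, phase-lag index, coherence) is an assumption here; the abstract does not specify which was used.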
3.
Sci Rep; 13(1): 11689, 2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37468517

ABSTRACT

Joint attention is a pivotal mechanism underlying the human ability to interact with one another. The fundamental nature of joint attention in the context of social cognition has led researchers to develop tasks that address this mechanism and operationalize it in a laboratory setting, in the form of a gaze-cueing paradigm. In the present study, we addressed the question of whether engaging in joint attention with a robot face is culture-specific. We adapted a classical gaze-cueing paradigm such that a robot avatar cued participants' gaze after either engaging them in eye contact or not. Our critical question of interest was whether the gaze-cueing effect (GCE) is stable across different cultures, especially when cognitive resources to exert top-down control are reduced. To reduce available cognitive resources, we introduced a mathematical stress task orthogonally to the gaze-cueing protocol. Results showed a larger GCE in the Singapore sample than in the Italian sample, independent of gaze type (eye contact vs. no eye contact) or amount of experienced stress, which translates to available cognitive resources. Moreover, since after each block participants rated how engaged they felt with the robot avatar during the task, we observed that Italian participants rated the avatar as more engaging during the eye-contact blocks than during the no-eye-contact blocks, whereas Singaporean participants' engagement ratings did not differ by gaze type. We discuss the results in terms of cultural differences in robot-induced joint attention and engagement in eye contact, as well as the dissociation between implicit and explicit measures related to the processing of gaze.


Subject(s)
Interpersonal Relations; Robotics; Humans; Attention; Cues; Emotions; Fixation, Ocular
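The gaze-cueing effect (GCE) discussed above is conventionally computed as the reaction-time difference between invalidly and validly cued trials; a minimal sketch with made-up reaction times follows (the function name and data are illustrative, not taken from the study):

```python
# Minimal sketch of computing a gaze-cueing effect (GCE) from
# per-trial reaction times. Trial data here are invented; the
# original study's design and analysis are richer.

def gaze_cueing_effect(trials):
    """Mean RT on invalid trials minus mean RT on valid trials (ms).

    trials: list of (rt_ms, valid) tuples, where valid is True when
    the target appeared at the cued (gazed-at) location.
    """
    valid_rts = [rt for rt, valid in trials if valid]
    invalid_rts = [rt for rt, valid in trials if not valid]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(invalid_rts) - mean(valid_rts)

# Hypothetical trials: participants are typically faster on valid trials,
# so a positive GCE indicates facilitation by the gaze cue.
trials = [(410, True), (430, True), (420, True),
          (455, False), (465, False), (460, False)]
print(gaze_cueing_effect(trials))  # 40.0
```

A larger positive value, as reported for the Singapore sample, indicates stronger orienting in the direction of the robot's gaze.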
4.
Sci Rep; 12(1): 14924, 2022 Sep 02.
Article in English | MEDLINE | ID: mdl-36056165

ABSTRACT

How individuals interpret robots' actions is a timely question in the context of the general effort to increase robots' presence in human social environments in the decades to come. Facing robots, people may tend to explain their actions in mentalistic terms, granting them intentions. However, how default or controllable this process is remains under debate. In four experiments, we asked participants to choose between mentalistic (intentional) and mechanistic (non-intentional) descriptions of a robot's depicted actions in various scenarios. Our results show the primacy of mentalistic descriptions, which are processed faster than mechanistic ones (Experiment 1). This effect was even stronger under high vs. low cognitive load when people had to decide between the two alternatives (Experiment 2). Interestingly, while there was no effect of cognitive load at later stages of processing, arguing for controllability (Experiment 3), imposing cognitive load on participants at an early stage of observation resulted in a faster attribution of mentalistic properties to the robot (Experiment 4). We discuss these results in the context of the idea that social cognition is a default system.


Subject(s)
Mentalization; Robotics; Cognition; Humans; Social Environment; Social Perception
5.
Front Robot AI; 9: 863319, 2022.
Article in English | MEDLINE | ID: mdl-36093211

ABSTRACT

Anthropomorphism describes the tendency to ascribe human characteristics to nonhuman agents. Due to the increased interest in social robotics, anthropomorphism has become a core concept of human-robot interaction (HRI) studies. However, the wide use of this concept has resulted in inconsistent definitions. In the present study, we propose an integrative framework of anthropomorphism (IFA) encompassing three levels: cultural, individual general tendencies, and direct attributions of human-like characteristics to robots. We also acknowledge the Western bias of the state-of-the-art view of anthropomorphism and develop a cross-cultural approach. In two studies, participants from various cultures completed tasks and questionnaires assessing their animism beliefs and their individual tendencies to endow robots with mental properties and spirit, and to consider them as more or less human. We also evaluated their attributions of mental anthropomorphic characteristics to robots (i.e., cognition, emotion, intention). Our results demonstrate, in both experiments, that a three-level model (as hypothesized in the IFA) reliably explains the collected data. We found an overall influence of animism (cultural level) on the two lower levels, and an influence of the individual tendencies to mentalize, spiritualize and humanize (individual level) on the attribution of cognition, emotion and intention. In addition, in Experiment 2, the analyses show a more anthropocentric view of the mind for Western than for East-Asian participants. As such, Western perception of robots depends more on humanization, while East-Asian perception depends more on mentalization. We further discuss these results in relation to the anthropomorphism literature and argue for the use of an integrative cross-cultural model in HRI research.

6.
Front Robot AI; 8: 666586, 2021.
Article in English | MEDLINE | ID: mdl-34692776

ABSTRACT

In human-robot interactions, people tend to attribute mental states such as intentions or desires to robots in order to make sense of their behaviour. This cognitive strategy is termed the "intentional stance". Adopting the intentional stance influences how one considers, engages with, and behaves towards robots. However, people differ in their likelihood of adopting the intentional stance towards robots, so it seems crucial to assess these interindividual differences. In two studies we developed and validated the structure of a task aimed at evaluating the extent to which people adopt the intentional stance towards robot actions: the Intentional Stance task (IST). The IST probes participants' stance by requiring them to judge the plausibility of a description (mentalistic vs. mechanistic) of the behaviour of a robot depicted in a scenario composed of three photographs. Results showed a reliable psychometric structure of the IST. This paper therefore concludes with the proposal of using the IST as a proxy for assessing the degree of adoption of the intentional stance towards robots.

7.
Front Robot AI; 8: 653537, 2021.
Article in English | MEDLINE | ID: mdl-34222350

ABSTRACT

The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency of adopting the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants' pupil dilation during the completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.

8.
Sci Robot; 5(46), 2020 Sep 30.
Article in English | MEDLINE | ID: mdl-32999049

ABSTRACT

The increasing presence of robots in society necessitates a deeper understanding of what attitudes people have toward robots. People may treat robots as mechanistic artifacts or may consider them to be intentional agents. This might result in explaining robots' behavior as stemming from operations of the mind (intentional interpretation) or as a result of mechanistic design (mechanistic interpretation). Here, we examined whether individual attitudes toward robots can be differentiated on the basis of the default neural activity pattern during resting state, measured with electroencephalography (EEG). Participants observed scenarios in which a humanoid robot was depicted performing various actions embedded in daily contexts. Before they were introduced to the task, we measured their resting-state EEG activity. We found that resting-state EEG beta activity differentiated people who were later inclined toward interpreting robot behaviors as either mechanistic or intentional. This pattern is similar to the pattern of activity in the default mode network, which was previously demonstrated to have a social role. In addition, gamma activity observed when participants were making decisions about a robot's behavior indicates a relationship between theory of mind and these attitudes. Thus, we provide evidence that individual biases toward treating robots as either intentional agents or mechanistic artifacts can be detected at the neural level, even in a resting-state EEG signal.


Subject(s)
Attitude; Brain/physiology; Robotics/instrumentation; Adult; Beta Rhythm/physiology; Electroencephalography; Female; Gamma Rhythm/physiology; Humans; Male; Prejudice; Rest/physiology; Task Performance and Analysis; Young Adult
9.
Front Psychol; 10: 450, 2019.
Article in English | MEDLINE | ID: mdl-30930808

ABSTRACT

In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others' behavior with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple's Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether adopting the intentional stance can also occur with respect to artificial agents. We propose a new tool to explore whether people adopt the intentional stance toward an artificial agent (a humanoid robot). The tool consists of a questionnaire that probes participants' stance by requiring them to rate the likelihood of an explanation (mentalistic vs. mechanistic) of a behavior of the iCub robot depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased toward the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance toward artificial agents, at least in some contexts.
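A questionnaire of the kind described above yields, per participant, a set of mentalistic-vs.-mechanistic judgments; one simple way to summarize them is the proportion of mentalistic choices. The sketch below is illustrative only and is not the scoring scheme of the actual questionnaire:

```python
# Illustrative scoring for a forced-choice stance questionnaire.
# Each item is coded 1 for a mentalistic choice and 0 for a
# mechanistic one; the score is the mentalistic proportion.
# This coding is an assumption, not the published instrument's scoring.

def stance_score(choices):
    """choices: list of 'mentalistic' / 'mechanistic' strings.

    Returns a value in [0, 1]; values below 0.5 indicate a bias
    toward mechanistic explanations.
    """
    mentalistic = sum(1 for c in choices if c == "mentalistic")
    return mentalistic / len(choices)

# Hypothetical responses biased toward the mechanistic stance,
# as the abstract reports for the first study.
choices = ["mechanistic", "mentalistic", "mechanistic", "mechanistic"]
print(stance_score(choices))  # 0.25
```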

10.
Front Psychol; 9: 70, 2018.
Article in English | MEDLINE | ID: mdl-29459842

ABSTRACT

Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by the participant in 80% of the trials (joint condition), and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature on human-human social cognition, we took the speed with which participants looked back at the robot as a measure of facilitated reorienting and robot preference, and found these return-saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze to the one with a disjoint attention behavior, rating it as more human-like and more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.
