1.
Psychophysiology; e14587, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600626

ABSTRACT

Cognitive processes must deal with contradictory demands in social contexts. On the one hand, social interactions imply a demand for cooperation, which requires processing social signals; on the other hand, demands for selective attention require ignoring irrelevant signals to avoid overload. We created a task in which a humanoid robot displayed irrelevant social signals, imposing conflicting demands on selective attention. Participants interacted with the robot either as a team member (high social demand; n = 23) or alongside it as a passive co-actor (low social demand; n = 19). We observed that theta oscillations indexed conflict processing of social signals. Subsequently, alpha oscillations were sensitive to both the conflicting social signals and the mode of interaction. These findings suggest that brains have distinct mechanisms for dealing with the complexity of social interaction, and that these mechanisms are activated differently depending on the mode of interaction. Thus, how we process environmental stimuli depends on the beliefs we hold regarding our social context.

2.
Sci Rep ; 13(1): 16708, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37794045

ABSTRACT

When interacting with groups of robots, we tend to perceive them as a homogeneous group in which all members have similar capabilities. This overgeneralization of capabilities is potentially due to a lack of perceptual experience with robots or a lack of motivation to see them as individuals (i.e., individuation). This can undermine trust and performance in human-robot teams. One way to overcome this issue is to design robots that can be individuated, such that each team member can be assigned tasks based on its actual skills. In two experiments, we examine whether humans can effectively individuate robots: Experiment 1 (n = 225) investigates how individuation performance for robot stimuli compares to that for human stimuli belonging to either a social ingroup or outgroup. Experiment 2 (n = 177) examines to what extent robots' physical human-likeness (high versus low) affects individuation performance. Results show that although humans are able to individuate robots, they seem to individuate them to a lesser extent than both ingroup and outgroup human stimuli (Experiment 1). Furthermore, robots that are physically more humanlike are initially individuated better than robots that are physically less humanlike; this effect, however, diminishes over the course of the experiment, suggesting that the individuation of robots can be learned quite quickly (Experiment 2). Whether differences in individuation performance with robot versus human stimuli are primarily due to reduced perceptual experience with robot stimuli or to motivational aspects (i.e., robots as a potential social outgroup) should be examined in future studies.


Subject(s)
Facial Recognition , Robotics , Humans , Learning , Motivation , Trust
3.
Sci Rep ; 13(1): 11689, 2023 07 19.
Article in English | MEDLINE | ID: mdl-37468517

ABSTRACT

Joint attention is a pivotal mechanism underlying the human ability to interact with one another. The fundamental nature of joint attention in social cognition has led researchers to develop tasks that address this mechanism and operationalize it in a laboratory setting, in the form of a gaze-cueing paradigm. In the present study, we addressed the question of whether engaging in joint attention with a robot face is culture-specific. We adapted a classical gaze-cueing paradigm such that a robot avatar cued participants' gaze after either engaging participants in eye contact or not. Our critical question of interest was whether the gaze-cueing effect (GCE) is stable across cultures, especially when cognitive resources available for top-down control are reduced. To reduce these resources, we introduced a mathematical stress task orthogonally to the gaze-cueing protocol. Results showed a larger GCE in the Singapore sample relative to the Italian sample, independent of gaze type (eye contact vs. no eye contact) or the amount of experienced stress, which translates to available cognitive resources. Moreover, after each block participants rated how engaged they felt with the robot avatar during the task: Italian participants rated the avatar as more engaging during eye contact blocks relative to no eye contact blocks, while Singaporean participants showed no difference in engagement as a function of gaze type. We discuss the results in terms of cultural differences in robot-induced joint attention and engagement in eye contact, as well as the dissociation between implicit and explicit measures related to the processing of gaze.


Subject(s)
Interpersonal Relations , Robotics , Humans , Attention , Cues , Emotions , Fixation, Ocular
4.
J Cogn Neurosci ; 35(10): 1670-1680, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37432740

ABSTRACT

Communicative gaze (e.g., mutual or averted) has been shown to affect attentional orienting. However, no study to date has clearly separated the neural basis of the purely social component that modulates attentional orienting in response to communicative gaze from other processes that might be a combination of attentional and social effects. We used TMS to isolate the purely social effects of communicative gaze on attentional orienting. Participants completed a gaze-cueing task with a humanoid robot that engaged in either mutual or averted gaze before shifting its gaze. Before the task, participants received sham stimulation (baseline), stimulation of the right TPJ (rTPJ), or stimulation of the dorsomedial prefrontal cortex (dmPFC). As expected, communicative gaze affected attentional orienting in the baseline condition. This effect was not evident under rTPJ stimulation; interestingly, rTPJ stimulation also canceled out attentional orienting altogether. dmPFC stimulation, on the other hand, eliminated the socially driven difference in attentional orienting between the two gaze conditions while preserving the basic general attentional orienting effect. Thus, our results allowed us to separate the purely social effect of communicative gaze on attentional orienting from other processes that combine social and generic attentional components.


Subject(s)
Attention , Prefrontal Cortex , Humans , Reaction Time/physiology , Attention/physiology , Communication , Cues , Fixation, Ocular
5.
Acta Psychol (Amst) ; 228: 103660, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35779453

ABSTRACT

When we read fiction, we encounter characters that interact in the story; we encode that information and comprehend the story. Prior studies suggest that this comprehension process is facilitated by taking the perspective of characters during reading. Thus, two questions of interest are whether people take the perspective of characters that are not perceived as capable of experiencing perspectives (e.g., robots), and whether current models of language comprehension can explain differences between human and nonhuman protagonists (or the lack thereof) during reading. The study aims to (1) compare the situation model (i.e., a model that factors in a protagonist's perspective) and the RI-VAL model (which relies more on comparisons of newly acquired information with information stored in long-term memory) and (2) investigate whether differences in the accessibility of information depend on adopting the intentional stance toward a robot. To address these aims, we designed a preregistered experiment in which participants read stories about one of three protagonists (an intentional robot, a mechanistic robot, or a human) and answered questions about objects that were either occluded or not occluded from the protagonist's view. Based on the situation model, we expected faster responses to items that were not occluded compared with those that were occluded (i.e., the occlusion effect). Based on the RI-VAL model, however, we expected overall differences between the protagonists to arise due to inconsistency with general world knowledge. The preregistered analysis showed no differences between the protagonists and no occlusion effect. A post hoc analysis, however, showed that the occlusion effect emerged only for the intentional robot, not for the human or the mechanistic robot.
Results also showed that, depending on the age of the readers, either the RI-VAL model or the situation model explains the results: older participants "simulated" the situation about which they read (situation model), while younger adults compared new information with information stored in long-term memory (RI-VAL model). This suggests that comparison with information in long-term memory is cognitively more costly, and that older adults therefore used the less demanding strategy of simulation.


Subject(s)
Reading , Robotics , Aged , Comprehension/physiology , Humans
6.
Front Neuroergon ; 3: 838136, 2022.
Article in English | MEDLINE | ID: mdl-38235447

ABSTRACT

As technological advances progress, we increasingly find ourselves in situations where we need to collaborate with artificial agents (e.g., robots, autonomous machines, and virtual agents). For example, autonomous machines will be part of search-and-rescue missions and space exploration, and will serve as decision aids during monitoring tasks (e.g., baggage screening at the airport). Efficient communication in these scenarios is crucial for fluent interaction. While studies have examined the positive and engaging effect of social signals (i.e., gaze communication) on human-robot interaction, little is known about the effects of conflicting robot signals on the human actor's cognitive load. Moreover, it is unclear from a social neuroergonomics perspective how different brain regions synchronize or communicate with one another to deal with the cognitive load induced by conflicting signals in social situations with robots. The present study asked whether neural oscillations that correlate with conflict processing are observed between brain regions when participants view conflicting robot signals. Participants classified different objects based on their color after a robot (i.e., iCub), presented on a screen, simulated handing over the object to them. The robot proceeded to cue participants (with a head shift) to the correct or incorrect target location. Since prior work has shown that unexpected cues can interfere with oculomotor planning and induce conflict, we expected that conflicting robot social signals would interfere with the execution of actions. Indeed, we found that conflicting social signals elicited neural correlates of cognitive conflict as measured by midfrontal theta oscillations. More importantly, we found higher coherence values between mid-frontal and posterior occipital electrode locations in the theta-frequency band for incongruent vs.
congruent cues, which suggests that theta-band synchronization between these two regions allows for communication between cognitive control systems and gaze-related attentional mechanisms. We also found correlations between coherence values and behavioral performance (reaction times), which were moderated by the congruency of the robot signal. In sum, the influence of irrelevant social signals during goal-oriented tasks can be indexed by behavioral, neural oscillation, and brain connectivity patterns. These data provide insights into a new measure of cognitive load, which can also be used to predict human interaction with autonomous machines.

7.
J Cogn Neurosci ; 34(1): 108-126, 2021 12 06.
Article in English | MEDLINE | ID: mdl-34705044

ABSTRACT

Understanding others' nonverbal behavior is essential for social interaction, as it allows us, among other things, to infer mental states. Although gaze communication, a well-established nonverbal social behavior, has shown its importance in inferring others' mental states, not much is known about the effects of irrelevant gaze signals on cognitive conflict markers in collaborative settings. In the present study, participants completed a categorization task in which they categorized objects based on their color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot "moved" the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift. The gaze shifts were either congruent or incongruent with the object's color. We expected that incongruent head cues would induce more errors (Study 1), be associated with more curvature in eye-tracking trajectories (Study 2), and induce larger amplitudes in electrophysiological markers of cognitive conflict (Study 3). Results of the three studies show more oculomotor interference as measured in error rates (Study 1), larger curvatures in eye-tracking trajectories (Study 2), and higher amplitudes of the N2 ERP component of the EEG signal as well as higher event-related spectral perturbation amplitudes (Study 3) for incongruent trials compared with congruent trials. Our findings reveal that behavioral, ocular, and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.


Subject(s)
Robotics , Cognition , Cues , Electroencephalography , Humans , Reaction Time
8.
J Cogn ; 4(1): 28, 2021 May 31.
Article in English | MEDLINE | ID: mdl-34131624

ABSTRACT

Social agents rely on the ability to use feedback to learn and modify their behavior. The extent to which this happens in social contexts depends on motivational, cognitive, and/or affective parameters. For instance, feedback-associated learning occurs at different rates when the outcome of an action (e.g., winning or losing in a gambling task) affects oneself ("Self") versus another human ("Other"). Here, we examine whether similar context effects on feedback-associated learning can also be observed when the "other" is a social robot (here: Cozmo). We additionally examine whether a "hybrid" version of the gambling paradigm, in which participants first engage in a free, dynamic interaction with a robot and then move to a controlled screen-based experiment, can be used to examine social cognition in human-robot interaction. This hybrid method is an alternative to current designs in which researchers examine the effect of the interaction on social cognition during the interaction with the robot itself. To that end, three groups of participants (total n = 60) interacted with Cozmo over different time periods (no interaction vs. a single 20-minute interaction in the lab vs. daily 20-minute interactions over five consecutive days at home) before performing the gambling task in the lab. The results indicate that prior interactions impact the degree to which participants benefit from feedback during the gambling task, with overall worse learning immediately after short-term interactions with the robot and better learning in the "Self" versus "Other" condition after repeated interactions with the robot. These results indicate that "hybrid" paradigms are a suitable option for investigating social cognition in human-robot interaction when a fully dynamic implementation (i.e., both interaction and measurement dynamic) is not feasible.

10.
Cogn Affect Behav Neurosci ; 21(4): 763-775, 2021 08.
Article in English | MEDLINE | ID: mdl-33821460

ABSTRACT

Social species rely on the ability to modulate feedback-monitoring in social contexts to adjust their actions and obtain desired outcomes. When positive outcomes are awarded during a gambling task, feedback-monitoring is attenuated when strangers are rewarded, as less value is assigned to the awarded outcome. This difference in feedback-monitoring can be indexed by an event-related potential (ERP) component known as the reward positivity (RewP), whose amplitude is enhanced when receiving positive feedback. While the degree of familiarity influences the RewP, little is known about how the RewP and reinforcement learning are affected when gambling on behalf of familiar versus nonfamiliar agents, such as robots. This question becomes increasingly important given that robots may be used as teachers and/or social companions in the near future, with whom children and adults will interact for short or long periods of time. In the present study, we examined whether feedback-monitoring when gambling on behalf of oneself versus a robot is affected by whether participants familiarized themselves with the robot before the task. We expected enhanced RewP amplitude for self versus other among those who did not familiarize themselves with the robot, and that these self-other differences in the RewP would be attenuated among those who did. Instead, we observed that the RewP was larger when familiarization with the robot had occurred, which corresponded to overall worse learning outcomes. We additionally observed an enhanced P3 effect in the high-familiarity condition, which suggests increased motivation to reward. These findings suggest that familiarization with robots may produce a positive motivational effect, which enhances RewP amplitudes but interferes with learning.


Subject(s)
Robotics , Adult , Child , Electroencephalography , Evoked Potentials , Feedback , Humans , Reward , Social Interaction
11.
Front Psychol ; 11: 2234, 2020.
Article in English | MEDLINE | ID: mdl-33013584

ABSTRACT

Understanding and reacting to others' nonverbal social signals, such as changes in gaze direction (i.e., gaze cues), is essential for social interaction, as it underlies processes such as joint attention and mentalizing. Although attentional orienting in response to gaze cues has a strong reflexive component, accumulating evidence shows that it can be top-down controlled by contextual information regarding the signals' social relevance. For example, when a gazer is believed to be an entity "with a mind" (i.e., mind perception), people exert more top-down control on attentional orienting. Although increasing an agent's physical human-likeness can enhance mind perception, it could have negative consequences for top-down control of social attention when a gazer's physical appearance is categorically ambiguous (i.e., difficult to categorize as human or nonhuman), as resolving this ambiguity would consume cognitive resources that could otherwise be used to top-down control attentional orienting. To examine this question, we used mouse-tracking to explore whether categorically ambiguous agents are associated with increased processing costs (Experiment 1), whether categorically ambiguous stimuli negatively impact top-down control of social attention (Experiment 2), and whether resolving the conflict related to the agent's categorical ambiguity (through exposure) restores top-down control of attentional orienting (Experiment 3). The findings suggest that categorically ambiguous stimuli are associated with cognitive conflict, which negatively impacts the ability to exert top-down control on attentional orienting in a counterpredictive gaze-cueing paradigm; this negative impact, however, is attenuated when participants are pre-exposed to the stimuli before the gaze-cueing task.
Taken together, these findings suggest that manipulating physical human-likeness is a powerful way to affect mind perception in human-robot interaction (HRI), but that it has diminishing returns for social attention when the agent is categorically ambiguous, owing to the drain on cognitive resources and the resulting impairment of top-down control.

12.
Front Robot AI ; 7: 565825, 2020.
Article in English | MEDLINE | ID: mdl-33501328

ABSTRACT

Gaze behavior is an important social signal between humans, as it communicates locations of interest. People typically orient their attention to where others look, as this informs them about others' intentions and future actions. Studies have shown that humans can engage in similar gaze behavior with robots, but presumably more so when they adopt the intentional stance toward them (i.e., believing robot behaviors are intentional). In laboratory settings, the phenomenon of attending toward the direction of others' gaze has been examined with the gaze-cueing paradigm. While the gaze-cueing paradigm has been successful in investigating the relationship between adopting the intentional stance toward robots and attentional orienting to gaze cues, it is unclear whether the repetitiveness of the paradigm itself influences adopting the intentional stance. Here, we examined whether the duration of exposure to repetitive robot gaze behavior in a gaze-cueing task negatively impacts the subjective attribution of intentionality. Participants performed a short, medium, or long face-to-face gaze-cueing task with an embodied robot while subjective ratings were collected before and after the interaction. Results show that participants in the long exposure condition had the smallest change, if any, in their intention attribution scores, while those in the short exposure condition showed a positive change, indicating that participants attributed more intention to the robot after short interactions. The results also show that attentional orienting to robot gaze cues was positively related to how much intention was attributed to the robot, but this relationship became more negative as the length of exposure increased. In contrast to the subjective ratings, gaze-cueing effects (GCEs) increased as a function of the duration of exposure to repetitive behavior.
The data suggest a tradeoff between the number of trials needed to observe various mechanisms of social cognition, such as GCEs, and the likelihood of adopting the intentional stance toward a robot.

13.
Philos Trans R Soc Lond B Biol Sci ; 374(1771): 20180430, 2019 04 29.
Article in English | MEDLINE | ID: mdl-30852996

ABSTRACT

In social interactions, we rely on non-verbal cues like gaze direction to understand the behaviour of others. How we react to these cues is determined by the degree to which we believe that they originate from an entity with a mind capable of having internal states and showing intentional behaviour, a process called mind perception. While prior work has established a set of neural regions linked to mind perception, research has just begun to examine how mind perception affects social-cognitive mechanisms like gaze processing on a neuronal level. In the current experiment, participants performed a social attention task (i.e. attentional orienting to gaze cues) with either a human or a robot agent (i.e. manipulation of mind perception) while transcranial direct current stimulation (tDCS) was applied to prefrontal and temporo-parietal brain areas. The results show that temporo-parietal stimulation did not modulate mechanisms of social attention, neither in response to the human nor in response to the robot agent, whereas prefrontal stimulation enhanced attentional orienting in response to human gaze cues and attenuated attentional orienting in response to robot gaze cues. The findings suggest that mind perception modulates low-level mechanisms of social cognition via prefrontal structures, and that a certain degree of mind perception is essential in order for prefrontal stimulation to affect mechanisms of social attention. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.


Subject(s)
Attention/physiology , Fixation, Ocular/physiology , Interpersonal Relations , Prefrontal Cortex/physiology , Robotics , Transcranial Direct Current Stimulation , Adult , Cues , Female , Humans , Male , Orientation/physiology , Virginia , Young Adult
14.
Front Hum Neurosci ; 12: 309, 2018.
Article in English | MEDLINE | ID: mdl-30147648

ABSTRACT

With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience a mismatch in expectations. Consequently, they will often place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge in research on the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much one trusts a certain agent can change the perception of that agent. We hypothesized that electroencephalography (EEG) would be able to provide such a universal index of trust without the need for self-report. In this work, EEG was recorded from 21 participants (mean age = 22.1; 13 female) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's degree of credibility and reliability was manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitor these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high- and low-reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms.

15.
Cogn Affect Behav Neurosci ; 18(5): 837-856, 2018 10.
Article in English | MEDLINE | ID: mdl-29992485

ABSTRACT

In social interactions, we rely on nonverbal cues like gaze direction to understand the behavior of others. How we react to these cues is affected by whether they are believed to originate from an entity with a mind, capable of having internal states (i.e., mind perception). While prior work has established a set of neural regions linked to social-cognitive processes like mind perception, the degree to which activation within this network relates to performance in subsequent social-cognitive tasks remains unclear. In the current study, participants performed a mind perception task (i.e., judging the likelihood that faces, varying in physical human-likeness, have internal states) while event-related fMRI was collected. Afterwards, participants performed a social attention task outside the scanner, during which they were cued by the gaze of the same faces that they previously judged within the mind perception task. Parametric analyses of the fMRI data revealed that activity within ventromedial prefrontal cortex (vmPFC) was related to both mind ratings inside the scanner and gaze-cueing performance outside the scanner. In addition, other social brain regions were related to gaze-cueing performance, including frontal areas like the left insula, dorsolateral prefrontal cortex, and inferior frontal gyrus, as well as temporal areas like the left temporo-parietal junction and bilateral temporal gyri. The findings suggest that functions subserved by the vmPFC are relevant to both mind perception and social attention, implicating a role of vmPFC in the top-down modulation of low-level social-cognitive processes.


Subject(s)
Brain/physiology , Cognition/physiology , Social Perception , Theory of Mind/physiology , Attention/physiology , Brain/diagnostic imaging , Brain Mapping , Eye Movements , Female , Humans , Judgment/physiology , Magnetic Resonance Imaging , Male , Young Adult
16.
Front Psychol ; 8: 1393, 2017.
Article in English | MEDLINE | ID: mdl-28878703

ABSTRACT

Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, which makes its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we hold toward others and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents: mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception toward robot agents and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have independent or interactive effects on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examining how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction, but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.
