Results 1 - 10 of 10
1.
iScience; 26(11): 108063, 2023 Nov 17.
Article in English | MEDLINE | ID: mdl-37915597

ABSTRACT

The difficulties associated with solving humanity's major global challenges have increasingly led world leaders and everyday citizens to publicly adopt strong emotional responses, with mixed or unknown impacts on others' actions. Here, we present two experiments showing that non-verbal emotional expressions in group interactions play a critical role in determining how individuals behave when contributing to public goods entailing future and uncertain returns. Participants' investments were not only shaped by emotional expressions but were also higher in response to anger than to joy. Our results suggest that global coordination may benefit from interactions in which emotion expressions play a central role.

2.
Trends Cogn Sci; 26(2): 174-187, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34955426

ABSTRACT

Deep learning (DL) is being successfully applied across multiple domains, yet these models learn in a most artificial way: they require large quantities of labeled data to grasp even simple concepts. Thus, the main bottleneck is often access to supervised data. Here, we highlight a trend in a potential solution to this challenge: synthetic data. Synthetic data are becoming accessible due to progress in rendering pipelines, generative adversarial models, and fusion models. Moreover, advancements in domain adaptation techniques help close the statistical gap between synthetic and real data. Paradoxically, this artificial solution is also likely to enable more natural learning, as seen in biological systems, including continual, multimodal, and embodied learning. Complementary to this, simulators and deep neural networks (DNNs) will also have a critical role in providing insight into the cognitive and neural functioning of biological systems. We also review the strengths of, and opportunities and novel challenges associated with, synthetic data.


Subject(s)
Deep Learning; Humans; Neural Networks, Computer
3.
iScience; 24(3): 102228, 2021 Mar 19.
Article in English | MEDLINE | ID: mdl-33644708

ABSTRACT

Autonomous machines are poised to become pervasive, but most people treat machines differently from humans: we are more willing to violate social norms and less likely to display altruism toward machines. Here, we report an unexpected effect: those impacted by COVID-19-as measured by a post-traumatic stress disorder scale-show a sharp reduction in this difference. Participants engaged in the dictator game with humans and machines and, consistent with prior research on disasters, those impacted by COVID-19 displayed more altruism toward other humans. Unexpectedly, participants impacted by COVID-19 displayed equal altruism toward human and machine partners. A mediation analysis suggests that altruism toward machines was explained by an increase in heuristic thinking-reinforcing prior theory that heuristic thinking encourages people to treat machines like people-and in faith in technology-perhaps reflecting long-term consequences for how we act with machines. These findings give insight into, but also raise concerns for, the design of technology.

4.
iScience; 24(3): 102141, 2021 Mar 19.
Article in English | MEDLINE | ID: mdl-33665560

ABSTRACT

The emergence of pro-social behaviors remains a key open challenge across disciplines. In this context, there is growing evidence that expressing emotions may foster human cooperation. However, it remains unclear how emotions shape individual choices and interact with other cooperation mechanisms. Here, we provide a comprehensive experimental analysis of the interplay of emotion expressions with two important mechanisms: direct and indirect reciprocity. We show that cooperation in an iterated prisoner's dilemma emerges from the combination of the opponent's initial reputation, past behaviors, and emotion expressions. Moreover, all three factors influenced the social norm adopted when assessing the actions of others - i.e., how their counterparts' reputations are updated - thus reflecting longer-term consequences. We expose a new class of emotion-based social norms, in which emotions are used to forgive those who defect but also to punish those who cooperate. These findings emphasize the importance of emotion expressions in fostering, directly and indirectly, cooperation in society.

5.
Front Psychol; 11: 554706, 2020.
Article in English | MEDLINE | ID: mdl-33281659

ABSTRACT

Recent times have seen increasing interest in conversational assistants (e.g., Amazon Alexa) designed to help users in their daily tasks. In military settings, it is critical to design assistants that are simultaneously helpful and able to minimize the user's cognitive load. Here, we show that embodiment plays a key role in achieving that goal. We present an experiment in which participants engaged in an augmented reality version of the well-known desert survival task. Participants were paired with a voice assistant, an embodied assistant, or no assistant. Both assistants made suggestions verbally throughout the task, whereas the embodied assistant further used gestures and emotion to communicate with the user. Our results indicate that both assistant conditions led to higher performance than the no-assistant condition, but the embodied assistant achieved this with less cognitive burden on the decision maker than the voice assistant, which is a novel contribution. We discuss implications for the design of intelligent collaborative systems for the warfighter.

6.
Sci Rep; 10(1): 14959, 2020 Sep 11.
Article in English | MEDLINE | ID: mdl-32917943

ABSTRACT

The iterated prisoner's dilemma has been used to study human cooperation for decades. The recent discovery of extortion and generous strategies renewed interest in the role of strategy in shaping behavior in this dilemma. But what if players could perceive each other's emotional expressions? Despite increasing evidence that emotion signals influence decision making, the effects of emotion in this dilemma have been mostly neglected. Here we show that emotion expressions moderate the effect of generous strategies, increasing or reducing cooperation according to the intention communicated by the signal; in contrast, expressions by extortionists had no effect on participants' behavior, revealing a limitation of highly competitive strategies. We provide evidence that these effects are mediated mostly by inferences about others' intentions made from strategy and emotion. These findings provide insight into the value, as well as the limits, of behavioral strategies and emotion signals for cooperation.


Subject(s)
Cooperative Behavior; Decision Making/physiology; Emotions/physiology; Prisoner Dilemma; Adolescent; Adult; Female; Humans; Male; Middle Aged
7.
Front Robot AI; 7: 572529, 2020.
Article in English | MEDLINE | ID: mdl-34212006

ABSTRACT

As autonomous machines, such as automated vehicles (AVs) and robots, become pervasive in society, they will inevitably face moral dilemmas where they must make decisions that risk injuring humans. However, prior research has framed these dilemmas in starkly simple terms, i.e., as life-and-death decisions, neglecting the influence of risk of injury to the involved parties on the outcome. Here, we focus on this gap and present experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs to either save five pedestrians, which we refer to as the utilitarian choice, or save the driver, which we refer to as the nonutilitarian choice. The results indicate that most participants made the utilitarian choice but that this choice was moderated in important ways by perceived risk to the driver and risk to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others' behavior. In the fourth experiment, we show that participants were more (less) likely to make the utilitarian choice the more utilitarian (nonutilitarian) other drivers behaved; furthermore, unlike the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas and practical guidelines for the design of autonomous machines that solve these dilemmas while, at the same time, being likely to be adopted in practice.

8.
PLoS One; 14(11): e0224758, 2019.
Article in English | MEDLINE | ID: mdl-31710610

ABSTRACT

As machines that act autonomously on behalf of others-e.g., robots-become integral to society, it is critical that we understand their impact on human decision-making. Here we show that people readily engage in social categorization, distinguishing humans ("us") from machines ("them"), which leads to reduced cooperation with machines. However, we show that a simple cultural cue-the ethnicity of the machine's virtual face-mitigated this bias for participants from two distinct cultures (Japan and the United States). We further show that situational cues of affiliative intent-namely, expressions of emotion-overrode expectations of coalition alliances from social categories: when machines were from a different culture, participants showed the usual bias when competitive emotion was shown (e.g., joy following exploitation); in contrast, participants cooperated just as much with machines that expressed cooperative emotion (e.g., joy following cooperation) as with humans. These findings reveal a path for increasing cooperation in society through autonomous machines.


Subject(s)
Artificial Intelligence; Culture; Decision Making; Emotions; Judgment; Adolescent; Adult; Aged; Female; Humans; Japan; Male; Middle Aged; United States; Young Adult
9.
Proc Natl Acad Sci U S A; 116(9): 3482-3487, 2019 Feb 26.
Article in English | MEDLINE | ID: mdl-30808742

ABSTRACT

Recent times have seen an emergence of intelligent machines that act autonomously on our behalf, such as autonomous vehicles. Despite promises of increased efficiency, it is not clear whether this paradigm shift will change how we decide when our self-interest (e.g., comfort) is pitted against the collective interest (e.g., environment). Here we show that acting through machines changes the way people solve these social dilemmas and we present experimental evidence showing that participants program their autonomous vehicles to act more cooperatively than if they were driving themselves. We show that this happens because programming causes selfish short-term rewards to become less salient, leading to considerations of broader societal goals. We also show that the programmed behavior is influenced by past experience. Finally, we report evidence that the effect generalizes beyond the domain of autonomous vehicles. We discuss implications for designing autonomous machines that contribute to a more cooperative society.


Subject(s)
Automobile Driving/psychology; Cooperative Behavior; Interpersonal Relations; Adolescent; Adult; Female; Game Theory; Humans; Male; Middle Aged; Reward; Young Adult
10.
J Pers Soc Psychol; 106(1): 73-88, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24079297

ABSTRACT

How do people make inferences about other people's minds from their emotion displays? The ability to infer others' beliefs, desires, and intentions from their facial expressions should be especially important in interdependent decision making, when people make decisions based on beliefs about others' intentions to cooperate. Five experiments tested the general proposition that people follow principles of appraisal when making inferences from emotion displays, in context. Experiment 1 revealed that the same emotion display produced opposite effects depending on context: When the other was competitive, a smile on the other's face evoked a more negative response than when the other was cooperative. Experiment 2 revealed that the essential information from emotion displays was derived from appraisals (e.g., Is the current state of affairs conducive to my goals? Who is to blame for it?); facial displays of emotion had the same impact on people's decision making as textual expressions of the corresponding appraisals. Experiments 3, 4, and 5 used multiple mediation analyses and a causal-chain design: Results supported the proposition that beliefs about others' appraisals mediate the effects of emotion displays on expectations about others' intentions. We suggest a model based on appraisal theories of emotion that posits an inferential mechanism whereby people retrieve, from emotion expressions, information about others' appraisals, which then leads to inferences about others' mental states. This work has implications for the design of algorithms that drive agent behavior in human-agent strategic interaction, an emerging domain at the interface of computer science and social psychology.


Subject(s)
Decision Making/physiology; Emotions/physiology; Interpersonal Relations; Social Behavior; Theory of Mind/physiology; Adolescent; Adult; Aged; Competitive Behavior/physiology; Cooperative Behavior; Facial Expression; Female; Humans; Male; Middle Aged; Young Adult