Results 1 - 3 of 3
1.
iScience; 24(12): 103505, 2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34934924

ABSTRACT

Competition for social influence is a major force shaping societies, from baboons guiding their troop in different directions, to politicians competing for voters, to influencers competing for attention on social media. Social influence is invariably a competitive exercise, with multiple influencers vying for it. We study which strategy maximizes social influence under competition. Applying game theory to a scenario where two advisers compete for the attention of a client, we find that the rational solution for advisers is to communicate truthfully when favored by the client, but to lie when ignored. Across seven pre-registered studies testing 802 participants, such a strategic adviser consistently outcompeted an honest adviser. Strategic dishonesty outperformed truth-telling in swaying individual voters, the majority vote in anonymously voting groups, and the consensus vote in communicating groups. Our findings help explain the success of political movements that thrive on disinformation, and of vocal underdog politicians with no credible program.
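
The adviser game lends itself to a toy simulation. The Python sketch below is a minimal illustration under assumptions of our own, not the paper's actual design: both advisers see the same noisy public evidence (accuracy 0.75), the honest adviser always reports it, the strategic adviser reports it while favored and the opposite while ignored, and the client switches advisers only after the favored one was wrong while the rival was right.

```python
import random

def simulate(rounds=10_000, accuracy=0.75, seed=7):
    """Toy two-adviser attention game (illustrative only, not the paper's model)."""
    rng = random.Random(seed)
    favored = "honest"                      # the client starts out trusting honesty
    favored_counts = {"honest": 0, "strategic": 0}

    for _ in range(rounds):
        truth = rng.randint(0, 1)           # binary ground truth this round
        # Both advisers observe the SAME noisy public evidence.
        evidence = truth if rng.random() < accuracy else 1 - truth

        reports = {
            # The honest adviser always passes the evidence along.
            "honest": evidence,
            # The strategic adviser is truthful while favored but contrarian
            # while ignored: being right when the favored adviser is wrong is
            # the only way to win back the client's attention.
            "strategic": evidence if favored == "strategic" else 1 - evidence,
        }

        favored_counts[favored] += 1
        rival = "honest" if favored == "strategic" else "strategic"
        # The client defects to the rival only when the favored adviser was
        # wrong and the rival called it correctly.
        if reports[favored] != truth and reports[rival] == truth:
            favored = rival

    return favored_counts

if __name__ == "__main__":
    counts = simulate()
    total = sum(counts.values())
    for adviser, n in counts.items():
        print(f"{adviser} adviser held the client's attention {n / total:.1%} of the time")
```

In this toy setup the strategic adviser captures the client within a few rounds and never loses them: once favored it reports truthfully, and because both advisers echo the same evidence it is never uniquely wrong, so the client has no reason to switch back. An honest adviser who merely repeats shared evidence can never look better than the incumbent.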

2.
iScience; 24(6): 102679, 2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34189440

ABSTRACT

We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not reciprocate the AI's benevolence as much, and exploited the AI more than they exploited humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans reciprocating their cooperation, run the risk of being exploited. This vulnerability calls not just for smarter machines but also for better human-centered policies.
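
A back-of-the-envelope calculation shows why a predictably benevolent partner is tempting to exploit. The sketch below uses standard one-shot Prisoner's Dilemma payoffs (T > R > P > S; the values are illustrative, not the study's parameters) and computes the expected payoff of cooperating versus defecting against a partner believed to cooperate with probability q.

```python
def expected_payoffs(q, T=5.0, R=3.0, P=1.0, S=0.0):
    """Expected one-shot Prisoner's Dilemma payoffs against a partner
    believed to cooperate with probability q (illustrative payoffs)."""
    ev_cooperate = q * R + (1 - q) * S   # partner cooperates -> R, defects -> S
    ev_defect = q * T + (1 - q) * P      # partner cooperates -> T, defects -> P
    return ev_cooperate, ev_defect

if __name__ == "__main__":
    for q in (0.5, 0.8, 1.0):            # increasingly trusted partner
        c, d = expected_payoffs(q)
        print(f"partner cooperates with p={q:.1f}: "
              f"EV(cooperate)={c:.2f}  EV(defect)={d:.2f}  gap={d - c:.2f}")
```

Defection dominates for every q, and the gap, q(T - R) + (1 - q)(P - S), grows as the partner becomes more reliably cooperative: in pure payoff terms, the more a partner is trusted to cooperate, the more there is to gain by exploiting them, which mirrors the asymmetry the study reports toward AI partners.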

3.
Med Health Care Philos; 24(3): 329-340, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33733389

ABSTRACT

An effective method to increase the number of potential cadaveric organ donors is to make people donors by default, with the option to opt out. This non-coercive public-policy tool for influencing people's choices is often justified on the basis of the as-judged-by-themselves principle: people are nudged into choosing what they themselves truly want. We review three often-hypothesized reasons why defaults work and argue that the as-judged-by-themselves principle may hold in only two of those cases. We specify further conditions under which the principle can hold in these cases and show that whether those conditions are met is often unclear. We recommend ways to expand nationwide surveys to identify the actual reasons why defaults work, and discuss mandated-choice policy as a viable solution to many of the conundrums that arise.


Subjects
Tissue and Organ Procurement, Humans, Public Policy, Surveys and Questionnaires, Tissue Donors