Results 1 - 4 of 4
1.
Sci Rep. 2024 May 14;14(1):10977.
Article in English | MEDLINE | ID: mdl-38744967

ABSTRACT

People rely on search engines for information in critical contexts, such as public health emergencies, but what makes people trust some search results more than others? Can search engines influence people's levels of trust by controlling how information is presented? And how does the presence of misinformation influence people's trust? Research has identified both rank and the presence of misinformation as factors impacting people's search behavior. Here, we extend these findings by measuring the effects of these factors, as well as of misinformation warning banners, on the perceived trustworthiness of individual search results. We conducted three online experiments (N = 3196) using COVID-19-related queries and found that although higher-ranked results are clicked more often, they are not trusted more. We also showed that misinformation does not damage trust in accurate results displayed below it. In contrast, while a warning about unreliable sources might decrease trust in misinformation, it significantly decreases trust in accurate information. This research alleviates some concerns about how people evaluate the credibility of information they find online, while revealing a potential backfire effect of one misinformation-prevention approach: banner warnings about source unreliability can lead to unexpected and nonoptimal outcomes in which people trust accurate information less.


Subjects
COVID-19, Communication, Trust, Humans, Trust/psychology, COVID-19/epidemiology, COVID-19/prevention & control, COVID-19/psychology, Female, Male, Adult, Search Engine, SARS-CoV-2/isolation & purification, Information Seeking Behavior, Young Adult, Middle Aged
3.
Sci Rep. 2023 Apr 4;13(1):5487.
Article in English | MEDLINE | ID: mdl-37015964

ABSTRACT

Artificial intelligence (AI) is already widely used in daily communication, but despite concerns about AI's negative effects on society, the social consequences of using it to communicate remain largely unexplored. We investigate the social consequences of one of the most pervasive AI applications, algorithmic response suggestions ("smart replies"), which are used to send billions of messages each day. Two randomized experiments provide evidence that these types of algorithmic recommender systems change how people interact with and perceive one another in both pro-social and anti-social ways. We find that using algorithmic responses changes language and social relationships: it increases communication speed and the use of positive emotional language, and conversation partners evaluate each other as closer and more cooperative. However, consistent with common assumptions about the adverse effects of AI, people are evaluated more negatively if they are suspected of using algorithmic responses. Thus, even though AI can increase the speed of communication and improve interpersonal perceptions, the prevailing anti-social connotations of AI undermine these potential benefits when it is used overtly.


Subjects
Artificial Intelligence, Interpersonal Relations, Humans, Communication, Language, Emotions
4.
Proc Natl Acad Sci U S A. 2023 Mar 14;120(11):e2208839120.
Article in English | MEDLINE | ID: mdl-36881628

ABSTRACT

Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, or produce entire conversations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics, such as associating first-person pronouns, contractions, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as "more human than human." We discuss solutions, such as AI accents, to reduce the deceptive potential of AI-generated language and limit the subversion of human intuition.
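To make the flawed heuristics concrete, here is a minimal Python sketch of how such lexical cues could be counted. It is an illustration only: the function name, word lists, and output fields are assumptions chosen for this example, not the feature pipeline used in the paper.

    import re

    # Illustrative word lists (assumptions for this sketch, not the paper's features).
    FIRST_PERSON = {"i", "me", "my", "mine", "myself", "we", "us", "our"}
    FAMILY = {"family", "mother", "father", "wife", "husband", "son", "daughter", "kids"}

    def heuristic_cues(text):
        """Rates of first-person pronouns, contractions, and family-topic mentions."""
        words = re.findall(r"[a-z']+", text.lower())
        n = max(len(words), 1)  # avoid division by zero on empty input
        return {
            "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
            "contraction_rate": sum("'" in w for w in words) / n,
            "mentions_family": any(w in FAMILY for w in words),
        }

    print(heuristic_cues("I'm a proud father; my family and kids mean everything to me."))

Counting cues this way also makes the manipulability finding easy to see: a text generator tuned to raise exactly these rates would score as "more human" under such heuristics, regardless of who actually wrote the text.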


Subjects
Heuristics, Language, Humans, Communication, Choline O-Acetyltransferase, Artificial Intelligence