1.
Sci Rep; 14(1): 17127, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054335

ABSTRACT

People increasingly use large language model (LLM)-based conversational agents to obtain information. However, the information these models provide is not always factually accurate. Thus, it is critical to understand what helps users adequately assess the credibility of the provided information. Here, we report the results of two preregistered experiments in which participants rated the credibility of accurate versus partially inaccurate information ostensibly provided by a dynamic text-based LLM-powered agent, a voice-based agent, or a static text-based online encyclopedia. We found that people were better at detecting inaccuracies when identical information was provided as static text compared to both types of conversational agents, regardless of whether information search applications were branded (ChatGPT, Alexa, and Wikipedia) or unbranded. Mediation analysis overall corroborated the interpretation that a conversational nature poses a threat to adequate credibility judgments. Our research highlights the importance of presentation mode when dealing with misinformation.


Subject(s)
Communication; Judgment; Humans; Male; Female; Adult; Language; Young Adult