Results 1 - 6 of 6
1.
Mem Cognit ; 51(2): 422-436, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36125658

ABSTRACT

Misinformation often has a continuing influence on event-related reasoning even when it is clearly and credibly corrected; this is referred to as the continued influence effect. The present work investigated whether a correction's effectiveness can be improved by explaining the origins of the misinformation. In two experiments, we examined whether a correction that explained misinformation as originating either from intentional deception or an unintentional error was more effective than a correction that only identified the misinformation as false. Experiment 1 found no evidence that corrections explaining the reason the misinformation was presented were more effective than a correction not accompanied by an explanation, and no evidence of a difference in effectiveness between a correction that explained the misinformation as intentional deception and one that explained it as an unintentional error. We replicated this in Experiment 2 and found substantial attenuation of the continued influence effect in a novel scenario with the same underlying structure. Overall, the results suggest that informing people of the cause leading to the presentation of misinformation, whether deliberate or accidental, may not be an effective correction strategy over and above stating that the misinformation is false.


Subject(s)
Cognition; Communication; Humans; Disinformation
2.
J Exp Psychol Learn Mem Cogn ; 49(2): 284-300, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36006725

ABSTRACT

The samples of evidence we use to make inferences in everyday and formal settings are often subject to selection biases. Two property induction experiments examined group and individual sensitivity to one type of selection bias: sampling frames, causal constraints that only allow certain types of instances to be sampled. Group data from both experiments indicated that people were sensitive to the effects of such frames, showing narrower generalization when sample instances were selected because they shared a target property (property sampling) than when instances were sampled because they belonged to a particular group (category sampling). Group generalization patterns conformed to the predictions of a Bayesian model of property induction that incorporates a selective sampling mechanism. In each experiment, however, there was considerable individual variation, with a nontrivial minority showing little sensitivity to sampling frames. Experiment 2 examined correlates of frame sensitivity: a composite measure of working memory capacity predicted individual sensitivity to sampling frames. These results have important implications for current debates about people's ability to factor sample selection mechanisms into their inferences and for the development of formal models of inductive inference. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
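
The selective sampling mechanism mentioned in this abstract can be illustrated with a toy Bayesian calculation. The sketch below is not the authors' model: the hypothesis space, category sizes, prior, and sample size are assumptions chosen only to show why property sampling should license narrower generalization than category sampling.

# Toy illustration (assumed values, not the paper's model) of sampling frames
# in Bayesian property induction. Two hypotheses about how far a property
# extends: a narrow one covering only the 10 kinds in the sampled category,
# and a broad one covering 40 kinds. All observed instances come from the
# sampled category and have the property.
hypotheses = {"narrow": 10, "broad": 40}   # number of kinds in each extension
prior = {"narrow": 0.5, "broad": 0.5}
category_size = 10                         # kinds in the sampled category
n_obs = 5                                  # observed positive instances

def likelihood(extension_size, frame):
    if frame == "category":
        # Instances were chosen for their category membership; both hypotheses
        # predict they have the property, so the data do not discriminate.
        return 1.0
    # Property sampling: instances are drawn from the kinds that bear the
    # property, so each observation lands in the sampled category with
    # probability category_size / extension_size (the size principle).
    return (category_size / extension_size) ** n_obs

for frame in ("category", "property"):
    unnorm = {h: prior[h] * likelihood(size, frame) for h, size in hypotheses.items()}
    z = sum(unnorm.values())
    print(frame, {h: round(p / z, 3) for h, p in unnorm.items()})
# Category sampling leaves the posterior at the prior (broad generalization stays
# plausible); property sampling concentrates belief on the narrow hypothesis.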


Subject(s)
Generalization, Psychological; Problem Solving; Humans; Selection Bias; Bayes Theorem; Memory, Short-Term
3.
J Exp Psychol Learn Mem Cogn ; 49(9): 1419-1438, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36048051

ABSTRACT

In describing how people generalize from observed samples of data to novel cases, theories of inductive inference have emphasized the learner's reliance on the contents of the sample. More recently, a growing body of literature suggests that different assumptions about how a data sample was generated can lead the learner to draw qualitatively distinct inferences on the basis of the same observations. Yet relatively little is known about how and when these two sources of evidence are combined. Do sampling assumptions affect how the sample contents are encoded, or is any influence exerted only at the point of retrieval, when a decision is to be made? We report two experiments aimed at exploring this issue. By systematically varying both the sampling cover story and whether it is given before or after the training stimuli, we are able to determine whether encoding or retrieval processes drive the impact of sampling assumptions. We find that the sampling cover story affects generalization when it is presented before the training stimuli, but not after, which suggests that sampling assumptions are integrated during encoding. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Conditioning, Classical; Generalization, Psychological; Humans
4.
Cognition ; 223: 105023, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35149359

ABSTRACT

Consensus between informants is a valuable cue to a claim's epistemic value when informants' beliefs are developed independently of each other. Recent work (Yousif et al., 2019) described an illusion of consensus such that people did not generally discriminate between the epistemic warrant of true consensus, where a majority claim is supported by multiple independent sources, and false consensus arising from repetition of a single source's claim. Four experiments tested a novel account of the illusion of consensus: that it arises when people are unsure about the independence of the primary sources on which informant claims are based. When this independence relationship was ambiguous, we found evidence for the illusion. However, when steps were taken to highlight the independence between data sources in the true consensus conditions, and confidence in a claim was measured against a no-consensus baseline (where there was an equal number of reports supporting and opposing a claim), more weight was given to claims based on true consensus than to claims based on false consensus. These findings show that although the illusion of consensus is prevalent, people do have the capacity to distinguish between true and false consensus.
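
The contrast between true and false consensus described above can be made concrete with a simple Bayesian sketch. This is an illustrative sketch under assumed values, not the experiments' analysis; the prior, the source reliability, and the report counts are assumptions.

# Toy illustration (assumed values, not the paper's analysis) of why true
# consensus should raise confidence more than false consensus: independent
# primary sources each contribute a likelihood ratio, whereas repeated reports
# that all trace back to a single primary source contribute only one.

def posterior(prior, reliability, n_independent_sources):
    # Posterior probability that a claim is true after n independent sources
    # each report that it is true, where each source reports the truth with
    # probability `reliability`.
    lr = (reliability / (1 - reliability)) ** n_independent_sources
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

prior, reliability = 0.5, 0.7
# True consensus: four reports from four independent primary sources.
print("true consensus :", round(posterior(prior, reliability, 4), 3))
# False consensus: four reports that repeat one primary source, so only one
# source's likelihood ratio applies.
print("false consensus:", round(posterior(prior, reliability, 1), 3))
# With equal numbers of supporting and opposing independent reports (the
# no-consensus baseline), the likelihood ratios cancel and the posterior
# stays at the prior.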


Subject(s)
Illusions; Consensus; Humans; Judgment; Uncertainty
5.
Cognition ; 205: 104453, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33011527

ABSTRACT

Misinformation has become an increasingly prominent topic of research. Studies on the 'Continued Influence Effect' (CIE) show that misinformation continues to influence reasoning despite subsequent retraction. Current explanatory theories of the CIE tacitly assume that continued reliance on misinformation is the consequence of a biased process. In the present work, we show why this perspective may be erroneous. Using a Bayesian formalism, we conceptualize the CIE as a scenario involving contradictory testimonies and incorporate two previously overlooked factors: the temporal dependence between the misinforming and retracting sources (misinformation precedes its retraction) and their perceived reliability. When considering such factors, we show the CIE to have normative backing. We demonstrate that, on aggregate, lay reasoners (N = 101) intuitively endorse the necessary assumptions that demarcate the CIE as a rational process, still exhibit the standard effect, and appropriately penalize the reliability of contradicting sources. Individual-level analyses revealed that although many participants endorsed the assumptions for a rational CIE, very few were able to execute the complex belief update that the Bayesian model entails. In sum, we provide a novel illustration of the pervasive influence of misinformation as the consequence of a rational process.
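
The abstract's claim that continued influence can have normative backing is easy to see in a stripped-down version of the contradictory-testimony setup. The sketch below is a deliberate simplification, not the authors' full model (which, as the abstract notes, also penalizes the reliability of contradicting sources); the reliability values are assumed for illustration.

# Toy illustration (assumed values, simplified from the Bayesian formalism
# described above): a first source asserts a claim, a second source later
# denies it, and each source reports the truth with probability equal to its
# reliability. Unless the retracting source is more reliable than the
# misinforming one, belief in the claim should not drop to zero.

def belief_after_retraction(prior, r_misinformer, r_retractor):
    # Posterior probability that the claim is true given one assertion and
    # one denial from sources with the given reliabilities.
    p_true = prior * r_misinformer * (1 - r_retractor)
    p_false = (1 - prior) * (1 - r_misinformer) * r_retractor
    return p_true / (p_true + p_false)

# Equally reliable sources: the retraction only returns belief to the prior.
print(round(belief_after_retraction(0.5, 0.8, 0.8), 3))
# A more reliable retractor lowers belief further, but not to zero.
print(round(belief_after_retraction(0.5, 0.6, 0.9), 3))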


Subject(s)
Communication; Problem Solving; Bayes Theorem; Humans; Reproducibility of Results
6.
Behav Res Methods ; 51(3): 1426-1440, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29943224

ABSTRACT

Open-ended questions, in which participants write or type their responses, are used in many areas of the behavioral sciences. Although effective in the lab, they are relatively untested in online experiments, and the quality of responses is largely unexplored. Closed-ended questions are easier to use online because they generally require only single key- or mouse-press responses and are less cognitively demanding, but they can bias the responses. We compared the data quality obtained with open and closed response formats using the continued-influence effect (CIE) paradigm, in which participants read a series of statements about an unfolding event, one of which is unambiguously corrected later. Participants typically continue to refer to the corrected misinformation when making inferential statements about the event. We implemented this basic procedure online (Exp. 1A, n = 78), comparing standard open-ended responses to an alternative procedure using closed-ended responses (Exp. 1B, n = 75). Finally, we replicated these findings in a larger preregistered study (Exps. 2A and 2B, n = 323). We observed the CIE in all conditions: Participants continued to refer to the misinformation following a correction, and their references to the target misinformation were broadly similar in number across open- and closed-ended questions. We found that participants' open-ended responses were relatively detailed (including an average of 75 characters for inference questions), and almost all responses attempted to address the question. Responses were faster, however, for closed-ended questions. Overall, we suggest that, with caution, it may be possible to use either method for gathering CIE data.


Subject(s)
Internet; Adult; Bias; Communication; Female; Humans; Male; Middle Aged; Writing; Young Adult