ABSTRACT
Corrected misinformation can continue to influence inferential reasoning. It has been suggested that such continued influence is partially driven by misinformation familiarity, and that corrections should therefore avoid repeating misinformation to avoid inadvertent strengthening of misconceptions. However, evidence for such familiarity-backfire effects is scarce. We tested whether familiarity backfire may occur if corrections are processed under cognitive load. Although misinformation repetition may boost familiarity, load may impede integration of the correction, reducing its effectiveness and therefore allowing a backfire effect to emerge. Participants listened to corrections that repeated misinformation while in a driving simulator. Misinformation familiarity was manipulated through the number of corrections. Load was manipulated through a math task administered selectively during correction encoding. Multiple corrections were more effective than a single correction; cognitive load reduced correction effectiveness, with a single correction entirely ineffective under load. This provides further evidence against familiarity-backfire effects and has implications for real-world debunking.
ABSTRACT
Misinformed beliefs are difficult to change. Refutations that target false claims typically reduce false beliefs, but tend to be only partially effective. In this study, a social norming approach was explored to test whether provision of peer norms could provide an alternative or complementary approach to refutation. Three experiments investigated whether a descriptive norm, by itself or in combination with a refutation, could reduce the endorsement of worldview-congruent claims. Experiment 1 found that using a single-point estimate to communicate a norm affected belief but had less impact than a refutation. Experiment 2 used a verbally presented distribution of four values to communicate a norm, which was largely ineffective. Experiment 3 used a graphically presented social norm with 25 values, which was found to be as effective at reducing claim belief as a refutation, with the combination of both interventions being most impactful. These results provide a proof of concept that normative information can aid in the debunking of false or equivocal claims, and suggest that theories of misinformation processing should take social factors into account.
Subject(s)
Communication, Social Norms, Humans

ABSTRACT
Misinformation regarding the cause of an event often continues to influence an individual's event-related reasoning, even after they have received a retraction. This is known as the continued influence effect (CIE). Dominant theoretical models of the CIE have suggested the effect arises primarily from failures to retrieve the correction. However, recent research has implicated information integration and memory updating processes in the CIE. As a behavioural test of integration, we applied an event segmentation approach to the CIE paradigm. Event segmentation theory suggests that incoming information is parsed into distinct events separated by event boundaries, which can have implications for memory. As such, when an individual encodes an event report that contains a retraction, the presence of event boundaries should impair retraction integration and memory updating, resulting in an enhanced CIE. Experiments 1 and 2 employed spatial event segmentation boundaries in an attempt to manipulate the ease with which a retraction can be integrated into a participant's mental event model. While Experiment 1 showed no impact of an event boundary, Experiment 2 yielded evidence that an event boundary resulted in a reduced CIE. To the extent that this finding reflects enhanced retrieval of the retraction relative to the misinformation, it is more in line with retrieval accounts of the CIE.
Subject(s)
Communication, Memory, Humans, Mental Recall

ABSTRACT
Given that being misinformed can have negative ramifications, finding optimal corrective techniques has become a key focus of research. In recent years, several divergent correction formats have been proposed as superior based on distinct theoretical frameworks. However, these correction formats have not been compared in controlled settings, so the suggested superiority of each format remains speculative. Across four experiments, the current paper investigated how altering the format of corrections influences people's subsequent reliance on misinformation. We examined whether myth-first, fact-first, fact-only, or myth-only correction formats were most effective, using a range of different materials and participant pools. Experiments 1 and 2 focused on climate change misconceptions; participants were Qualtrics online panel members and students taking part in a massive open online course, respectively. Experiments 3 and 4 used misconceptions from a diverse set of topics, with Amazon Mechanical Turk crowdworkers and university student participants. We found that the impact of a correction on beliefs and inferential reasoning was largely independent of the specific format used. The clearest evidence for any potential relative superiority emerged in Experiment 4, which found that the myth-first format was more effective at myth correction than the fact-first format after a delayed retention interval. However, in general it appeared that as long as the key ingredients of a correction were presented, format did not make a considerable difference. This suggests that simply providing corrective information, regardless of format, is far more important than how the correction is presented.