Misinformation—defined here as false or misleading information presented as accurate, regardless of intent—can negatively impact people’s beliefs and behaviors [1]. Behaviors driven or justified in part by misinformation, such as vaccination refusal, can detrimentally impact not only the individual (e.g., by increasing the risk of contracting potentially life-threatening illnesses), but also others (e.g., by increasing the potential spread of viruses to vulnerable populations) and society more broadly (e.g., by placing increased pressure on the health-care system [2]). Given these threats, developing and implementing interventions that reduce people’s susceptibility to misinformation has become a key focus of both research and policy [1, 3, 4].
To date, several psychologically based misinformation interventions have received sound empirical support (for a synthesis of currently recommended interventions, see Kozyreva et al. [5]). For example, providing corrective information that directly counters a piece of misinformation (i.e., “debunking”) can significantly and meaningfully reduce belief in the targeted false information [6]. However, although targeted interventions can be highly effective, the advent of social media has made detecting and directly counteracting misinformation increasingly difficult, and in many cases impossible [7]. As such, recent years have seen an increased focus on developing and implementing generalized misinformation interventions that are easily scalable to social-media environments [5, 8, 9]. Many of these interventions are based on nudge theory, which posits that small changes in choice architecture can meaningfully impact decision-making [10]. In the realm of misinformation, nudge-based interventions typically attempt to reduce misinformation sharing that occurs due to inattentiveness to information veracity [11] by priming people to consider (1) the accuracy of encountered information (i.e., accuracy nudges [12, 13]) or (2) the attitudes or behaviors of others as they pertain to misinformation sharing (i.e., social-norm nudges [14, 15]). Nudge-based misinformation interventions are proposed to be effective because they draw people’s attention to the importance of veracity, thereby increasing the weight placed on veracity as a criterion during sharing decisions (in line with the limited-attention utility model [11]).
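To make the proposed mechanism concrete, the limited-attention utility model can be sketched loosely as follows (this is our illustrative rendering, not the formal specification given in [11]): the utility of sharing a post is a weighted sum over content attributes (e.g., humor, partisan alignment, perceived accuracy), but an attribute’s preference weight only enters the decision if the sharer currently attends to that attribute,

\[
U(\text{share}) \;=\; \sum_{a \,\in\, A_{\text{attended}}} \beta_a \, x_a ,
\]

where \(x_a\) is the post’s value on attribute \(a\) and \(\beta_a\) is the sharer’s preference weight for that attribute. On this account, a nudge operates by moving veracity into the attended set \(A_{\text{attended}}\), so that \(\beta_{\text{veracity}}\) begins to influence the sharing decision.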
In experimental settings, nudge-based misinformation interventions have generally been shown to have a beneficial, though small, impact on engagement behavior, either by directly reducing intent to share false information or by improving “sharing discernment” (i.e., increasing the proportion of true relative to false information participants report they would share [8, 12, 14–16]). However, despite these positive findings, studies assessing the effectiveness of nudge-based misinformation interventions do not always appropriately consider the structure of the social-media information environment. Specifically, (1) the proportion of false information is often artificially high (e.g., 50% of the claims presented), (2) participants are often exposed only to verifiable (i.e., objectively true or false) information, often in the form of news headlines, and (3) participants are often required to actively appraise whether or not they would engage with each item (e.g., headline). In contrast, the quantity of misinformation people are exposed to on real-world social-media platforms is typically small compared to the amount of true or non-verifiable (e.g., personal or opinion-based) information [17]. Additionally, the volume of information people encounter on social media exceeds what they are able or inclined to critically appraise [11, 18].
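For reference, sharing discernment in this literature is typically computed as the difference between a participant’s share rates for true and false items (a convention we sketch here for concreteness; individual studies may operationalize it slightly differently):

\[
D_{\text{sharing}} \;=\; \Pr(\text{share} \mid \text{true}) \;-\; \Pr(\text{share} \mid \text{false}),
\]

so an intervention improves discernment if it raises sharing of true items, lowers sharing of false items, or both.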
Research outside the field of misinformation has consistently shown that nudges can be highly context-specific and quick to decay [10, 19]. It is therefore unclear whether the effects of nudge-based interventions observed in typical experimental settings would emerge in more realistic information environments. In fact, exploratory analyses by Roozenbeek et al. [16] suggest accuracy nudges may only be effective for the few posts immediately succeeding the nudge. If so, nudge-based misinformation interventions may be less effective than believed, particularly when misinformation makes up a small proportion of the content in the information environment. Because implementing interventions, especially those that shift responsibility to the individual consumer, can have detrimental practical consequences if the interventions are ineffective [20], the question of whether the effects of nudge-based interventions translate to more realistic information environments requires direct investigation.
We are aware of one recent study that assessed the efficacy of an accuracy-nudge intervention in a setting that included non-news posts in addition to true and false headlines [21]. Participants were presented with 72 posts, of which 48 were social posts (50% political, 50% apolitical), 12 were true headlines, and 12 were false headlines. The researchers found that accuracy prompts led to a small improvement in sharing discernment but no improvement in liking discernment (in fact, liking of false posts was numerically greater than in the control condition across all accuracy-nudge conditions, and in some cases statistically significantly so). Additionally, the accuracy nudge neither significantly reduced engagement with (or sharing of) false information nor significantly increased engagement with (or sharing of) true information compared to the control condition. This suggests that nudging may have only a limited positive impact on sharing discernment in environments with lower volumes of misinformation. However, in this study the proportion of each information type (i.e., true, false, and social) was kept constant across all conditions. As such, it remains unclear whether the effectiveness of nudge interventions varies depending on the proportion of misinformation relative to true and non-news information.
The overarching aim of the current study was to assess the effectiveness of a scalable nudge-based intervention (specifically, a combined accuracy-prompt and social-norm intervention) in environments with varied proportions of misinformation. Across conditions, false headlines made up 50% (40 false headlines, 40 true headlines), 20% (10 false headlines, 40 true headlines), or 12.5% (10 false headlines, 40 true headlines, 30 social posts) of total posts. To increase external validity, posts were presented in a mock feed using a realistic social-media simulator [22], and participants were not required to actively appraise every post (i.e., as on real social-media platforms, participants could scroll past posts without reading them).
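As a quick arithmetic check of the feed compositions above, the following sketch (illustrative only; the condition labels and variable names are ours, not from the study materials) reproduces the stated false-headline proportions:

```python
# Illustrative check of the feed compositions described above
# (condition labels and variable names are ours, not from the study materials).
conditions = {
    "50% false":   {"false": 40, "true": 40, "social": 0},
    "20% false":   {"false": 10, "true": 40, "social": 0},
    "12.5% false": {"false": 10, "true": 40, "social": 30},
}

for label, feed in conditions.items():
    total = sum(feed.values())
    print(f"{label}: {feed['false']}/{total} = {feed['false'] / total:.1%}")

# Output:
# 50% false: 40/80 = 50.0%
# 20% false: 10/50 = 20.0%
# 12.5% false: 10/80 = 12.5%
```

Note that the 20% condition holds the number of false and true headlines constant relative to the 12.5% condition, varying only the presence of social posts, whereas the 50% condition increases the absolute number of false headlines.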
We pre-registered two hypotheses: (1) there would be a main effect of the nudge intervention, such that the nudge would significantly improve engagement discernment; and (2), central to the current research question, the effectiveness of the nudge intervention would depend on the misinformation proportion. As this was the first time this question had been empirically assessed, we did not pre-register a directional hypothesis for (2). However, given the context-dependent nature of nudges and prior research suggesting that the effectiveness of nudge-based misinformation interventions may decay quickly [16], we anticipated that, if the effectiveness of the nudge intervention did differ across conditions with varying proportions of misinformation, the nudge would be significantly more effective when the proportion of false headlines was high (i.e., 50%) than when it was lower (i.e., 20% or 12.5%).