Misinformation – which can refer to fabricated news stories, false rumors, conspiracy theories, or coordinated disinformation campaigns – is a serious threat to society and democracy1,2. It can undermine trust in fair elections3, reduce support for climate change mitigation4, and increase vaccine hesitancy5. Thus, there has been growing interest in understanding the psychology of belief in misinformation and how to mitigate its spread1,2,6,7.
There is a substantial partisan divide in how people judge information to be true or false. People are much more likely to believe news with politically-congruent content8–11 or news that comes from politically-congruent sources12. Additionally, when asked for the first association that comes to mind when they hear the term “fake news,” US Republicans tend to say “CNN” and US Democrats tend to say “Fox News,”13 revealing that even the definition of fake news seems to be influenced by partisan affinities.
However, it is unclear why this partisan divide in belief exists. One potential explanation for this finding is that people tend to engage in politically-motivated cognition14,15. In other words, while people are motivated to be accurate, they also have social goals (e.g., group belonging, status, etc.) for holding certain beliefs that can interfere with accuracy goals8. Another potential explanation is that partisans simply have different pre-existing knowledge, or different prior beliefs, as a result of exposure to different partisan news outlets and social media feeds6. Given that partisans are exposed to different information, it is challenging to differentiate between these two explanations unless accuracy or social motivations are experimentally manipulated16–18.
Several studies have also found that US conservatives and Republicans tend to believe in and share far more misinformation than US liberals19–24, and a similar pattern appears to exist in many other countries25. This finding presents another puzzle. One interpretation of this asymmetry is that conservatives are exposed to more low-quality information and thus have less accurate political knowledge, perhaps because conservative politicians and news media share less accurate information. For instance, one study estimated that former US President Trump was the largest source of coronavirus misinformation during the early stages of the pandemic26. Another interpretation again focuses on motivation, suggesting that conservatives may, in some contexts, have greater motivations to believe ideology- or identity-consistent claims, which could interfere with their motivation to be accurate27,28. Yet, again, it is difficult to disentangle the causal role of motivation versus prior knowledge without experimentally manipulating motivations.
To address these questions, we examine the causal role of accuracy motivations in shaping judgements of true and false political news via the provision of financial incentives for accurate responses. Prior research on the effect of financial incentives for accuracy has yielded mixed results, leaving many open questions. For example, previous studies have found that financial incentives to be accurate can reduce partisan bias about politicized issues29,30 and headlines31, or improve accuracy about scientific information32. However, another study found that incentives for accuracy can backfire, slightly increasing belief in fake news9. Incentives also do not eliminate people’s tendency to view familiar statements33 or positions for which they advocate34 as more accurate, raising questions as to whether incentives can override the heuristics people use to judge truth35. These conflicting results motivate the need for a systematic investigation of when and for whom accuracy motivations influence belief.
We also examine whether social motivations to identify posts that will be liked by one’s in-group interfere with accuracy motivations. On social media, content that fulfills social-identity motivations, such as expressions of out-group derogation, tends to receive higher engagement36. False news stories may be good at fulfilling these social motivations, as false content is often negative about out-group members19,37. The incentive structure of the social media environment draws attention to social motivations (e.g., receiving social approval in the form of likes and shares), which may lead people to give less weight to accuracy motivations online38,39.
We also compare the effect of accuracy motivations to the effects of other factors known to be associated with truth discernment. For instance, one account of fake news sharing suggests that people are “lazy, not biased” and that factors such as analytic thinking40 and inattention41 matter more than motivated reasoning and partisanship. Other work has identified political knowledge42, media literacy skills43, and affective polarization37 as important predictors of the belief in and sharing of news. We measure and compare the relative importance of each of these factors in explaining fake news belief2.
Overview
Across three pre-registered experiments, including a replication with a nationally representative US sample, we test whether (A) incentives to be accurate improve people’s ability to discern between true and false news and (B) reduce partisan bias (Experiment 1). Additionally, we test whether (C) social incentives to identify posts that appeal to one’s in-group (mirroring the incentives of social media) reduce accuracy, even if paired with accuracy incentives (Experiment 2). Further, (D) to test a key psychological process underlying our results, we examine whether the effects of incentives dissipate when partisan source cues are removed from posts (Experiment 3). Finally, in an integrative data analysis, we conducted a high-powered test to (E) see if these effects are moderated by political ideology, (F) examine whether motivation helps explain the gap in accuracy between conservatives and liberals, and (G) compare the effects of motivation to the effects of other variables known to predict misinformation susceptibility.
Experiment 1: Incentives Improve Accuracy and Reduce Bias
In Experiment 1, we recruited a politically-balanced sample of 462 US participants (194 M, 255 F, 12 trans/non-binary; age: M = 35.85, SD = 13.66; politics: 253 Democrats, 201 Republicans) via the survey platform Prolific Academic44. Participants were shown 16 pre-tested news headlines, each with an accompanying picture and source (similar to how a news article preview appears in someone’s Facebook feed). In a pre-test, eight headlines (four false and four true) were rated as more accurate by Democrats than Republicans, and the other eight (four false and four true) were rated as more accurate by Republicans than Democrats. After seeing each headline, participants were asked “To the best of your knowledge, is the claim in the above headline accurate?” and then “If you were to see the above article on social media, how likely would you be to share it?” See Methods for more details.
Half of the participants were randomly assigned to the accuracy incentives condition. In this condition, participants were told they would receive a small bonus payment of up to one dollar based on how many correct answers they could provide regarding the accuracy of the articles. The other half of participants were assigned to a control condition in which they were asked the same questions about accuracy and sharing without any incentive to be accurate.
We first examined whether accuracy incentives improved truth discernment, or the number of true headlines participants rated as true minus the number of false headlines participants rated as true10. As predicted, participants in the accuracy incentives condition (M = 3.01, 95% CI = [2.68, 3.34]) were better at discerning truth than those in the control condition (M = 2.43, 95% CI = [2.12, 2.73]), t(457.64) = 2.58, p = 0.010, d = 0.24. In other words, participants answered 11.01 (out of 16) questions correctly in the accuracy incentives condition, as opposed to 10.43 (out of 16) questions in the control condition.
We next examined whether incentives decreased partisan bias, or the number of politically-congruent headlines participants rated as true minus the number of politically-incongruent headlines participants rated as true. As predicted, partisan bias in accuracy judgements was 31% smaller in the accuracy incentives condition (M = 1.31, 95% CI = [1.04, 1.58]) as compared to the control condition (M = 1.91, 95% CI = [1.62, 2.19]), t(495.8) = 3.01, p = 0.001, d = 0.28.
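For concreteness, the two outcome measures defined above can be expressed in code. The following is an illustrative sketch reconstructed from the definitions in the text, using hypothetical response data (it is not the authors' analysis code):

```python
# Each participant rated 16 headlines: 8 true / 8 false, half of each
# politically congruent with the participant and half incongruent.

def truth_discernment(rated_true, is_true):
    """True headlines rated as true minus false headlines rated as true."""
    hits = sum(r for r, t in zip(rated_true, is_true) if t)
    false_alarms = sum(r for r, t in zip(rated_true, is_true) if not t)
    return hits - false_alarms

def partisan_bias(rated_true, is_congruent):
    """Congruent headlines rated as true minus incongruent headlines rated as true."""
    congruent = sum(r for r, c in zip(rated_true, is_congruent) if c)
    incongruent = sum(r for r, c in zip(rated_true, is_congruent) if not c)
    return congruent - incongruent

def n_correct(rated_true, is_true):
    """Total correct answers across all 16 items."""
    return sum(int(r == t) for r, t in zip(rated_true, is_true))
```

With eight true and eight false items, the number of correct answers equals 8 plus the discernment score, which matches the conversion reported above (e.g., a discernment of 3.01 corresponds to 11.01 correct answers out of 16).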
Additional analysis (see Supplementary Materials S1 for extended results) found that accuracy incentives increased the percentage of politically-incongruent true headlines rated as true (M = 51.53, 95% CI = [47.36, 55.70]) as compared to the control condition (M = 38.25, 95% CI = [34.41, 42.08]), p < 0.001, d = 0.43. When controlling for multiple comparisons with Tukey HSD post-hoc tests, incentives had no effect on politically-incongruent false news (p = 0.444), politically-congruent false news (p = 0.999), or politically-congruent true news (p = 0.472). In other words, the effect of the incentives was driven by an increase in belief in true news from the other side (e.g., Republicans saying news from “The New York Times” is more accurate, or Democrats saying news from “Fox News” is more accurate). Results from all three experiments are plotted in Fig. 1.
We also examined whether the incentive influenced sharing discernment, or the number of true headlines shared minus the number of false headlines shared. Interestingly, even though better sharing behavior was not incentivized, sharing discernment was slightly higher in the accuracy incentive condition (M = 0.38, 95% CI = [0.28, 0.48]) as compared to the control condition (M = 0.22, 95% CI = [0.15, 0.30]), t(424.8) = 0.01, p = 0.037, d = 0.23.
Experiment 2: Social Motivations Interfere with Accuracy Motivations
In Experiment 2, we aimed to replicate and extend the results of Experiment 1 by examining whether social motivations to identify articles that would be liked by one’s in-group (like those present on social media) might interfere with accuracy motives. We recruited another politically-balanced sample of 998 participants (463 M, 505 F, 30 transgender/non-binary/other; age: M = 36.17, SD = 13.94; politics: 568 liberals, 430 conservatives) via Prolific Academic. In addition to the accuracy incentives and control conditions, we added a social incentives condition, in which participants were given a financial incentive to correctly identify articles that would appeal to members of their own political party. Specifically, participants were told that they would receive a bonus payment of up to one dollar based on how accurately they identified articles that would be liked by members of their political party if they shared them on social media. Immediately after answering this question, participants were asked about the accuracy of the article and how likely they would be to share it. In a final condition, the mixed incentives condition, participants received a financial incentive of up to one dollar to identify articles that would be liked by their in-group, followed by an additional financial incentive to identify accurate articles – in other words, people had mixed motivations.
We first examined how these motivations influenced truth discernment. Replicating the results of Experiment 1, there was a significant main effect of the accuracy incentives on truth discernment, F(1, 994) = 29.14, p < 0.001, η2G = 0.03, and a significant main effect of the social incentives on truth discernment, F(1, 994) = 7.53, p = 0.006, η2G = 0.01, but no interaction between the accuracy and social incentives (p = 0.237). Tukey HSD post-hoc tests indicated that truth discernment was higher in the accuracy incentives condition (M = 3.00, 95% CI = [2.69, 3.32]) compared to the control condition (M = 2.01, 95% CI = [1.74, 2.30]), p < 0.001, d = 0.41. Truth discernment was also higher in the accuracy incentives condition compared to the social incentives condition (M = 1.78, 95% CI = [1.49, 2.07]), p < 0.001, d = 0.50, and the mixed incentives condition (M = 2.42, 95% CI = [2.11, 2.71]), p = 0.029, d = 0.27. However, the mixed incentives condition did not differ from the control condition (p = 0.676), indicating that the social motivations may have interfered with accuracy concerns. Taken together, these results suggest that accuracy motivations increase truth discernment, but social motives can override accuracy motives.
We then examined how these motives influenced partisan bias. Replicating the results from Experiment 1, there was a significant main effect of accuracy incentives on partisan bias, F(1, 994) = 9.01, p = 0.003, η2G = 0.01, but no effect of the social incentives, F(1, 994) = 0.60, p = 0.441, η2G = 0.00, or the interaction between accuracy and social incentives, F(1, 994) = 0.27, p = 0.606, η2G = 0.00. Post-hoc tests indicated that there was a marginal difference in partisan bias between the accuracy incentives condition (M = 1.26, 95% CI = [1.01, 1.51]) and the control condition (M = 1.72, 95% CI = [1.47, 1.98]), p = 0.062, d = 0.23 – a 27% decrease in partisan bias. There was also a significant difference between the accuracy incentives condition and the social incentives condition (M = 1.76, 95% CI = [1.48, 2.03]), p = 0.04, d = 0.24. No other post-hoc tests yielded significant differences (ps > 0.182).
Follow-up analysis (Supplementary Materials S1) once again indicated that the incentives primarily impacted the percentage of incongruent true articles rated as accurate (M = 55.61%, 95% CI = [51.68, 59.54]) when compared to the control condition (M = 37.65%, 95% CI = [33.83, 41.46]), p < 0.001, d = 0.58 (Fig. 1). However, the percent of incongruent true articles rated as accurate did not differ in the mixed incentives condition (M = 46.07%, 95% CI = [42.04, 51.10]) compared to the control condition (p = 0.092), suggesting once again that social motives compete with accuracy motives. The incentives again did not impact belief in congruent true news, incongruent false news, or congruent false news (ps > 0.148).
Incentives did not improve sharing discernment (p = 0.996), diverging from the results of Experiment 1. However, follow-up analysis (Supplementary Materials S1) indicated that those in the social incentives condition shared more politically-congruent news (either true or false) (M = 1.98, 95% CI = [1.90, 2.05]) as compared to the control condition (M = 1.80, 95% CI = [1.74, 1.87]), p = 0.015, d = 0.21. Additionally, those in the mixed incentives condition (M = 2.02, 95% CI = [1.94, 2.10]) shared more politically-congruent news (true or false) as compared to the control condition, p < 0.001, d = 0.26. Thus, prompting participants to think about whether an article will be liked by their party – whether or not they are also incentivized to be accurate – appears to indiscriminately increase sharing of both true and false news that appeals to one’s own political party.
Experiment 3: The Effect of Accuracy Incentives Depends on Source Cues
In Experiment 3, we aimed to replicate our prior findings in a nationally representative US sample and test a potential process behind the effects of accuracy incentives. Specifically, we recruited a sample of 921 participants via Prolific Academic that was quota-matched to the national distribution on age, gender, ethnicity, and political party (439 M, 470 F, 12 transgender/non-binary/other; age: M = 40.07, SD = 14.67; politics: 542 liberals, 379 conservatives). Since prior work has found strong effects of source cues12 and party cues45 more broadly, we suspected that people were responding to source cues when making judgements about news. Because true news often comes from more recognizable sources with partisan connotations (e.g., “nytimes.com” as opposed to the fake news website “yournewswire.com”)46, this may explain why incentives only impacted belief in true news. Accuracy incentives may have caused people to override a judgement based on a partisan source cue and think more deeply about the headline’s veracity. To test this, we examined the effect of incentives with and without source cues (e.g., a URL name such as “foxnews.com”) present beside the headlines (see Methods for more details). This study had four conditions: accuracy incentives and control conditions with sources, and accuracy incentives and control conditions without sources.
Replicating the results from Experiments 1 and 2, accuracy incentives significantly improved truth discernment, F(1, 917) = 4.44, p = 0.04, η2G = 0.01. Additionally, there was a significant impact of source cues on truth discernment such that source cues improved accuracy, F(1, 917) = 8.88, p = 0.003, η2G = 0.01. However, there was no significant interaction between the accuracy incentive condition and source cues (p = 0.284). Post-hoc tests revealed that truth discernment was marginally higher in the accuracy incentive (with sources) condition (M = 2.76, 95% CI = [2.44, 3.09]) compared to the control (with sources) condition (M = 2.30, 95% CI = [2.00, 2.58]), p = 0.110, d = 0.20. Without sources, truth discernment was not higher in the accuracy incentives condition (M = 2.15, 95% CI = [1.86, 2.45]) compared to the control condition (without sources) (M = 2.00, 95% CI = [1.73, 2.26]), p = 0.888, d = 0.07.
Also replicating Experiments 1 and 2, accuracy incentives reduced partisan bias, F(1, 917) = 18.21, p < 0.001, η2G = 0.02. Source cues did not impact partisan bias (p = 0.931), and there was no interaction between source cues and accuracy incentives on partisan bias (p = 0.923). Post-hoc tests found that partisan bias was 33% lower in the accuracy incentives condition (with sources) (M = 1.17, 95% CI = [0.90, 1.45]) compared to the control condition (with sources) (M = 1.75, 95% CI = [1.48, 2.01]), p = 0.017, d = 0.29. Interestingly, without sources present beside the headlines, partisan bias was also lower in the accuracy incentives condition (M = 1.15, 95% CI = [0.87, 1.42]) compared to the control condition (M = 1.75, 95% CI = [1.48, 2.01]), p = 0.012, d = 0.27. Thus, even without sources present beside the headlines, people may have tried to increase their accuracy by responding to the content of the headlines in a less biased manner.
As in Experiment 2, there was no significant impact of accuracy incentives on sharing discernment (p = 0.906). However, there was an effect of source cues on sharing discernment such that source cues improved sharing discernment, F(1, 917) = 4.92, p = 0.027, η2G = 0.01. There was no interaction between accuracy incentives and source cues on sharing discernment (p = 0.124).
Replicating the results of Experiments 1 and 2, follow-up analysis (Supplementary Materials S1) found a difference in the percentage of incongruent true headlines rated as accurate in the accuracy incentive (with sources) condition (M = 51.20, 95% CI = [47.28, 55.12]) versus the control (with sources) condition (M = 39.47, 95% CI = [35.69, 43.34]), p < 0.001, d = 0.39. However, without sources present beside the headlines, there was no difference in the percentage of incongruent true headlines rated as accurate between the accuracy and control conditions (p = 0.605). Importantly, there was a significant interaction between source cues and accuracy incentives on the percentage of incongruent true headlines rated as accurate, F(1, 917) = 4.71, p = 0.030, η2G = 0.00. Together, these results indicate that the effects of accuracy incentives at least partially depend on the partisan-leaning source cues present on posts.
Incentives Had Larger Effects for Conservatives
We pooled data from all three experiments to conduct an integrative data analysis (IDA)47, giving us greater power to test a number of potential moderators. For the IDA, we only used the 16 news headlines that appeared in all three studies, and only included the accuracy and control conditions common to all three, leaving a final sample size of 1,428. Political ideology (liberal vs. conservative) was a significant moderator of the incentive effect on truth discernment, p = 0.039, partisan bias, p = 0.026, and belief in incongruent true news, p = 0.019, such that the effects of incentives were considerably larger for conservatives than liberals. The mean effect sizes for liberals and conservatives are shown separately in Table 1. The effect of the incentives on truth discernment was not moderated by cognitive reflection, political knowledge, or affective polarization (ps > 0.327), which conflicts with prior studies suggesting that more politically-knowledgeable30 or reflective48 people might be more susceptible to motivated reasoning.
Table 1: Mean Effect Sizes for All Participants and Separately for Liberals and Conservatives

Variable                        Cohen's d    95% CI          p
All Participants (n = 1,428)
  Truth Discernment             0.29         [0.18, 0.39]    < 0.001
  Partisan Bias                 0.26         [0.16, 0.37]    < 0.001
  Incongruent True News         0.47         [0.37, 0.58]    < 0.001
Liberals (n = 822)
  Truth Discernment             0.20         [0.06, 0.34]    0.005
  Partisan Bias                 0.19         [0.05, 0.32]    0.008
  Incongruent True News         0.38         [0.24, 0.52]    < 0.001
Conservatives (n = 606)
  Truth Discernment             0.45         [0.28, 0.61]    < 0.001
  Partisan Bias                 0.36         [0.20, 0.52]    < 0.001
  Incongruent True News         0.62         [0.45, 0.78]    < 0.001
Note. Effects of the incentives shown for all participants and shown separately for liberals and conservatives. Political orientation moderated the effect of the incentives on truth discernment, partisan bias, and incongruent true news such that the effect was larger for conservatives than liberals.
Incentives Closed the Gap in Accuracy Between Liberals and Conservatives
Replicating prior work19–24, conservatives were worse at truth discernment. Unincentivized conservatives answered about 9.31 (out of 16) questions correctly, while unincentivized liberals answered 10.93 – a 1.62-point difference, 95% CI = [1.30, 1.93], t(686.84) = 10.09, p < 0.001, d = 0.75. But when conservatives were incentivized to be accurate, they answered 10.29 questions correctly, shrinking the gap between incentivized conservatives and unincentivized liberals to a mere 0.64 points, 95% CI = [0.29, 0.98], t(606.48) = 3.63, p < 0.001, d = 0.28. In other words, simply paying conservatives less than a dollar closed the gap in performance between conservatives and (unincentivized) liberals by 60%. This suggests that much of US conservatives’ poorer truth discernment may reflect a lack of motivation more than a lack of knowledge or ability.
Conservatives also showed more partisan bias than liberals: partisan bias was 2.55 points for unincentivized conservatives and 1.22 points for unincentivized liberals – a 1.33-point difference, 95% CI = [1.03, 1.63], t(579.27) = 8.63, p < 0.001, d = 0.66. Yet this difference shrank to 0.52 points when conservatives were incentivized to be accurate, 95% CI = [0.21, 0.83], t(539.95) = 3.29, p = 0.001, d = 0.26. In other words, while conservatives initially expressed more partisan bias, incentives for accuracy closed this gap by 61% (Fig. 2).
Relative Importance of Accuracy Incentives
In each experiment, we measured other variables known to be predictive of truth discernment, such as cognitive reflection, political knowledge, and partisan animosity, as well as demographic variables such as age, education, and gender. We ran a multiple regression analysis on the IDA with all of these variables included in the model (Fig. 3, Panel A). To compare the relative importance of each of these predictors, we also ran a relative importance analysis using the “lmg” method49, which calculates the relative contribution of each predictor to the R2 (Fig. 3, Panel B). Full models and relative importance analyses are in Supplementary Materials S3 and S4.
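The “lmg” method (implemented in R’s relaimpo package) averages each predictor’s incremental contribution to R2 over all orders in which predictors can enter the model. A minimal Python sketch of that idea, on synthetic data rather than the study’s variables, might look like this:

```python
import itertools

import numpy as np

def r_squared(X, y, cols):
    """R^2 from an OLS fit of y on the given predictor columns (plus intercept)."""
    Z = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return 1.0 - resid.var() / y.var()

def lmg(X, y):
    """Average each predictor's incremental R^2 over all entry orderings."""
    p = X.shape[1]
    contrib = np.zeros(p)
    orders = list(itertools.permutations(range(p)))
    for order in orders:
        entered, prev = [], 0.0
        for j in order:
            # Incremental R^2 gained when predictor j enters the model
            entered.append(j)
            cur = r_squared(X, y, entered)
            contrib[j] += cur - prev
            prev = cur
    return contrib / len(orders)
```

Within each ordering the incremental contributions telescope to the full-model R2, so the averaged per-predictor contributions also sum to the full-model R2 – this is what makes the decomposition comparable across predictors. (Exhaustive enumeration of orderings is only feasible for a modest number of predictors.)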
Accuracy incentives remained an important predictor of truth discernment, B = 0.70, 95% CI = [0.47, 0.93], p < 0.001, partisan bias, B = -0.54, 95% CI = [-0.74, -0.34], p < 0.001, and belief in incongruent true news, B = 0.58, 95% CI = [0.46, 0.71], p < 0.001, even when controlling for all other relevant variables. While confidence intervals were large and overlapping in the relative importance analysis (see Supplementary Materials S4), political conservatism was the most important variable in predicting truth discernment and partisan bias, whereas the accuracy incentive was the most important predictor of the number of incongruent true news items rated as accurate. These results stand in contrast to accounts of fake news belief that downplay the roles of motivation and partisanship6,41.