Accuracy and Social Incentives Shape Belief in (Mis)Information

Liberals and conservatives are divided in their judgements about the accuracy of true and false news. Yet it is unclear whether this partisan divide reflects genuine differences in knowledge, or whether it can be overcome if people are motivated to be accurate. Across three experiments (n = 2,381), we motivated participants to be accurate by giving them a small financial incentive to provide correct responses about the veracity of news headlines. This incentive improved accuracy and reduced partisan bias in belief in news headlines, especially for conservative participants. Increasing social motivations, however, decreased accuracy. Replicating prior work, conservatives were substantially less accurate than liberals at discerning true from false headlines. Yet, this gap between liberals and conservatives closed by 60% when conservatives were motivated to be accurate. Altogether, these findings suggest that many instances of belief in (mis)information may reflect a lack of motivation to be accurate instead of a simple lack of knowledge.


Misinformation, which can refer to fabricated news stories, false rumors, conspiracy theories, or coordinated disinformation campaigns, is a serious threat to society and democracy 1,2 . It can undermine trust in fair elections 3 , reduce support for climate change 4 , and increase vaccine hesitancy 5 .
Thus, there has been a growing interest in understanding the psychology of belief in misinformation and how to mitigate its spread 1,2,6,7 .
There is a substantial partisan divide in how people judge information to be true or false. People are much more likely to believe news with politically-congruent content [8][9][10][11] or news that comes from politically-congruent sources 12 . Additionally, when asked for the first association that comes to mind when they hear the term "fake news," US Republicans tend to say "CNN" and US Democrats tend to say "Fox News," 13 revealing that even the definition of fake news seems to be influenced by partisan affinities.
However, it is unclear why this partisan divide in belief exists. One potential explanation for this finding is that people tend to engage in politically-motivated cognition 14,15 . In other words, while people are motivated to be accurate, they also have social goals (e.g., group belonging, status, etc.) for holding certain beliefs that can interfere with accuracy goals 8 . Another potential explanation is that partisans simply have different pre-existing knowledge, or different prior beliefs, as a result of exposure to different partisan news outlets and social media feeds 6 . Given that partisans are exposed to different information, it is challenging to differentiate between these two explanations unless accuracy or social motivations are experimentally manipulated [16][17][18] .
Several studies have also found that US conservatives and Republicans tend to believe in and share far more misinformation than US liberals [19][20][21][22][23][24] , and a similar pattern appears to exist in many other countries 25 . This finding presents another puzzle: one interpretation behind this asymmetry is that conservatives are exposed to more low-quality information and thus have less accurate political knowledge, perhaps due to conservative politicians and news media sources sharing less accurate information. For instance, one study estimated that former US President Trump was the largest source of coronavirus misinformation during early stages of the pandemic 26 . Another interpretation again focuses on motivation, suggesting that conservatives may, in some contexts, have greater motivations to believe ideologically or identity-consistent claims that could interfere with their motivation to be accurate 27,28 .
Yet, again, it is difficult to disentangle the causal role of motivation versus prior knowledge without experimentally manipulating motivations.
To address these questions, we examine the causal role of accuracy motivations in shaping judgements of true and false political news via the provision of financial incentives for accurate responses. Prior research on the effect of financial incentives for accuracy has yielded mixed results, leaving many open questions. For example, previous studies have found that financial incentives to be accurate can reduce partisan bias about politicized issues 29,30 and headlines 31 , or improve accuracy about scientific information 32 . However, another study found that incentives for accuracy can backfire, slightly increasing belief in fake news 9 . Incentives also do not eliminate people's tendency to view familiar statements 33 or positions for which they advocate 34 as more accurate, raising questions as to whether incentives can override the heuristics people use to judge truth 35 . These conflicting results motivate the need for a systematic investigation of when and for whom accuracy motivations influence belief.
We also examined whether social motivations to identify posts that will be liked by one's in-group interfere with accuracy motivations. On social media, content that fulfills social-identity motivations, such as expressions of out-group derogation, tends to receive higher engagement 36 . False news stories may be good at fulfilling these social motivations, as false content is often negative about outgroup members 19,37 . The incentive structure of the social media environment draws attention to social motivations (e.g., receiving social approval in the form of likes and shares), which may lead people to give less weight to accuracy motivations online 38,39 .
We also compare the effect of accuracy motivations to the effects of other factors known to be associated with truth discernment. For instance, one account of fake news sharing suggests that people are "lazy, not biased" and that factors such as analytic thinking 40 and inattention 41 matter more than motivated reasoning and partisanship. Other work has identified political knowledge 42 , media literacy skills 43 , and affective polarization 37 as important predictors of the belief in and sharing of news. We measure and compare the relative importance of each of these factors in explaining fake news belief 2 .
Overview

Across three pre-registered experiments, including a replication with a nationally representative US sample, we test whether (A) incentives to be accurate improve people's ability to discern between true and false news and (B) reduce partisan bias (Experiment 1). Additionally, we test whether (C) social incentives to identify posts that appeal to one's in-group (mirroring the incentives of social media) reduce accuracy, even if paired with accuracy incentives (Experiment 2). Further, (D) to test a key psychological process underlying our results, we examine whether the effects of incentives dissipate when partisan source cues are removed from posts (Experiment 3). Finally, in an integrative data analysis, we conducted a high-powered test to (E) see if these effects are moderated by political ideology, (F) examine whether motivation helps explain the gap in accuracy between conservatives and liberals, and (G) compare the effects of motivation to the effects of other variables known to predict misinformation susceptibility.

In Experiment 1, we recruited a politically-balanced sample of US participants via Prolific Academic 44 . Participants were shown 16 pre-tested news headlines with an accompanying picture and source (similar to how a news article preview would show up on someone's Facebook feed). Eight headlines (four false and four true) were rated as more accurate by Democrats than Republicans in a pre-test, and eight headlines (four false and four true) were rated as more accurate by Republicans than Democrats in a pre-test. After seeing each headline, participants were asked "To the best of your knowledge, is the claim in the above headline accurate?" and were then asked "If you were to see the above article on social media, how likely would you be to share it?" See Methods for more details.
Half of the participants were randomly assigned to the accuracy incentives condition. In this condition, participants were told they would receive a small bonus payment of up to one dollar based on how many correct answers they could provide regarding the accuracy of the articles. The other half of participants were assigned to a control condition in which they were asked the same questions about accuracy and sharing without any incentive to be accurate.
We first examined whether accuracy incentives improved truth discernment, or the number of true headlines participants rated as true minus the number of false headlines participants rated as true 10 . As predicted, truth discernment was higher in the accuracy incentives condition than in the control condition.

We next examined whether incentives decreased partisan bias, or the number of politically-congruent headlines participants rated as true minus the number of politically-incongruent headlines participants rated as true. As predicted, partisan bias in accuracy judgements was 31% smaller in the accuracy incentives condition than in the control condition. When controlling for multiple comparisons with Tukey HSD post-hoc tests, incentives increased the percentage of politically-incongruent true headlines rated as accurate (p < 0.001, d = 0.43), but had no effect on politically-incongruent false news (p = 0.444), politically-congruent false news (p = 0.999), or politically-congruent true news (p = 0.472). In other words, the effect of the incentives was driven by an increase in belief in news from the other side (e.g., Republicans saying news from "The New York Times" is more accurate, or Democrats saying news from "Fox News" is more accurate). Results from all three studies are plotted in Fig. 1.

We also examined whether the incentive influenced sharing discernment, or the number of true headlines shared minus the number of false headlines shared. Interestingly, even though better sharing behavior was not incentivized, sharing discernment was slightly higher in the accuracy incentive condition.

In Experiment 2, we aimed to replicate and extend the results of Experiment 1 by examining whether social motivations to identify articles that would be liked by one's in-group (like those present on social media) might interfere with accuracy motives. We recruited another politically-balanced sample of 998 participants (463 M, 505 F, 30 transgender/non-binary/other; age: M = 36.17, SD = 13.94; politics: 568 liberals, 430 conservatives) via Prolific Academic.
In addition to the accuracy incentives and control condition, we added a social incentives condition, whereby participants were given a financial incentive to correctly identify articles that would appeal to members of their own political party. Specifically, participants were told that they would receive a bonus payment of up to one dollar based on how accurately they identified articles that would be liked by members of their political party if they shared them on social media. Immediately after answering this question, participants were then asked about the accuracy of the article and how likely they would be to share it. In a final condition, called the mixed incentives condition, participants received a financial incentive of up to one dollar to identify articles that would be liked by one's in-group, followed by an additional financial incentive to identify accurate articles; in other words, people had mixed motivations.
We first examined how these motivations influenced truth discernment. Replicating the results of Experiment 1, there was a significant main effect of the accuracy incentives on truth discernment, F(1, 994) = 29.14, p < 0.001, η²G = 0.03, a significant main effect of the social incentives on truth discernment, F(1, 994) = 7.53, p = 0.006, η²G = 0.01, but no interaction between the accuracy and social incentives (p = 0.237). Tukey HSD post-hoc tests indicated that truth discernment was higher in the accuracy incentives condition (M = 3.00) than in the control condition, p = 0.029, d = 0.27. However, the mixed incentives condition did not differ from the control condition (p = 0.676), indicating that the social motivations may have interfered with accuracy concerns. Taken together, these results suggest that accuracy motivations increase truth discernment, but social motives can override accuracy motives.
We then examined how these motives influenced partisan bias. Replicating the results from Experiment 1, there was a significant main effect of accuracy incentives on partisan bias, F(1, 994) = 9.01, p = 0.003, η²G = 0.01, but no effect of the social incentives, F(1, 994) = 0.60, p = 0.441, η²G = 0.00, or of the interaction between accuracy and social incentives, F(1, 994) = 0.27, p = 0.606, η²G = 0.00. Post-hoc tests indicated that there was a marginal difference in partisan bias between the accuracy incentives condition and the control condition. Participants in the social incentives conditions also shared more politically-congruent news (true or false) as compared to the control condition, p < 0.001, d = 0.26. Thus, prompting participants to think about whether an article will be liked by their party, whether or not they are also incentivized to be accurate, appears to indiscriminately increase sharing of both true and false news that appeals to one's own political party.

Experiment 3: The Effect of Accuracy Incentives Depends on Source Cues
In Experiment 3, we aimed to replicate our prior findings in a nationally representative sample in the United States and test a potential process behind the effects of accuracy incentives. Specifically, we recruited a sample of 921 participants via Prolific Academic that was quota-matched to the national distribution on age, gender, ethnicity, and political party (439 M, 470 F, 12 transgender/non-binary/other; age: M = 40.07, SD = 14.67; politics: 542 liberals, 379 conservatives). Since prior work has found strong effects of source cues 12 and party cues 45 more broadly, we suspected that people were responding to source cues when making judgements about news. Since true news often contains more recognizable sources with partisan connotations (e.g., "nytimes.com" as opposed to the fake news website "yournewswire.com") 46 , this may explain why incentives only impacted belief in true news. Accuracy incentives may have caused people to override a judgement based on a partisan source cue and think more deeply about the headline's veracity. To test this, we examined the effect of incentives with and without source cues (e.g., a URL name such as "foxnews.com") present beside the headlines (see Methods for more details). This study had four conditions: an accuracy incentives condition and a control condition (with sources), and an accuracy incentives condition and a control condition (without sources). With sources present, accuracy incentives once again increased the percentage of politically-incongruent true headlines rated as accurate. However, without sources present beside the headlines, there was no difference in the percentage of incongruent true headlines rated as accurate between the accuracy and control conditions (p = 0.605). Importantly, there was a significant interaction between source cues and accuracy incentives on the percentage of headlines rated as accurate, F(1, 917) = 4.71, p = 0.030, η² = 0.00. Together, these results indicate that the effects of accuracy incentives at least partially depend on the partisan-leaning source cues present on posts.

Incentives Had Larger Effects for Conservatives
We pooled data from all three studies to conduct an integrative data analysis (IDA) 47 to have more power to test for a number of potential moderators. For the IDA, we only used the 16 news headlines that were used in all three studies, and only included the accuracy and control conditions that were used in all three studies, leaving a final sample size of 1,428. Political ideology (liberal vs. conservative) was a significant moderator of the effect of incentives on truth discernment, p = 0.039, partisan bias, p = 0.026, and belief in incongruent true news, p = 0.019, such that the effects of incentives were considerably larger for conservatives than liberals. The mean effect sizes for liberals and conservatives are shown separately in Table 1. The effect of the incentives on truth discernment was not moderated by cognitive reflection, political knowledge, or affective polarization (ps > 0.327), which conflicts with prior studies suggesting that more politically-knowledgeable 30 or reflective 48 people might be more susceptible to motivated reasoning.

Incentives Closed the Gap in Accuracy Between Liberals and Conservatives
Replicating prior work 19 , unincentivized conservatives were substantially less accurate than unincentivized liberals at discerning true from false headlines. However, accuracy incentives closed this gap in truth discernment between (incentivized) conservatives and (unincentivized) liberals by 60%. This suggests that a substantial amount of US conservatives' poorer truth discernment may be due to lack of motivation more than lack of knowledge or ability.
Conservatives also showed more partisan bias than liberals: partisan bias was 2.55 points for unincentivized conservatives, compared to a substantially lower level for unincentivized liberals. In other words, while conservatives initially expressed more partisan bias, incentives for accuracy closed this gap in partisan bias by 61% (Fig. 2).

Relative Importance of Accuracy Incentives
In each experiment, we measured other variables known to be predictive of truth discernment, such as cognitive reflection, political knowledge, partisan animosity, as well as demographic variables, such as age, education, and gender. We ran a multiple regression analysis on our IDA with all of these variables included in the model (Fig. 3, Panel A). To compare the relative importance of each of these predictors, we also ran a relative importance analysis using the "lmg" method 49 , which calculates the relative contribution of each predictor to the R 2 (Fig. 3, Panel B). Full models and relative importance analyses are in Supplementary Materials S3 and S4.
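To illustrate the logic of the "lmg" method, the following sketch decomposes the model R² for the two-predictor case by averaging each predictor's incremental R² over both possible orders of entry. (Our analyses used the R implementation cited above; this standalone Python version with made-up variable names is only meant to show the idea, and the general method averages over all orderings of all predictors.)

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lmg_two_predictors(y, x1, x2):
    """Split the R^2 of y ~ x1 + x2 into per-predictor shares by
    averaging each predictor's incremental R^2 over both orders of
    entry: the two-predictor special case of the 'lmg' method."""
    r1, r2, r12 = pearson(x1, y), pearson(x2, y), pearson(x1, x2)
    r2_x1, r2_x2 = r1 ** 2, r2 ** 2          # single-predictor R^2s
    # Standard formula for the two-predictor R^2
    r2_full = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    lmg1 = (r2_x1 + (r2_full - r2_x2)) / 2   # average over entry orders
    lmg2 = (r2_x2 + (r2_full - r2_x1)) / 2
    return lmg1, lmg2, r2_full
```

By construction the shares sum exactly to the full-model R², which is what makes the decomposition useful for comparing predictors.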

Discussion
Across three studies, including a replication with a nationally representative sample, we find that increasing people's motivation to be accurate via a small financial incentive of up to one dollar improved accuracy in discerning between true and false news, and decreased the partisan divide in belief in news by about 30%. These effects were driven by an increased belief in politically-incongruent true news (e.g., "Fox News" for Democrats or the "New York Times" for Republicans). Furthermore, providing people with an incentive to identify articles that would be liked by their political in-group (Experiment 2) reduced truth discernment and increased intentions to share politically-congruent true and false news. Thus, social goals to share articles that will be liked by one's in-group appear to interfere with accuracy goals. These effects were largely dependent on partisan source cues being present on the articles (Experiment 3), suggesting that people are primarily dismissing news from the other side because it comes from politically-incongruent sources. 12,13 These results suggest that the partisan divide in belief in news does not simply reflect different factual knowledge or prior beliefs due to selective exposure to different media sources 6 . Instead, a substantial amount of this partisan divide can be attributed to lack of motivation, or more specifically, a motivated dismissal of true news from the opposing party. While it is often difficult to determine whether partisan differences reflect differing prior beliefs versus politically-motivated reasoning, 16,17 our studies provide causal evidence for the influence of both accuracy and social motivations on belief. The results also reveal that motivation does not fully explain the partisan differences in belief: accuracy incentives only increased belief in true news from the opposing party, but did not influence belief in false news, which people are exposed to much less frequently than true news 21 .
This concords with other work finding that motivated cognition is constrained by people's ability to justify seemingly reasonable conclusions 14 .
While numerous studies have identified that conservatives tend to be worse at discerning fake news than liberals [19][20][21][22][23][24] , our studies find that the gap between liberals' and conservatives' truth discernment closes by 60% when conservatives are motivated to be accurate. Thus, conservatives may not fall for fake news simply because of lack of knowledge or ability, but also because they have different motivations 27,28 . Future work should examine whether this asymmetry arises due to the specific dynamics of partisan identity in the United States, or if it reflects more general differences in thinking styles 27,50 , such as the tendency for conservatives across cultures to endorse more conspiracy theories 25 .
These results shed light on various theoretical accounts of belief in misinformation. While factors such as political knowledge and analytic thinking were predictive of truth discernment, supporting prior work 40,51 , factors such as motivation and ideology were at least as important, contradicting prior claims that partisan bias and motivation do not play a strong role in the sharing and belief of misinformation. 6,41 It appears that theoretical models of misinformation belief and dissemination will ultimately need to integrate numerous different factors to fully understand why people spread false news 2 .
These results also have practical implications for interventions targeting the belief and sharing of misinformation. Accuracy incentives improved the accuracy of beliefs, but they did not improve the quality of articles shared, illustrating a disconnect between accuracy judgements and sharing behavior 41,52 . In contrast, social incentives increased the sharing of politically-congruent false (and true) news. Thus, increasing accuracy motivations may be useful for interventions aimed at improving the accuracy of beliefs, but this may not spill over into sharing behavior, 52-54 since sharing information online may reflect different motivations than accuracy alone (such as social goals) 38,55,56 . Overall, this suggests that misinformation interventions may need to target belief and sharing behavior separately. While interventions targeting belief can increase motivations to be accurate, interventions targeting social media sharing may be more effective if they focus on decreasing motivations to share false content that receives high social reward, such as content that is negative about one's out-group 36 .
One limitation of this work is that our findings could be subject to multiple interpretations. For instance, people may be guessing what they think fact-checkers believe, or incentives could be inhibiting motivated responding (the tendency for people to purposely give the wrong answer out of support for their own party) 57,58 . However, self-reported data indicated that extremely few participants (3%) reported providing answers they did not believe just to receive money. Similarly, very few participants (5%) admitted to providing false answers just to express support for their own party (see Supplementary Appendix S1 for extended analysis). Thus, these interpretations are unlikely to explain the full pattern of results. Moreover, participants spent more time on each news headline when incentivized, suggesting that incentives likely increased effort toward discerning the accuracy of each headline (see Supplementary Appendix S1). Another potential objection is that our results were stimuli-dependent 59 . To address this, we conducted analyses separately for each individual true and false news headline (see Supplementary Materials S2). The effects were remarkably consistent: incentives increased belief in all eight true items, and did not decrease belief in a single false item.

Conclusions
There is a large partisan divide in the kind of news liberals and conservatives believe in, and conservatives tend to believe in more false news than liberals. Yet, these differences are not immutable. Motivating people to be accurate via a small financial incentive improves accuracy about the veracity of news headlines, reduces the partisan divide in belief, and closes much of the gap between conservatives and liberals in belief in misinformation. Theoretically, these results identify accuracy and social motivations as key factors in driving the belief and sharing of news. Practically, these results suggest that shifting motivations may be essential for misinformation-reduction interventions.

Methods
We report how we determined our sample size, all data exclusions, all manipulations, and all measures in the experiment. The research methods were approved by the University of Cambridge Psychology Ethics Committee (Protocol #: PRE.2020.110). These studies were pre-registered. Stimuli, surveys, data, code, and all pre-registrations are available on our OSF page: https://osf.io/75sqf/?view_only=623d654c17b94d958a0857c06b181073. Analyses were conducted using R version 4.0.1.
The experiment was run on November 30, 2020. We recruited 500 participants via the survey platform Prolific Academic 44 . Specifically, we recruited 250 conservative participants and 250 liberal participants from the US via Prolific Academic's demographic pre-screening service to ensure the sample was politically balanced. Our a priori power analysis indicated that we would need 210 participants to detect a medium effect size of d = 0.50 at 95% power, though we doubled this sample size to account for partisan differences and oversampled to account for exclusions. 511 participants took our survey. Following our pre-registered exclusion criteria, we excluded 32 participants who failed our attention check (or did not get far enough in the experiment to reach our attention check), and an additional 17 participants who said they responded randomly at some time during the experiment. This left us with a total of 462 participants.

Materials.
The materials were 16 pre-tested true and false news headlines from a large pre-tested sample of 225 news headlines 60 . In total, eight of these news headlines were false, and eight of the news headlines were true. Because we were interested in whether accuracy incentives would reduce partisan bias, we specifically selected headlines that had a sizable gap in perceived accuracy between Republicans and Democrats as reported in the pre-test, as well as headlines that were not outdated (the pre-test was conducted a few months before the first experiment). Specifically, we chose eight headlines (four false and four true) that Democrats rated as more accurate than Republicans in the pre-test, and eight headlines (four false and four true) that Republicans rated as more accurate than Democrats. See Supplementary Materials S5 for example stimuli and the OSF page for full materials.
Procedure

News Evaluation Task.
Participants were shown these 16 news headlines, along with an accompanying picture and source (similar to how a news article preview would show up on someone's Facebook feed), and asked "To the best of your knowledge, is the claim in the above headline accurate?" on a scale from 1 ("extremely inaccurate") to 6 ("extremely accurate"). Afterwards, they were asked "If you were to see the above article on social media, how likely would you be to share it?" on a scale from 1 ("extremely unlikely") to 6 ("extremely likely").

Accuracy Incentives Manipulation.
Half of the participants were randomly assigned to a control condition, in which we explained the news evaluation task, but we did not provide any information about a bonus payment. The other half were assigned to an accuracy incentive condition. In this condition, we explained the news evaluation task, and then told participants they would receive a "bonus payment of up to $1.00 based on how many correct answers [they] provide regarding the accuracy of the articles. Correct answers are based on the expert evaluations of non-partisan fact-checkers." Specifically, they received one dollar for answering 15 out of 16 questions correctly, and fifty cents for answering 13 out of 16 questions correctly. Since we measured accuracy on a continuous scale, we told participants that "if the headline describes a true event, either 'slightly accurate,' 'moderately accurate,' or 'extremely accurate' constitute correct responses. Similarly, if the headline describes a false event, either 'extremely inaccurate,' 'moderately inaccurate,' or 'slightly inaccurate' constitute 'correct' responses." In other words, the continuous scale was scored dichotomously for the purposes of awarding financial incentives. Participants were also notified that all other questions would not affect their bonus payment. See Supplementary Materials S6 or the OSF page for the full manipulation text.
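The scoring and payout rule above can be summarized in a short sketch. This is illustrative only: the function name is ours, and we assume the payout thresholds mean "at least" 15 and 13 correct answers.

```python
def accuracy_bonus(ratings, truth):
    """Score 6-point accuracy ratings dichotomously and return the
    bonus in dollars, following the payout rule described in the text
    (assumed to mean at least 15 / at least 13 of 16 correct).
    ratings: 1-6 scale (1 = extremely inaccurate, 6 = extremely accurate)
    truth:   True if the headline is actually true, False otherwise."""
    correct = sum(
        (rating >= 4) == is_true   # ratings of 4-6 count as "accurate"
        for rating, is_true in zip(ratings, truth)
    )
    if correct >= 15:
        return 1.00
    if correct >= 13:
        return 0.50
    return 0.00
```

For example, a participant who rates all eight true headlines as accurate and all eight false headlines as inaccurate earns the full dollar, while twelve or fewer correct answers earn nothing.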
Other Measures.
We gave participants a 3-item cognitive reflection task 40 . We measured participants' political knowledge using a 5-item scale 37 and in-group love/out-group hate with feeling thermometers 61 . See Supplementary Materials S7 and the OSF page for question text. These measures were repeated across all studies.

Analysis.
For truth discernment, partisan bias, and sharing discernment, independent samples t-tests were used. While we asked participants to rate the truth of headlines on a continuous scale, these variables were recoded as dichotomous for analysis because the financial incentive only rewarded participants based on whether they correctly identified a headline as true or false (we note that this recoding was not clearly specified in our pre-registration). To test what types of headlines were affected by the incentives, we ran a 2 (accuracy incentive vs. no incentive) X 2 (politically congruent vs. politically incongruent) X 2 (true headlines vs. false headlines) mixed-design ANOVA with the percent of articles rated as accurate as the dependent variable, and then followed up with Tukey HSD post-hoc tests. Extended analyses are in Supplementary Materials S1.
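As a concrete sketch of the dichotomous outcome measures, the following illustrative function computes truth discernment and partisan bias from recoded responses. The field names are ours, not the study's actual coding.

```python
def discernment_and_bias(responses):
    """Compute truth discernment (true headlines rated true minus false
    headlines rated true) and partisan bias (congruent headlines rated
    true minus incongruent headlines rated true) for one participant.
    `responses` is a list of dicts with illustrative boolean keys:
    'rated_true', 'is_true', and 'congruent'."""
    rated_true = [r for r in responses if r["rated_true"]]
    true_rated_true = sum(r["is_true"] for r in rated_true)
    false_rated_true = sum(not r["is_true"] for r in rated_true)
    congruent_rated_true = sum(r["congruent"] for r in rated_true)
    incongruent_rated_true = sum(not r["congruent"] for r in rated_true)
    discernment = true_rated_true - false_rated_true
    partisan_bias = congruent_rated_true - incongruent_rated_true
    return discernment, partisan_bias
```

Both scores depend only on which headlines were rated true, mirroring the dichotomous recoding described above.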
The experiment was launched on January 22, 2021. We aimed to recruit 1,000 total participants (250 per condition) via the survey platform Prolific Academic, though we over-sampled and recruited 1,100 to account for exclusion criteria. We chose this sample size because a power analysis revealed that we needed at least 216 participants per condition to detect the smallest effect size (d = 0.24) at 80% power using a one-tailed t-test (although two-tailed tests were used for all analyses). Once again, we used Prolific's pre-screening platform to recruit 550 liberals and 550 conservatives from the United States, and 1,113 participants took our survey. Following our pre-registered exclusion criteria, we excluded 76 participants who failed our attention check (or did not finish enough of the survey to reach the attention check) and an additional 39 participants who said they responded randomly at some point during the experiment. This left us with a total of 998 participants (463 M, 505 F, 30 transgender/non-binary/other; age: M = 36.17, SD = 13.94; politics: 568 liberals, 430 conservatives). This experiment was also pre-registered (pre-registration available here: https://aspredicted.org/blind.php?x=/FKF_15L).
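The sample-size logic above can be approximated with the standard normal-approximation formula for a two-sample t-test. This is only a sketch: the normal approximation slightly undershoots the exact t-based calculation, which gives the 216 per condition reported above.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80, tails=1):
    """Approximate per-group sample size for an independent-samples
    t-test via n = 2 * ((z_alpha + z_beta) / d)^2; slightly below the
    exact t-based value."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / tails)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Smallest effect of interest in Experiment 2: d = 0.24, one-tailed, 80% power
print(n_per_group(0.24))  # 215 per condition (exact t-test: 216)
```

The same formula with d = 0.50, 95% power, and a two-tailed test gives roughly 104 per group, consistent with the 210 total reported for Experiment 1.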

Social Incentives & Mixed Incentives Manipulations.
In the new social incentives condition, participants were first asked before the experiment to report the political party with which they identify. Then, they were told that they would receive a bonus payment of up to $1.00 based on how accurately they identified information that would be liked by members of their political party if they shared it on social media. Bonuses were awarded based on how closely participants' answers matched partisan alignment scores from a pre-test 48 . Before each question about accuracy and sharing, participants were asked "If you shared this article on social media, how likely is it that it would receive a positive reaction from [your political party] (e.g., likes, shares, and positive comments)?" In the mixed incentives condition, participants were first given a financial incentive to correctly identify whether the article would be liked by members of their political party, and were then asked about accuracy and given an incentive to identify whether the article was accurate.
Additional Variables.

Analysis.
To understand the impact of accuracy and social incentives on truth discernment and partisan bias, we ran 2 (accuracy incentive vs. no incentive) X 2 (social incentive vs. no incentive) ANOVAs and followed up on the results using Tukey HSD post-hoc tests. To test what types of headlines were affected by the incentives, we ran a 2 (accuracy incentive vs. no incentive) X 2 (social incentive vs. no incentive) X 2 (politically congruent vs. politically incongruent) X 2 (true headlines vs. false headlines) mixed-design ANOVA with the percentage of articles rated as accurate as the dependent variable, and then followed up with Tukey HSD post-hoc tests.
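The 2 X 2 between-subjects decomposition behind these ANOVAs can be made concrete. The paper does not name its statistics software, so the sketch below implements the sums-of-squares arithmetic directly with the standard library for a balanced design; the function and data names are illustrative, not the authors' code.

```python
# Hedged sketch: F statistics for a balanced two-way between-subjects
# ANOVA (two main effects plus their interaction), as in the
# 2 (accuracy incentive) X 2 (social incentive) design described above.
from statistics import mean

def twoway_anova(cells):
    """cells maps (a_level, b_level) -> list of scores; design is balanced."""
    scores = [x for obs in cells.values() for x in obs]
    grand, n = mean(scores), len(scores)
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})

    def level_ss(levels, pick):
        # Sum of squares for one factor: deviations of level means.
        ss = 0.0
        for lev in levels:
            group = [x for key, obs in cells.items()
                     if pick(key) == lev for x in obs]
            ss += len(group) * (mean(group) - grand) ** 2
        return ss

    ss_a = level_ss(a_levels, lambda k: k[0])            # main effect A
    ss_b = level_ss(b_levels, lambda k: k[1])            # main effect B
    ss_cells = sum(len(obs) * (mean(obs) - grand) ** 2
                   for obs in cells.values())
    ss_ab = ss_cells - ss_a - ss_b                       # A X B interaction
    ss_err = sum((x - mean(obs)) ** 2
                 for obs in cells.values() for x in obs)
    df_a, df_b = len(a_levels) - 1, len(b_levels) - 1
    ms_err = ss_err / (n - len(cells))
    return {"F_A": (ss_a / df_a) / ms_err,
            "F_B": (ss_b / df_b) / ms_err,
            "F_AB": (ss_ab / (df_a * df_b)) / ms_err}

# Toy data with a pure main effect of the first factor:
f = twoway_anova({("acc", "soc"): [5, 7], ("acc", "none"): [5, 7],
                  ("ctrl", "soc"): [1, 3], ("ctrl", "none"): [1, 3]})
print(f)  # F_A = 16.0; F_B and F_AB = 0.0
```

In practice one would use a statistics package (which also supplies p-values and the Tukey HSD follow-ups); the point here is only the decomposition of variance that the reported 2 X 2 ANOVAs rest on.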
The experiment was launched on June 13, 2021. We aimed to recruit a nationally representative sample (quota-matched to the US population distribution by age, ethnicity, and gender) of 1,000 participants via the survey platform Prolific. As in Studies 1 and 2, we ensured that the nationally representative sample was politically balanced (half liberal, half conservative). 1,055 total participants took the survey. Then, we once again excluded 95 participants who failed our attention check (or did not make it to that point in the survey), as well as 39 participants who said they were responding randomly at some point in the survey. This left us with a total of 921 participants (439 M, 470 F, 12 transgender/non-binary/other; age: M = 40.07, SD = 14.67; politics: 542 liberals, 379 conservatives). This experiment was also pre-registered (pre-registration available at: https://aspredicted.org/7M2_9K9).

Materials.
We once again used the same 16 pre-tested true and false news headlines, in addition to eight extra true and false news items from the same pre-test. For consistency, we report the results of the 16 news items in the manuscript; the results for the full set of 24 items, which did not change our conclusions, are reported in Supplementary Materials S9.
Manipulations.
In addition to the accuracy incentive and control conditions, participants were assigned to identical accuracy incentive and control conditions without source cues present on the stimuli. In these conditions, the sources (e.g., "nytimes.com") were greyed out, so participants could only make assessments of the stimuli based on the photo and headline alone (see Supplementary Materials S5 for examples).
Analysis.
To understand the impact of accuracy and social incentives on truth discernment and partisan bias, we ran 2 (accuracy incentive vs. no incentive) X 2 (social incentive vs. no incentive) ANOVAs and followed up on the results using Tukey HSD post-hoc tests. To test what types of headlines were affected by the incentives, we ran a 2 (accuracy incentive vs. no incentive) X 2 (social incentive vs. no incentive) X 2 (politically congruent vs. politically incongruent) X 2 (true headlines vs. false headlines) mixed-design ANOVA with the percentage of articles rated as accurate as the dependent variable, and then followed up with Tukey HSD post-hoc tests.

Figure 1
In Study 1, accuracy incentives improved truth discernment and decreased the partisan divide in accuracy judgements, primarily by increasing belief in politically-incongruent true news. Study 2 replicated these findings, but also found that social incentives decreased truth discernment, even when paired with the accuracy incentive (the "mixed" condition). Study 3 further replicated these findings, but also found that they depended on a source being present on the headline. S = source, N = no source, and error bars represent 95% confidence intervals. *** p < 0.001, ** p < 0.01, * p < 0.05.

Figure 2
Conservatives were worse at truth discernment as compared to liberals (Panel A). However, paying conservatives to be accurate closed this gap between conservatives and (unincentivized) liberals by 60%.
It also closed the gap in partisan bias between conservatives and liberals by 61% (Panel B). *** p < 0.001, ** p < 0.01, * p < 0.05.

Figure 3
Panel A shows multiple regression results for the main outcome variables: truth discernment, partisan bias, and belief in incongruent true news. Standardized beta coefficients are plotted for ease of interpretation. Panel B shows variable importance estimates (LMG values) with bootstrapped confidence intervals, examining the estimated percentage contribution of each predictor to the R².