Morality in Minimally Deceptive Environments

Psychologists, economists, and philosophers have long argued that moral behavior is harmed in environments where deception is normative. In this article, we show that individuals making decisions within minimally deceptive environments do not behave more dishonestly than in nondeceptive environments. We demonstrate the latter using an example of experimental deception within established institutions, such as laboratories and institutional review boards. We experimentally manipulated whether participants received information about their deception. Across three well-powered studies, we empirically demonstrate that minimally deceptive environments do not affect downstream dishonest behavior. Only when participants were in a minimally deceptive environment and aware of being observed did their dishonest behavior decrease. Our results show that the relationship between deception and dishonesty might be more complicated than previous interpretations have suggested and expand our understanding of how deception might affect (im)moral behavior. We discuss possible limitations and future directions as well as the applied nature of these findings.


Introduction
Dishonesty, in the forms of fraud, bribing, and cheating, has been found to seriously harm the economy (Cebula & Feige, 2012; Gee & Button, 2019; Warren & Schweitzer, 2019), decrease societal trust (Kirchler et al., 2008; Banerjee, 2016; Butler et al., 2016), escalate corruption (Olsen et al., 2019) with shocking results (Ambraseys & Bilham, 2011), spread further unethical behaviors (Robert & Arnab, 2013), and damage cultural, organizational, and social norms (Gächter & Schulz, 2016). People regularly have opportunities to behave dishonestly (Ariely, 2012). However, these decisions are not made in a vacuum. People typically make moral decisions within an organizational, social, or cultural environment. In other words, moral decisions are normative, which means that they are made in the context of normative principles acknowledged or rejected by others (Habermas, 1990; Stansbury, 2009). Therefore, a critical question is: what happens to our honesty in environments where social norms allow for deception? How does a context that permits or tolerates deception affect our moral judgments and subsequent behavior? Some possible answers to these questions can be drawn from the psychological theory of self-concept maintenance (Mazar et al., 2008; Hilbig & Hessler, 2013; Thielmann & Hilbig, 2019). According to this model, people make decisions regarding their possible engagement in dishonest conduct by balancing two motivations: 1) the desire to maintain a positive self-image as an honest person and 2) the desire to use the means at their disposal (i.e., deception) to maximize personal welfare. That is, the more justifiable a dishonest act is, the easier it is for an individual to sustain a positive self-concept (i.e., a view of oneself as a moral individual), which consequently leads to a greater magnitude of dishonesty. Hence, self-concept maintenance theory predicts that people will engage in dishonesty when it is justifiable (Schweitzer et al., 2002; Shalvi et al., 2012). Following this logic,
we could formulate two opposing hypotheses: 1) people would be less dishonest within an environment in which deception is publicly acknowledged, since they might suspect a lack of anonymity, which could translate to concerns about harming one's positive self-concept due to the risk of being exposed as a dishonest individual, or 2) people would be more dishonest within a deceptive environment, since this environment would provide them with the necessary justification to act dishonestly to increase their utility (e.g., by gaining a higher monetary outcome through cheating). In other words, environments in which deception is allowed (i.e., in which social norms condone deception) make deceptive behavior justifiable because it is normative (socially appropriate behavior; see Köbis et al., 2018). Social norms theory (Cialdini et al., 1990; Bicchieri, 2006) and social exchange theories (Cropanzano & Mitchell, 2005) suggest that people learn about and adapt to social norms by interacting with and reciprocating other people's behavior. Hence, according to the premise of such theories, in environments that permit deception, our own behavior will adapt to reflect this, consequently undermining honesty. In other words, in environments that permit deception, deceptive behavior can be understood by actors as a social norm [1]. Empirical data supporting the influence of this "deceptive social norms hypothesis" can be found in research on collaborative dishonesty and the spread of deception. Specifically, research has suggested that, at times, collaborative settings (Weisel & Shalvi, 2015), market interactions (Falk & Szech, 2013), negotiation cues (Rees et al., 2019), and contexts of commitment (van Baal et al., under review) increase dishonest tendencies. For example, deception has been shown to increase lying (Boles et al., 2000; Croson et al., 2003), decrease trust and ethical behavior (Schweitzer et al., 2006), and has been defined as morally unacceptable to the larger community
(Schweitzer & Gibson, 2008). Similarly, cues of criminal behavior might further encourage unethical behavior (Keizer et al., 2008). The influence of deceptive social norms is further corroborated by research on the contagion of dishonesty, which has found that signals of dishonesty create more dishonesty (Robert & Arnab, 2013), that bribing negatively affects downstream moral behavior (Nichols et al., in review), and that corruption undermines cooperation (Muthukrishna et al., 2017). A meta-analysis by Bellé and Cantarelli (2017) reinforces the robustness of these findings by concluding that increases in unethical behavior are predicted by exposure to the unethical behavior of others.
Beyond small interpersonal (e.g., dyadic) settings, there is research suggesting that larger organizational and cultural settings can influence dishonesty. However, the evidence from these settings seems more mixed. For example, Cohn and colleagues (2014) found that salient cultural cues, particularly participants' professional identity in the banking business sector, undermined honesty norms. However, Rahwan and colleagues (2019) did not find such an effect. Falk and Szech (2013) found that market interactions impair morality (see also Gerlach, 2017), and Ariely and colleagues (2019) extended this evidence to other economic environments, showing that long-term exposure to a specific economic system, such as socialism, can have negative implications for moral behavior. Based on a series of cross-cultural experiments, Gächter and Schulz (2016) reported a link between rule violation in society and individual dishonesty. Additionally, Hugh-Jones (2016) found evidence of a positive relationship between honesty and economic growth, while three other cross-cultural studies found no such link (Pascual-Ezama et al., 2015; Dieckmann et al., 2016; Mann et al., 2016). One such study, however, indicated the possibility that a person's closer network can predict lying tendencies (Mann et al., 2014). Finally, research exploring the connections between social context and moral behavior has found that identity (Dreber & Johannesson, 2008) and other social and religious norms (Mazar & Aggarwal, 2011; Piff et al., 2012; Lang et al., 2016; Mitkidis et al., 2017; Nichols et al., 2020) can lead to differential results in regard to dishonest behavior. Overall, the evidence on larger organizational and cultural settings influencing dishonesty is mixed.
In the same vein, a meta-analytical study (Gerlach et al., 2019) found evidence that some situational factors, such as the experimental setting and normative cues (socially appropriate behavior), might influence dishonest behavior, but also provided intriguing indications regarding the effect of deception on subsequent dishonesty. Specifically, the authors reported that in experimental settings where participants were themselves deceived, they lied less compared to participants who were not deceived. Another recent meta-analytical study found corroborative evidence that experimental deception is associated with less collaborative dishonesty (Leib et al., 2021). These correlational findings might be a natural consequence of the aforementioned evidence of cultural influence on deception and the differential effects observed when studying deception in laboratory settings (for example, see Bonetti, 1998; Ortmann & Hertwig, 2002), with some literature suggesting that there must be a morally undesired effect of laboratory deception on downstream behavior (e.g., Stricker et al., 1967; Christensen, 1977; Jamison et al., 2008; however, see Barrera & Simpson, 2012).
A different, non-mutually exclusive explanation of this correlational finding is that participants exposed to deceptive experimental designs might be suspicious of their personal anonymity. The effects of suspicion on experimental control and on participants' behavior are not negligible (Hertwig & Ortmann, 2008a), and when detection seems possible, participants in deceptive environments might therefore behave more honestly because they fear being exposed as liars (Kimmel, 1996, p. 68; Gneezy et al., 2018). Yet, prior research has found conflicting evidence on this matter, suggesting that participants alter their behavior only when they have specific, as opposed to general, knowledge about experimental deception (Ortmann & Hertwig, 2002; Hertwig & Ortmann, 2008a, 2008b).

Overview of the current research
In this paper, we test whether deceptive environments, as indexed by experimental deception in lab and online settings, have an effect on downstream dishonesty, and whether anonymity (or the lack thereof) affects that behavior. Consistent with prior research, we define deception as the transmission of information that is intentionally erroneous or that intentionally misleads others (Gino & Shea, 2012; Levine & Schweitzer, 2015). Based on that definition of deception, we claim that our environments can be termed minimally deceptive; our deceptive environments are neither necessarily environments permeated by fraudulence and dishonesty, nor environments where unethical behavior is always [2] encouraged or positively reinforced, but environments or situations where the norm is that an individual might be deceived. Such environments appear around us in many contexts and aspects of daily life, such as in therapy and medicine, education, caregiving, and parenting [3], where, for example, white lies might become the rule rather than the exception. We predicted that such deceptive environments negatively affect ethical behavior relative to an environment where deception is not tolerated. We further predicted that being observed, regardless of the environment, would make participants' responses more truthful.
To understand the effect of deceptive environments on ethical behavior, we thus examined the mechanism in a series of laboratory and online experiments, where we manipulated the information participants received about the lab policy/IRB and controlled for 1) the victim of dishonesty, 2) the duration of the behavior, and 3) whether the actual manipulation of deception was recognized by participants. Across three well-powered experiments, we demonstrate that minimally deceptive environments generally do not affect dishonesty, except when participants' individual behavior can be observed. In that case, deceptive environments decrease dishonesty, making people more honest, as the fear of detection is higher. Hence, our findings directly support the notion that experimental designs that utilize deception induce a fear of detection among participants, elicited by a perceived lack of anonymity in such experimental settings.
We began our primary investigations with a laboratory study (Study 1) in which participants were randomly assigned to read and sign one of three versions of a consent form. Crucially, the consent forms in Study 1 either included a description of the laboratory's deception policy or, in the control group, did not describe the laboratory's deception policy. In the treatment conditions, participants learned that the experiment could, or could not, contain misleading information about the experiment. Manipulating awareness that deception could (not) occur in the laboratory allowed us to make more direct inferences about the role of deceptive environments in dishonest behavior. After obtaining participants' consent, participants completed a repeated and incentivized private die-roll task [4] (Fischbacher & Föllmi-Heusi, 2013), wherein they could act dishonestly by overreporting their score, and thus also their earnings, in the game. The results from Study 1 indicate that being in a deceptive environment did not predict consequential cheating behavior. More precisely, the decision to cheat in a die-roll task was unaffected by the presence (or absence) of laboratory policies on deception.
To further investigate and test the robustness of this finding, we report the results from two online studies (Studies 2 and 3), which tested larger samples to replicate and expand on the null effect of experimental deception on dishonest behavior. In Studies 2 and 3, participants were informed that the policy of the Institutional Review Board (IRB) allows experimental deception (vs. prohibits vs. control). In Study 2, we used a similar design to test the robustness of the null finding of Study 1 in regard to cheating behavior in a die-roll task. We also extended our investigation by including one-shot die-roll tasks, allowing us to test for differential effects on dishonest behavior due to potential learning effects in shorter (one-shot) vs. longer-term (repeated; Study 1) tasks. Moreover, we introduced a different task, the sender-receiver game [5] (Gneezy, 2005; Capraro, 2018), to examine dishonesty. In the sender-receiver game, lying victimizes another participant instead of the experimenter (as is the case in our die-roll task).
The results from Study 2 were consistent with those from Study 1; again, we observed no effect of deceptive environments on dishonest behavior.
Finally, in Study 3, we used a similar design as in Studies 1 and 2 and further tested a potential mechanism. Specifically, we included two additional conditions to examine whether cues of anonymity and being observed could affect dishonest behavior. We again informed participants that the policy of the IRB allows for experimental deception (vs. prohibits vs. control) in conditions where their die-roll performance could be observed (vs. unobserved) and then asked them to complete a one-shot die-roll task. We found that informing participants about nondeceptive environments leads to less dishonesty and that when deception is allowed and participants are aware of being observed, honesty increases. Consequently, this echoes previous theoretical accounts (Ayal et al., 2015) and findings (Zhong et al., 2010; Ernest-Jones et al., 2011; Nettle et al., 2012; Pfattheicher & Keller, 2015; Schild et al., 2019) that cues of being observed (i.e., anonymity or visibility; see Abeler et al., 2019) decrease cheating, which suggests that participants are more skeptical about their anonymity when experiencing experimental deception.

Open Science Statement and Ethics
All studies [6] were preregistered, and any deviations from the preregistered protocol are mentioned [7]. All studies followed open science practices: all materials, including the source code, collected anonymized raw and processed data, consent forms, stimuli, and surveys, were shared and made publicly available as online supplementary material (SM) of the project [Anonymized for peer-review] (https://osf.io/ca769/?view_only=c137cb475eb74d3f831544d54b1c97f8). The code for data management and statistical analyses was written in the statistical environment R (version 4.0.3). No conditions or variables were dropped from any analyses we report (except in Study 1, see footnote #10). We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study (see Simmons et al., 2012). All protocols were approved by the IRB at Duke University (#A0642) or the Committee on Health Research Ethics for Region Midtjylland (107/2019 (sagsnr. 1-10-72-1-19)) and the Aarhus University Research Ethics Committee (Journal no.: 2020-0169943, Serial Number: 2020-76).
[1] For a review on social norms as both psychological states and collective constructs, and on how social norms inform action-oriented decision-making, see Legros & Cislaghi, 2020.
[2] Although unethical or selfish behavior is often compensated more highly than ethical behavior.
[3] We further elaborate on these examples in the discussion.
[4] The die-roll task is a commonly used paradigm to measure cheating behavior. In the standard form of the task (one-shot individual decision-making), the participant is asked to roll a six-sided die under a cup (to ensure non-detection) more than once (to test whether the die is fair). However, the participant is instructed to report the number rolled the first time, which determines the participant's payment. In this task, cheating means reporting a higher number than the one actually rolled the first time.
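The incentive structure of this paradigm is easy to simulate. The sketch below is illustrative Python (our analyses were conducted in R), and `cheat_prob`, the share of participants who always report the maximum, is a hypothetical parameter rather than a quantity estimated in our studies:

```python
import random

def simulate_die_roll_task(n_participants, cheat_prob, payoff_per_pip=10, seed=0):
    """Simulate reported pips in a one-shot die-roll task.
    Honest participants report their actual roll; cheaters always report 6."""
    rng = random.Random(seed)
    reports = []
    for _ in range(n_participants):
        roll = rng.randint(1, 6)
        cheats = rng.random() < cheat_prob
        reports.append(6 if cheats else roll)
    # Mean claim in cents; the honest benchmark is 3.5 pips x 10 cents = 35 cents
    return payoff_per_pip * sum(reports) / n_participants
```

Comparing the simulated mean claim against the honest benchmark of 35¢ per roll mirrors how aggregate cheating is inferred in this paradigm: individuals stay anonymous, but the group mean betrays misreporting.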
[5] The sender-receiver game is based on cheap-talk communication and is used to measure lying to another participant (instead of the experimenter). In its most standard form, it involves two players (Player 1 and Player 2): Player 1 receives, from the experimenter, some information about a monetary allocation that is profitable to either Player 1 or Player 2. Player 1 is then asked to report the allocations to Player 2. Player 1 can thus decide to either truthfully or deceitfully convey information about the allocations to Player 2. Finally, Player 2 can decide to believe or distrust the report of Player 1.
[6] In the experimental studies, we employed deception. We argue that our decision is justifiable because it enabled us to maintain experimental control of our manipulation to resolve the debate. Furthermore, the use of deception in our studies did not result in any harm to participants beyond what is to be expected in normal everyday life, which was also highlighted in our consent forms. Participants were fully debriefed.
[7] We did not preregister our use of equivalence testing. That is, we did not preregister that, in the case of null results, we would use two one-sided tests (TOSTs) to test for the absence of an effect against our a priori hypothesized effect size. However, we did so to increase the robustness of our reported results. Additionally, sensitivity power analyses were performed after a revision request. Finally, for Study 1, instead of employing a multivariate generalized linear mixed model, we analyzed the two dependent variables in separate regression models. We opted for a simpler model to more concisely convey the results, as these were consistent with the preregistered multivariate model.

Study 1 Method

Participants and Sample Size Estimation
Two hundred and four adult participants were recruited using the Sona subject pool at Duke University, following our preregistered sample size estimation (see SM) [8]. Our preregistered sample size, estimated by an a priori power analysis performed through a simulation in the statistical environment R, indicated that to have 81.3% power for detecting a medium-sized effect (d = .5; see Cohen, 1988), with an alpha level of .05, for a mixed between-within-subjects design with three conditions and 20 die-roll trials per participant [9], a sample size of 190 participants is required. Following preregistered exclusion criteria, individuals who did not complete the whole study were excluded, resulting in 202 participants remaining for the analysis (mean age = 23.51, SD = 4.41; 41.09% male, 57.92% female, 0.99% other). All participants were compensated for completing the study (avg. $17.98/hr). A sensitivity power analysis for a repeated measures ANOVA (alpha = .05, 80% power, 20 measurements, 3 groups) indicated that our finalized, analyzed sample would be able to uncover a minimum effect size of f = 0.16.
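The logic of such a simulation-based power analysis can be sketched as follows. This Python sketch is deliberately simplified to a two-group comparison of means (not our full mixed between-within design, which was simulated in R): it repeatedly draws data under an assumed effect size and counts how often the test rejects the null:

```python
import random
import statistics
from math import sqrt

def power_simulation(n_per_group, d=0.5, crit=1.96, n_sims=2000, seed=1):
    """Monte Carlo power estimate for a two-sample comparison of means
    (equal n, unit variance; normal approximation to the t critical value)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]  # control group
        b = [rng.gauss(d, 1.0) for _ in range(n_per_group)]    # shifted by d
        pooled_sd = sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
        t = (statistics.mean(b) - statistics.mean(a)) / (pooled_sd * sqrt(2 / n_per_group))
        hits += abs(t) > crit  # count rejections of the null
    return hits / n_sims       # proportion of rejections = estimated power
```

At d = 0.5 with roughly 64 participants per group, this simplified sketch lands near the conventional 80% power target, illustrating how the required sample size is read off from such simulations.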

Materials and Procedure
Initially, participants were seated in individual booths to ensure privacy. Afterward, participants were given a typical consent form informing them about the tasks, risks and benefits, time required for the study, screening criteria, and treatment of their personal data, and were asked if they wanted to continue. Participants who signed this initial consent form were randomly assigned, via computer-generated randomization, to one of three conditions: No-Deception, Deception-Allowed, or Control. This manipulation was operationalized as a second form that participants were asked to sign, similar to the consent form.
Participants allocated to the no-deception condition saw a form indicating that the laboratory prohibits the use of deception in all experimental protocols taking place in the facility. Participants allocated to the deception-allowed condition saw a form indicating that the laboratory allows the use of deception in all experimental protocols taking place in the facility. Finally, participants allocated to the control condition did not receive any information regarding the laboratory's policy on deception and only saw the initial consent form. The primary outcome measure was cheating, quantified in a 20-shot die-roll task, where 1 pip = 10¢, 2 pips = 20¢, 3 pips = 30¢, 4 pips = 40¢, 5 pips = 50¢, and 6 pips = 60¢, in each round. [10] Next, participants answered a set of questionnaires assessing demographics, general honesty, and personality measures (see SM) before being thoroughly debriefed.

Comprehension check
Participants completed the instruction comprehension check, which asked them to show that they understood how earnings from the die rolls would be calculated.All participants correctly answered the instruction comprehension check question and correctly estimated the hypothetical earnings of die rolls.

Die rolls
First, we analyzed dishonesty in the 20-shot die-roll task across conditions. A Kolmogorov-Smirnov test indicated that the distribution of die rolls across conditions was significantly different from a uniform distribution (D = 0.374, p < .001). The results indicated that participants on average claimed $7.68 (SD = 1.30), which is significantly higher than the expected claim of $7.00 (i.e., the sum of average 35¢ claims across 20 die rolls using a fair die; one-sample t test; t(201) = 7.37, 95% CI [0.37, 0.67], d = 0.52, p < .001), indicating that at least some participants across conditions used the opportunity to cheat in the game. Interestingly, six participants reported rolling a six all twenty times, maximizing the monetary outcome in the experiment and thus earning the maximum payoff of $12.
Next, we analyzed cheating between the three experimental conditions. Here, we used a one-way analysis of variance (ANOVA) to explore differences in claims between the three conditions (no-deception vs. deception-allowed vs. control). The average claims for participants in the control condition (M = $7.74, SD = $1.31), the no-deception condition (M = $7.56, SD = $1.16), and the deception-allowed condition (M = $7.71, SD = $1.42) were almost identical, and thus we observed no significant difference in claims between the three conditions (F(2, 199) = 0.41, p = .666; η2 = 0.00).
To further corroborate this result, we used equivalence testing to establish whether the null hypothesis (i.e., no difference in dishonesty between conditions) was in fact more likely to be true than the hypothesis that different policies on deception would result in different levels of dishonest behavior. [11] For the two one-sided tests (TOSTs; α = .05) between the no-deception condition and the control condition, the equivalence test was significant (t(131.1) = 1.96, 90% CI [-0.56, 0.16], p = .026). Similarly, the test between the deception-allowed condition and the control condition was significant (t(133.14) = 2.70, 90% CI [-0.44, 0.34], p = .004). Finally, the test between the no-deception condition and the deception-allowed condition was significant (t(128.64) = 2.23, 90% CI [-0.52, 0.22], p = .014). Based on the combined results of the equivalence tests and the null hypothesis tests, we can conclude that the observed effect is not significantly different from zero and is statistically equivalent to zero. The average claims in the die-roll task per experimental condition per round (Figure 1) suggest that dishonesty in the repeated-measures die-roll task did not deviate based on the manipulation of the deception condition.
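The equivalence-testing procedure can be illustrated with a minimal Python sketch. This is not our analysis code (which was written in R); it uses a Welch-style standard error with a normal approximation to the t distribution, which is reasonable at degrees of freedom above 100, and `delta` stands for an equivalence bound chosen a priori:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def tost_equivalence(x, y, delta):
    """Two one-sided tests (TOST): declare equivalence if the mean difference
    is credibly inside (-delta, +delta). Normal approximation to the t
    distribution, adequate for large degrees of freedom."""
    se = sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    diff = mean(x) - mean(y)
    p_lower = 1 - NormalDist().cdf((diff + delta) / se)  # H0: diff <= -delta
    p_upper = NormalDist().cdf((diff - delta) / se)      # H0: diff >= +delta
    return max(p_lower, p_upper)  # equivalence is claimed if this p < alpha
```

Equivalence is concluded only when both one-sided tests reject, which is why the larger of the two p-values is the one that matters (Lakens et al., 2018).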

Discussion
The results from Study 1 indicate that deceptive environments, in the form of experimental deception, did not affect downstream dishonest behavior. These results provide initial evidence that deceptive environments neither intensify nor inhibit dishonest behavior.
To replicate and further extend our findings, in the next study, we kept the die-roll task and added the sender-receiver game. This addition allowed us to test whether the effect of experimental deception on dishonest behavior is observed in specific tasks only. Additionally, sender-receiver games differ from the other tasks in several regards. First, in sender-receiver games, participants (supposedly) interact with another participant to whom they can send a truthful vs. a misleading message. Therefore, behaving dishonestly in sender-receiver games means lying to another participant rather than to the experimenter (Baumard & Sperber, 2010; Frollová et al., 2021). Second, in sender-receiver games, individual decisions can be linked to individual participants. In other tasks, dishonest behavior is measured by comparing the aggregated choices of a large group of participants to the distribution of outcomes expected from honest participants. [12] By design, die-roll tasks might therefore provide some form of anonymity to participants: individuals cannot be exposed for behaving dishonestly. For example, it is always possible that a single participant in fact observed the highest pip claimed. The aggregated result would only seem dubious if all participants claimed the highest pip.
Hence, one reason why deceptive environments in sender-receiver games could cause less dishonest behavior could be that participants fear they might be exposed as liars and, in response, show less dishonest behavior (Thielmann & Hilbig, 2018). Another explanation is that participants may not believe that their interaction partner truly exists; hence, there is no reason to behave honestly and claim less for oneself. Furthermore, exposure to any form of deception may engender participant suspicion if participants become aware of it, as they may suspect that they are also being lied to about the promise of anonymity.
Furthermore, to eliminate potential learning effects, where participants gain experience with the task (Kroher & Wolbring, 2015), Studies 2 and 3 used one-shot tasks to measure dishonesty. Moreover, because we observed relatively low rates of cheating in our laboratory study, we made cheating opportunities more salient to participants by altering the payoff scheme: reporting 6 pips would result in a payment of zero, so participants who actually observed a 6 would potentially be disappointed and thus more inclined to misreport their observation (Fischbacher & Föllmi-Heusi, 2013).
Additionally, to extend the findings to samples more diverse than student samples, we used online samples. This choice allowed us to increase the number of participants, making it more likely to detect small effects with sufficient power. It also allowed us to circumvent an inherent limitation of Study 1, namely the deception policy of the laboratory where we ran the study. The change from laboratory to online studies meant that we also had to modify the stimuli to make them more suitable for online studies. Thus, in Studies 2 and 3, we informed participants that it is the IRB policy (rather than the laboratory's policy) that allows (vs. forbids) deception. Finally, a central limitation of our first study was that we could not verify whether participants actually read the information presented in the consent forms (Douglas et al., 2021). To ensure that participants paid attention to the stimuli, we added additional attention, comprehension, and manipulation checks.
[8] Due to a clerical mistake, in the preregistration of this study we reported that some data had already been collected. However, this was not the case. The only data collected prior to the study came from a small pilot test used to train the research assistants in conducting the study properly.
[9] In addition to being able to test the duration of the behavior, we used a repeated die-roll task to obtain more statistical precision to identify a small effect.
[10] In this study, we additionally used a Dictator Game (DG) to measure generosity. We did not use it in Studies 2 and 3, as we opted there to focus only on dishonest behavior. The analyses (including order effects) and results of this extra variable can be found in the SM. Deceptive policy had no effect on giving in the DG. The die-roll task was combined with the DG in random order (block-randomized), so approximately 50% of the participants first played the die-roll task, whereas the other 50% first played the DG. Participants were also asked to test-play both tasks in a few trials before the actual, incentivized task and answered instruction comprehension checks for both tasks.
[11] See Lakens et al. (2018) for a thorough explanation of this procedure, including a description of the use of 90% confidence intervals that include zero.
[12] In some forms of the matrix task, experimenters deceive participants so that experimenters know how many matrices each participant actually solved, similar to sender-receiver games. However, this procedure is unbeknownst to participants. It allows experimenters to directly compare the number of supposedly solved matrices to the number actually solved (e.g., Mazar et al., 2008). This type of experimental deception is associated with less dishonest behavior among participants (Gerlach et al., 2019, Table 2).

Study 2 Method

Participants and Sample Size Estimation
Nine hundred and twenty-four adult participants were recruited through the online platform Prolific Academic [13], according to our preregistered sample size estimation (see SM). Following preregistered exclusion criteria, individuals who did not complete the whole study or who failed the instruction comprehension check question were excluded [14], resulting in 640 participants remaining for the analysis (mean age = 36.52, SD = 12.41; 37.2% male, 62.7% female, 0.20% "other"; 43.6% having at least a BA degree). All participants were compensated for completing the study (avg. £22.26/hr + £1 bonus). The study lasted approximately 10 minutes. A sensitivity power analysis using a one-way ANOVA (alpha = .05, 80% power, 3 groups) indicated that our finalized, analyzed sample would be able to uncover a minimum effect size of f = 0.12.

Materials and Procedure
Initially, participants were given a typical consent form informing them about the tasks, risks and benefits, time required for the study, screening criteria, and treatment of their personal data, and were asked if they wanted to continue. Participants who signed this initial consent form were randomly assigned, via computer-generated randomization, to one of three conditions: no-deception, deception-allowed, or control. The conditions were operationalized as a second form, similar to the consent form, where participants were asked to indicate that they understood the content and wanted to proceed with the experiment. Participants were made aware that they could proceed only after 20 seconds.
Participants allocated to the no-deception condition were informed that the IRB prohibits the use of deception in all experimental protocols and does not allow giving participants misleading or erroneous information about (elements of) the study conducted. Participants allocated to the deception-allowed condition were informed that the IRB allows the use of deception in all experimental protocols and allows giving participants misleading or erroneous information about (elements of) the study conducted. Finally, participants allocated to the control condition did not receive any information regarding the IRB's policy on deception and learned only that the IRB had approved the experimental protocol. Afterward, participants were asked to answer the attention and instruction comprehension check questions to ensure that they had understood and carefully read the information provided.
Participants were then asked to roll an actual physical die at their convenience, where the primary outcome measure was cheating, quantified in this one-shot die-roll task as follows: 1 pip = 10¢, 2 pips = 20¢, 3 pips = 30¢, 4 pips = 40¢, 5 pips = 50¢, but 6 pips = 0¢. In addition, participants played a sender-receiver game in which cheating was quantified as sending truthful or deceitful information to another player. Unbeknownst to participants, in this version of the task, our participants were always Player 1, and there was no other participant (Player 2). Additionally, to control for potential punishment or reputation fears, we informed our participants that Player 2 would never be told about the nature of the information sent. The order in which the two tasks were completed was counterbalanced.
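The payoff rule of the die-roll task can be stated compactly (an illustrative sketch of the rule described above, not study code):

```python
def die_payoff(pips):
    """Payoff of the one-shot die-roll task: 1-5 pips pay 10 cents per pip, 6 pays 0."""
    return 0 if pips == 6 else pips * 10

# Under honest (uniform) rolls, the expected claim is 25 cents:
expected_claim = sum(die_payoff(p) for p in range(1, 7)) / 6
print(expected_claim)  # 25.0
```

This 25¢ benchmark is the reference value against which mean reported claims are compared.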
Subsequently, participants answered a set of questionnaires concerning demographics, honesty, and personality measures (see SM) before being thoroughly debriefed.

Comprehension check
Based on the condition to which participants were assigned, they answered a question asking 1) whether deception was allowed in the experiment, 2) whether deception was not allowed in the experiment, or 3) whether the IRB provided any information about deception being allowed in the experiment [15]. In accordance with our preregistration, 108 participants did not pass the instruction comprehension check and were consequently excluded from the subsequent analysis (see SM).

Die rolls
Initially, we analyzed dishonesty in the individual one-shot die-roll task across conditions. A one-sample Kolmogorov-Smirnov test indicated that reported rolls across conditions were significantly different from a uniform distribution (D = 0.181, p < .001). Turning to earnings in the task, we found that participants reported a mean claim of 32.61¢ (SD = 13.92¢), which is significantly higher than the expected mean claim of 25¢ in the task (one-sample t-test; t(639) = 13.83, d = 0.55, p < .001). This result indicated that at least some participants inflated their earnings in the task by acting dishonestly.
We then estimated the percentage of potentially dishonest individuals in the task, as well as the percentage of individuals who acted dishonestly to maximize their payoff. As 5.13% of participants reported a payoff of 0, we estimated the upper limit of unconditionally honest participants to be 30.78% (i.e., 6 × 5.13%) [16]. Thus, 30.78% is an upper limit for the share of honest participants. Next, we calculated the percentage of individuals acting to maximize their payoff in the task as follows: 20.16% of individuals in the sample claimed a 5. Assuming that all participants who actually rolled a 5 would also report having rolled a 5, we estimated the maximal percentage of payoff maximizers to be 4.19% (i.e., (20.16% − 1/6) × 6/5) [17], indicating that few people in the sample actually acted as such. The distribution of reported claims between conditions is illustrated in Figure 2.
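The two estimates above follow the Fischbacher & Föllmi-Heusi (2013) logic and reduce to simple arithmetic; a short sketch (the input percentages are the ones reported in the text):

```python
# Fischbacher & Follmi-Heusi style estimates from reported shares.
def honest_upper_bound(share_zero):
    """If honest rolls are uniform, each face hides share_zero honest reporters."""
    return 6 * share_zero

def payoff_maximizer_share(share_five):
    """Excess of 5-reports over 1/6, rescaled by 6/5 to credit true 5-rollers."""
    return (share_five - 1 / 6) * 6 / 5

# Study 2 values: 5.13% reported a payoff of 0, 20.16% claimed a 5.
print(round(honest_upper_bound(0.0513) * 100, 2))      # -> 30.78 (honest upper bound, %)
print(round(payoff_maximizer_share(0.2016) * 100, 2))  # -> 4.19 (max. payoff maximizers, %)
```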
Next, we analyzed cheating between conditions to identify differences in individual dishonesty patterns between the three conditions (i.e., no-deception vs. deception-allowed vs. control). The average claims for participants in the control condition (M = 31.8¢, SD = 14.4¢), the no-deception condition (M = 32.1¢, SD = 14.1¢), and the deception-allowed condition (M = 33.9¢, SD = 13.3¢) were nearly identical, and a one-way analysis of variance (ANOVA) accordingly suggested no significant difference in cheating between conditions (F(2, 637) = 1.42, p = .242; ηp² = 0.00). Thus, these results suggest that irrespective of whether participants were made aware that 1) deception was used in the experiment, 2) no deception was used in the experiment, or 3) were simply not given any information regarding the deception policy of the study, such policies (or the lack thereof) did not affect participants' inclination to behave (dis)honestly in the die-roll task.
To further corroborate this result, we used equivalence testing to establish whether the null hypothesis (i.e., no difference in dishonesty between conditions) was in fact more likely to be true than the hypothesis that different policies on deception would result in different levels of dishonest behavior. The equivalence test was significant for the two one-sided tests (TOST; α = .05) between 1) the no-deception condition and the control (t(389.82) = -3.36, 90% CI [-2.00, 2.60], p < .001), 2) the deception-allowed condition and the control (t(375.69) = -1.97, 90% CI [-0.21, 4.41], p = .025), and 3) the no-deception and the deception-allowed conditions (t(450.93) = -2.334, 90% CI [-0.317, 3.917], p = .010). Hence, based on the combined results of the equivalence tests and the null hypothesis test, we conclude that the observed effects of condition on dishonesty were not significantly different from zero and were instead statistically equivalent to zero.
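An equivalence test of this kind can be sketched as two one-sided Welch t-tests against symmetric bounds. The following is a minimal illustration with hypothetical data; the ±5¢ bound and the input samples are our assumptions for demonstration, not the study's preregistered values:

```python
import numpy as np
from scipy import stats

def tost_welch(x, y, bound):
    """TOST: two one-sided Welch t-tests against equivalence bounds of +/- bound."""
    diff = np.mean(x) - np.mean(y)
    vx, vy = np.var(x, ddof=1) / len(x), np.var(y, ddof=1) / len(y)
    se = np.sqrt(vx + vy)
    df = se ** 4 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))  # Welch df
    p_lower = stats.t.sf((diff + bound) / se, df)   # H0: diff <= -bound
    p_upper = stats.t.cdf((diff - bound) / se, df)  # H0: diff >= +bound
    return max(p_lower, p_upper)  # equivalence is declared if this p < alpha

# hypothetical claims (in cents) for two equally sized conditions with equal means
a = np.concatenate([np.full(100, 31.0), np.full(100, 33.0)])
b = np.concatenate([np.full(100, 30.0), np.full(100, 34.0)])
print(tost_welch(a, b, bound=5) < 0.05)  # True: equivalent within +/- 5 cents
```

The returned p-value is the larger of the two one-sided p-values, so a single comparison against alpha decides equivalence.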

Sender-Receiver
Across conditions, 40.8% of participants in the sender-receiver game acted dishonestly by sending a deceitful message to the other player and thus gaining a higher outcome in the task.
Between conditions, we observed this behavior in 36.1% of participants in the no-deception condition, 42.9% in the deception-allowed condition, and 44.6% in the control condition. At the condition level, a χ²-test between the no-deception and control conditions revealed no significant difference (χ²(1) = 2.82, p = .093); the same was found between the no-deception and deception-allowed conditions (χ²(1) = 1.96, p = .162) and between the deception-allowed and control conditions (χ²(1) = 0.05, p = .821). Furthermore, predicting sender-receiver outcomes in a logistic model with the control condition as the intercept yielded no significant predictive power of the no-deception condition (b = -0.35, 95% CI [-0.75, 0.04], OR = 0.70, p = .076) or the deception-allowed condition (b = -0.07, 95% CI [-0.47, 0.33], OR = 0.94, p = .740).
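Pairwise comparisons of this kind reduce to a χ²-test on a 2×2 table of deceitful vs. truthful senders. A sketch with hypothetical counts (we assume roughly equal group sizes of about 213 and reconstruct counts from the reported percentages; the actual cell counts are not given in the text):

```python
from scipy.stats import chi2_contingency

def deceit_test(deceitful_a, n_a, deceitful_b, n_b):
    """Chi-square test on a 2x2 table of deceitful vs. truthful senders."""
    table = [[deceitful_a, n_a - deceitful_a],
             [deceitful_b, n_b - deceitful_b]]
    chi2, p, dof, _expected = chi2_contingency(table)  # Yates correction by default
    return chi2, p

# hypothetical counts: ~36.1% of 213 vs. ~44.6% of 213 deceitful senders
chi2, p = deceit_test(77, 213, 95, 213)
print(round(chi2, 2), round(p, 3))
```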

Discussion
Consistent with our laboratory study findings, individuals do not behave more dishonestly in deceptive environments, whether cheating victimizes the experimenter (or the lab/institution) or another participant. Importantly, participants recognized the environment they were in (deceptive vs. nondeceptive), but this recognition did not affect their cheating behavior.
A crucial null finding was that deceptive environments did not lead to less dishonest behavior in the sender-receiver game, as would be expected if participants feared they might be exposed as liars and therefore cheated less. Might this finding be due to the particularities of the sender-receiver game, i.e., participants not believing that their interaction partner truly exists? Additionally, as argued earlier, previous research has shown that being detected as a cheater decreases dishonest behavior (Thielmann & Hilbig, 2018); therefore, people might feel more observed in environments with deception and consequently cheat less. To directly test whether feeling observed in deceptive environments can affect behavior, in the following study we added two conditions, observed vs. unobserved (explained below).
[13] Prolific Academic permits deception but requires the use of a "deception" pre-screener, which means that only participants who, when registering on the platform, indicated that they would be "happy to be deceived" will be invited to such a study, and "Participants are not able to see which pre-screeners have been applied to studies, and therefore will have no indication that the study may involve deception." (Prolific Team, 2021).
[14] The number of excluded participants is fairly balanced across conditions, thereby minimizing the chance that our effects are due to systematic selection. Specifically, 7.58% of participants were excluded in the no-deception condition, 12.9% in the deception-allowed condition, and 16.1% in the control condition.
[15] The exact questions were: "To verify that you have carefully read and understood the consent form and that you pay attention, we will ask you a few questions. Please, make sure to take your time and think about the answers. On the previous page you consented to the statement of the IRB (institutional review board). What did the IRB state about the use of deception in this study? 1. The IRB prohibited the use of deception, 2. The IRB allowed the use of deception, 3. The consent form on the previous page did not provide information on the use of deception. This is an attention check. Please answer: Strongly agree."
[16] Since we can calculate the base rate of unconditionally honest individuals (i.e., 5.13%) from the participants who report a 6, we can assume that this base rate holds across every outcome (i.e., 1-6) and can thus estimate the total number of unconditionally honest individuals, as suggested by Fischbacher & Föllmi-Heusi (2013).
[17] The multiplication by 6/5 is essential to account for the "payoff maximizers" who actually rolled a 5.

Study 3

Method

Participants and Sample Size Estimation
One thousand and four adult participants were recruited through the online platform Prolific Academic, according to our preregistered sample size estimation (see SM). Following preregistered exclusion criteria, individuals who did not complete the whole study or who failed to correctly answer the instruction comprehension check question were excluded, resulting in 832 participants remaining for the analysis (mean age = 34.49, SD = 11.65; 35.22% male, 64.18% female, 0.60% "other"; 43% having at least a BA degree). All participants were compensated for completing the study (avg. £18.95/hr + bonus £0.50). The study lasted approximately 8 minutes. A sensitivity power analysis using an ANOVA with one interaction (alpha = .05, 80% power, 3 groups, 2 covariates) indicated that our finalized, analyzed sample would be able to uncover a minimum effect size of f = 0.14.

Materials and Procedure
The design of the study (primary outcome measure, set of questionnaires, and debriefing) was identical to Study 2, with two alterations: 1) we did not include the sender-receiver game, and 2) we included two additional conditions (observed vs. unobserved), as described below.
Initially, participants saw a typical consent form informing them about the tasks, risks and benefits, time required for the study, screening criteria, and treatment of their personal data and were asked if they wanted to continue. Participants who signed this initial consent form were randomly assigned to one of six (3 × 2) conditions: (no-deception vs. deception-allowed vs. control) × (observed vs. unobserved). Similar to Study 2, the manipulation of deception (no-deception vs. deception-allowed vs. control) was operationalized as a consent form. The manipulation of observation at the individual level (unobserved vs. observed) was operationalized as two different ways of rolling the die. Participants either rolled an actual die at their convenience (unobserved) or rolled a die that we programmed within the online survey program (observed). In the unobserved condition, we thus measured dishonesty on the aggregate level only, as in Studies 1 and 2. That is, the unobserved condition only yielded the distribution of reported pips. In contrast, in the observed condition, we measured dishonesty on the individual level. That is, individuals could be identified as cheaters by comparing the pip(s) they saw against their actual reporting.

Comprehension check
Based on the condition to which participants were assigned, they answered a question asking 1) whether deception was allowed in the experiment, 2) whether deception was not allowed in the experiment, or 3) whether the IRB provided any information about deception being allowed in the experiment. Passing the instruction comprehension check was interpreted as a successful implementation of the respective stimuli. All participants passed their respective comprehension check.

Die rolls
First, we analyzed dishonesty in the die-roll task across conditions. A one-sample Kolmogorov-Smirnov test indicated that reported rolls across conditions were significantly different from a uniform distribution (D = 0.129, p < .001). This result provided an initial indication that at least some participants took the opportunity to cheat in the task.
Next, we analyzed the reported claims in the die-roll task across conditions. Here, we found that participants reported a mean claim of 27.90¢ (SD = 16.51¢), which is significantly higher than the expected mean claim of 25.00¢ in the task (one-sample t-test; t(831) = 5.06, d = 0.18, 95% CI [0.11, 0.24], p < .001), thus indicating that at least some participants inflated their earnings in the task by acting dishonestly.
Using simple probability statistics, we then estimated the percentage of potentially dishonest individuals in the task, as well as the percentage of individuals who acted dishonestly to maximize their payoff. Here, we assumed that if unconditionally honest individuals roll a uniform distribution of numbers, then it is reasonable to take the number of people reporting a payoff of 0 to estimate the percentage of honest people for each number reported (Fischbacher & Föllmi-Heusi, 2013). As 12.86% reported a payoff of 0, we estimated the percentage of unconditionally honest participants to be 77.16% (i.e., 6 × 12.86%). Again, 77.16% is an upper limit for the share of honest participants. Next, we calculated the percentage of individuals acting to maximize their payoff in the task. This was calculated as follows: 18.75% of individuals in the sample reported having rolled a 5 and thus claimed the highest reward. Assuming that all participants who actually rolled a 5 would also report having rolled a 5, we estimated the maximal percentage of payoff maximizers to be 2.5% (i.e., (18.75% − 1/6) × 6/5), indicating that few individuals in the sample actually acted as such. The distribution of the reported claims between conditions is illustrated in Figure 3.
Next, before analyzing differences between the six conditions, we analyzed differences in claims between the unobserved vs. observed conditions. Here, we found that participants in the unobserved condition reported a significantly higher outcome (M = 28.57¢, SD = 16.31¢) than participants in the observed condition (M = 24.44¢, SD = 17.13¢; t(184.08) = -2.58, d = -0.25, 95% CI [-0.44, -0.06], p = .011). Importantly, our results showed that the mean claim in the observed condition was not significantly different from the expected claim (25.00¢) [18] in the task (t(134) = -0.38, d = -0.03, 95% CI [-0.20, 0.14], p = .707), whereas the mean claim in the unobserved condition was significantly higher than the expected claim (t(696) = 5.77, d = 0.22, 95% CI [0.14, 0.29], p < .001). Focusing on the three conditions in which we could actually observe what participants rolled, we tested whether the reported results were significantly different from the actual number that participants rolled in their first die roll in the experiment across conditions. We found no significant difference between what participants rolled in their first die roll and what they subsequently claimed to have rolled (t(772.72) = 0.73, d = 0.04, 95% CI [-0.07, 0.16], p = .465).
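The comparisons above combine a Welch (unequal-variance) two-sample t-test between observation regimes with one-sample t-tests against the honest expectation of 25¢. A minimal sketch with hypothetical claims (the real data are only summarized in the text):

```python
import numpy as np
from scipy import stats

# hypothetical reported claims in cents, one array per observation regime
unobserved = np.array([10, 50, 30, 40, 50, 20, 30, 50, 40, 0, 30, 50], float)
observed = np.array([20, 30, 10, 40, 0, 30, 20, 50, 10, 30, 20, 30], float)

# Welch's t-test: do claims differ between regimes?
t_between, p_between = stats.ttest_ind(unobserved, observed, equal_var=False)

# One-sample t-tests: does each regime differ from the honest expectation of 25 cents?
t_unobs, p_unobs = stats.ttest_1samp(unobserved, 25.0)
t_obs, p_obs = stats.ttest_1samp(observed, 25.0)
print(round(t_between, 2), round(p_between, 3))
```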

Discussion
Study 3 reveals that being informed about the lack of deception in the experiment seems to impact participants' behavior, as it leads to lower levels of dishonest behavior than in the control condition. This finding is to some degree in line with prior research (Ortmann & Hertwig, 2002; Hertwig & Ortmann, 2008a, 2008b).
Additionally, while dishonest behavior occurred across all conditions, our results show that when participants were in a deceptive environment and were made aware that their performance in the die-roll task was being observed, they cheated significantly less, at levels comparable to the conditions in which no deception was allowed. This result to some degree aligns with previous findings on the effects of cues of anonymity or being observed (i.e., visibility) on cheating behavior (Zhong et al., 2010; Ernest-Jones et al., 2011; Nettle et al., 2012; Pfattheicher & Keller, 2015; Abeler et al., 2019; Schild et al., 2019).

General Discussion
Deception occurs across organizations and societies in today's world, for instance, when essential information or choice conditions necessary for successful judgment and decision-making are withheld. Yet, to what degree does individual honesty depend on the institutional setting of an organization or culture allowing for or prohibiting deception? Here, we provide well-powered evidence suggesting that individuals engaging in moral decision-making within minimally deceptive environments do not always behave more dishonestly than they would in nondeceptive environments.
In particular, in Studies 1-3, we examined how different types of contextual deception affect people's propensity to engage in dishonest behavior. In Study 1, we conducted in-person laboratory research and found that deception operationalized at the level of laboratory policy does not affect repeated, downstream honest behavior. In Study 2, we investigated in online settings how deception operationalized at the level of the IRB influences morality. Here, we introduced a one-shot die-roll task to avoid learning effects and one additional measure of cheating (a sender-receiver game), in which the victim of a possible dishonest act was not the experimenter (or the lab/institution) but another participant, to gauge the effects of directed dishonesty when social preferences are present. We replicated the main pattern of findings from the previous study. Then, in Study 3, we introduced an additional observed (vs. unobserved) treatment to test whether anonymity and cues of being observed in deceptive environments affect honest behavior. Here, we found evidence that informing participants about the nondeceptive policy of the context in which the study is conducted reduces subsequent dishonesty. Moreover, we also found some evidence that when participants were in a deceptive environment and were aware of being observed (vs. not), their dishonest behavior decreased.

Contributions and implications
Prior research in moral psychology has singled out deception as particularly harmful to moral behavior (Bok, 1978; Boles et al., 2000; Croson et al., 2003; Schweitzer et al., 2006; Schweitzer & Gibson, 2008). This work, however, has conflated deception with direct lying and self-serving intentions and has only examined deception at the interpersonal level (for example, an individual lying to another individual). In the current work, we find that deceptive environments do not directly negatively affect downstream moral behavior in the form of dishonesty. Our results show that the relationship between deception and dishonesty is more complicated than previous interpretations have suggested. Being and acting in a deceptive environment does not necessarily breed dishonesty.
Our research contributes to the deception and behavioral ethics literature in numerous ways and with various applied potentials. First, we demonstrate and echo "the importance of studying a broader range of deceptive behaviors" (Levine & Schweitzer, 2015). Deception is pervasive, yet we know surprisingly little about its consequences for behavior in general and even less about the effect of potentially deceptive environments on downstream, individual moral behavior. While research assumes that deception is harmful (Boles et al., 2000; Croson et al., 2003; Schweitzer et al., 2006; Schweitzer & Gibson, 2008), here we show that this assumption might not always hold, at least when it concerns moral behavior in the form of dishonesty.
Second, we find initial evidence that simply being informed about the lack of deception within an environment can have positive implications for downstream honest behavior. This carries a significant methodological implication: researchers need to be cautious regarding the type of information they share with participants. This finding might also prove useful within organizational contexts, where information transmitted to individuals about their specific organization matters for ethical behavior (Elbaek & Mitkidis, 2023). Notably, though, this finding should be interpreted with caution, since it is based on an interaction effect in a single study. Yet, it might be an interesting starting point for more research in this realm.
Third, our research has an important methodological implication. Academics have debated for decades about the use of deception in experimental settings (Mills, 1976; Davis & Holt, 1993; Friedman et al., 1994; Bonetti, 1998; Hey, 1998; Jamison et al., 2008; Hertwig & Ortmann, 2008b; Krawczyk, 2019). That is, the use of experimental deception is controversial and associated with different schools of thought, particularly clustered within different disciplines (e.g., strict no-deception policies in experimental economics vs. more lenient views of its use in experimental social psychology). On the one hand, some researchers suggest that the use of deception, confederates, and cover stories is defensible as a last resort to avoid biasing participants' responses, and that there is a potential overall benefit in data validity and experimental control (Bonetti, 1998; Hilbig et al., 2021). Deception is justified "by the study's significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible" (APA, 2002). On the contrary, other researchers argue that the use of deception should be avoided, as it violates moral academic conduct (Hertwig & Ortmann, 2008b), affects "participants' expectations, suspicions, and future behavior" (Jamison et al., 2008), and pollutes the participant pool (Friedman et al., 1994). Our results show that deception might harm internal validity if participants are suspicious of being monitored; notably, this is a within-experiment effect (i.e., not tested in a subsequent study). We thus urge researchers to take this possibility into account when designing studies. Although we believe that, in principle, the best policy is no deception, we value the insights that a different approach might offer when deception is unavoidable (Hertwig & Ortmann, 2008b). Hence, we call for unambiguous and clear guidelines (Hersch, 2015; Krawczyk, 2019) that will make the trade-off of using
deception easier to determine. Fourth, our results offer a selection of important implications and insights for applied psychological research. Our set-up had some distinct characteristics: 1) there was a clear information asymmetry, in that the experimenter knows more than the participant about the given situation and procedure; and 2) participants usually seem willing to follow the instructions by trusting the experimenter. While our results can strictly speak only to deceptive environments within experimental research contexts, [19] we argue that these findings might in fact generalize beyond such contexts. That is, we have a type of hierarchical situation, in which leaders introduce tasks to followers and followers can subsequently lie to leaders. There are several occasions in which a comparable situation arises, such as in therapy/medicine, advertising, education, and parenting. First, in therapy, for instance, a physician might apply a placebo or might experience a situation in which a "white lie can be considered an ethical decision made in certain circumstances (e.g., facing a bitter truth) to protect the patient from predictable harm without personal motivation or self-interest" (Shali et al., 2020, p. 2). Yet, physicians also rely on patients to behave honestly in return, e.g., when reporting on their pain and their need for prescribed opioids. In the same vein, in advertising, companies partly use minimal deception techniques to influence consumers' behavior and increase profit. One such technique is called "bait and switch", where a company advertises a product at a low price. When a customer tries to purchase the product, they find out that it is out of stock or unavailable and are instead offered a similar, often more expensive product. Customers likely expect that advertising is not entirely truthful, and they could deceive back by misrepresenting the experience of the purchased product or service (e.g., by writing a fake online review). Third, in education, a teacher might use a "mystery" or "surprise" element in a lesson to increase engagement and motivation among students. For example, a teacher might present a problem or question that appears to have no apparent solution, but later reveal that there is a solution. The deception in this case is minimal, too: students are not harmed by the surprise element, and the teacher is not using the deception to cause harm. Instead, the purpose of the technique is to increase student engagement and motivation. Still, a teacher relies on students honestly doing their homework and exams (i.e., without the use of banned aids). Fourth, in parenting, a parent might deceive their children either to protect them or to motivate them to behave well. For example, a parent can tell their child that a pet has gone to live on a farm, instead of telling them that the pet has died. The deception in this case is again minimal, as the child is not necessarily harmed by the white lie, and the parent is not using the deception to cause harm. Instead, the purpose of the technique is to protect the child from difficult information that may be overwhelming to process at the moment. Nonetheless, parents and peers rely on children telling the truth (e.g., about to whom they go when visiting a friend).
Ultimately, we find that in deceptive environments it is beneficial to introduce cues of visibility, as doing so beneficially impacts moral behavior in the form of honesty; this does not seem to be necessary in nondeceptive environments. Adding to previous research on external enforcement and the probability of inspection (Teodorescu et al., 2021), these findings have central policy implications and should lead to fruitful directions for policy-related research and applications. An analysis weighing the cost of visibility cues against the benefit of decreased dishonesty might be worth exploring in future research.

Limitations and future directions
Our studies have several limitations, which pose an equal number of challenges for future research. One limitation is that our manipulation of whether experimental deception is allowed vs. not is confounded with whether this information is true. In this case, Study 1 took place in a lab that allows for deception, and in Studies 2 and 3, participants had accepted in the past, when they registered on the recruitment platform, that they might take part in experiments using deception. Furthermore, we cannot rule out that the observed null findings might be a result of a weak manipulation. For example, being made aware of the possibility of deception is not the same as observing or directly anticipating deception or having a past experience of it within a particular environment. We encourage future research to test different manipulations of deceptive environments and further examine the underlying mechanisms and boundary conditions of the effects of deceptive environments on moral behavior in general. For example, researchers should directly test whether deceptive individuals vs. deceptive environments differ in their effects on downstream honesty. In that case, it might be that interpersonal deception negatively affects (dis)honesty. Studies targeting this particular distinction would be helpful.
Relatedly, different types of deception or participants may result in different types of dishonest behavior. In this article, we focus on one specific type of deception (i.e., at the laboratory or IRB level), which is an indirect exposure to deception, and a specific type of dishonest behavior (i.e., cheating and lying for monetary gains). However, our results might not be generalizable to all types of deception, participants, or dishonest behavior. For example, our deceptive environments might not capture features of other deceptive social contexts in which, for instance, there is positive reinforcement of dishonest behavior or bribing. Future work could study varying levels of deception within environments and their effects on moral behavior to explore the dynamic and relative nature of deception and its relation to morality. Additionally, prior research has distinguished between and classified different types of participants, categorizing them as brazen and non-brazen (Weisel & Shalvi, 2015), heroes and villains (Goranson et al., 2022), cheaters and liars (Pascual-Ezama et al., 2020), or scholastic cheaters (Williams et al., 2010). A question that arises here is how such typologies would interact with the different types of settings. For example, would a brazen participant take advantage of a deceptive environment more than a non-brazen participant? Finally, future work should examine the effect that different natures and frequencies of deception have on behavior. Here, we echo Hertwig & Ortmann (2008b, p. 225) that "the evaluation of our methodological standards and policy should be evidence-based."
In terms of dishonesty in general, anonymity and visibility cues may influence the decision to be honest or not. As these factors are crucial for organizations, we call for further research and replication studies on this matter. Although our experimental results offer initial insights into the correlational and causal effects of deception on dishonest behavior, one should be cautious in extending these findings to the real world due to possible contextual sensitivity. Thus, we call for real-world investigations of deception and morality that involve cross-cultural and diverse sampling methods combined with field experiments within organizations. The latter entail creative ways of measuring (un)ethical behavior and resolving potential selection effects.

Conclusion
Transparency International (2021) considers honesty essential to interpersonal relationships and organizations. (Dis)honesty is a social phenomenon and should be studied within social contexts. In this article, we studied the effects of experimental deception on downstream dishonesty. Across three experiments, we provide evidence that the relationship between deceptive environments and morality is much more complicated than previously assumed, and we call for further debate and research on the topic.

Figures

Figure 1

Table 2. Predicting claims per condition moderated by observed/unobserved

Table 1: not available with this version.