Experiential values are underweighted in decisions involving symbolic options

Standard models of decision-making assume each option is associated with a subjective value, regardless of whether this value is inferred from experience (experiential) or from explicitly instructed probabilistic outcomes (symbolic). In this study, we present results that challenge the assumption of a unified representation of experiential and symbolic values. Across nine experiments, we presented participants with hybrid decisions between experiential and symbolic options. Participants' choices exhibited a pattern consistent with a systematic neglect of the experiential values. This normatively irrational decision strategy held after accounting for alternative explanations, and persisted even when it bore an economic cost. Overall, our results demonstrate that experiential and symbolic values are not symmetrically considered in hybrid decisions, suggesting that they recruit different representational systems that may be assigned different priority levels in the decision process. These findings challenge the dominant models commonly used in value-based decision-making research.

Standard decision models assume that all options' values are encoded on a common scale by a unique representation system. Across nine experiments, Garcia et al. provide evidence that challenges this assumption: participants treat experiential and symbolic options asymmetrically.

Standard models of economic decision-making generally assume a two-step decision process, in which individuals identify and assign values to the available options and ultimately pick the option with the highest subjective value 1,2. The values attributed to individual options can derive from different sources. On the one hand, a priori neutral stimuli acquire positive or negative experiential values after association with past outcomes (rewards and punishments) 3,4. On the other hand, the explicit descriptions of an option's possible outcomes and their probabilities are combined to form a subjective expected value (EV) 5-7. Such explicit descriptions may take many different forms, including written language (from simple vignettes to fully specified numerical variables), a symbolic code communicating the decision variables (payoffs and probabilities) in an unambiguous manner, or a combination of the two 8.
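As a toy illustration of the two value sources (our own sketch, not the authors' model), a symbolic value can be computed directly from described payoffs and probabilities, whereas an experiential value must be estimated from sampled outcomes; the parameter values below are illustrative:

```python
import random

def symbolic_ev(payoffs, probs):
    """Expected value of a described lottery (e.g. +1 with p, -1 with 1 - p)."""
    return sum(x * p for x, p in zip(payoffs, probs))

def experiential_value(p_win, n_trials=1000, alpha=0.1, seed=0):
    """Estimate a cue's value by repeatedly sampling +1/-1 outcomes and
    applying a delta-rule update (alpha and n_trials are arbitrary here)."""
    rng = random.Random(seed)
    v = 0.0
    for _ in range(n_trials):
        outcome = 1 if rng.random() < p_win else -1
        v += alpha * (outcome - v)  # move the estimate toward the sampled outcome
    return v

described = symbolic_ev([1, -1], [0.9, 0.1])  # exactly 0.8
learned = experiential_value(0.9)             # a noisy estimate near 0.8
```

The contrast already hints at the asymmetry studied in the paper: the described value is available in one step, while the experiential value is the end state of a slow, noisy sampling process.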
In the standard two-step model, the way option values are built (via experience or description) is only peripheral to the decision process itself, meaning that experiential and symbolic values converge to a central valuation and decision-making system 2,9-15. The core property of the central valuation system is supposed to be its capability of encoding subjective values, or utilities, for all available options in a given choice set, no matter how different the options are (common currency). Accordingly, its signals are, in principle, both necessary and sufficient to make decisions between any possible set of options. The central valuation system is not concerned with the upstream computations necessary to recognize and represent the available options, or with the downstream computations necessary to select an action. Thereby, choices between experiential options (E-options) and symbolic options (S-options) should present no particular challenge, because their values are translated into an internal common currency, allowing an unbiased comparison between these differently generated option values. The central valuation system hypothesis is indirectly supported by the fact that the neural correlates of experiential and symbolic values largely overlap in the so-called brain valuation system 16,17.
However, several lines of evidence in behavioural decision-making research question the idea of a central valuation system. In fact, it is now quite well established that, when studied separately, experience- and description-based choices display different properties: a phenomenon referred to as the description-experience gap 18-20.
Article https://doi.org/10.1038/s41562-022-01496-3
Because these experiential-symbolic (ES) 'hybrid' choices are the main focus of this article, we thereafter delineate three plausible hypotheses concerning the behavioural output of this phase.
First, assuming the subjective values of the E- and S-options are mapped onto a common scale (common currency hypothesis), participants should make unbiased decisions in the ES phase. Accordingly, the probability of choosing, say, the E-option will be jointly determined by the EVs of the E- and the S-option (Fig. 1b, left). In other terms, for a given E-option, the inferred indifference point will precisely correspond to S-options with equal EV.
Alternatively, the possibility that subjective values are constructed and represented in a modality-specific way (representational gap hypothesis) entails that E- and S-options are not readily commensurable. This situation could lead to two possible scenarios. In the first, participants make random choices in the ES phase. In the second, participants prioritize one of the two sources of information. They could resolve the tension between E- and S-options by basing their choices primarily on the explicit symbolic values provided by the lotteries: in other terms, participants would pick the lottery when positive and reject it when negative, as if the E-option values were neglected and regressed to zero (experiential value neglect; Fig. 1b, middle). Conversely, participants could over-rely on experiential values and display the opposite pattern: accept or reject an E-option without considering the S-option value (symbolic value neglect; Fig. 1b, right). Crucially, the ES phase of our experiments allows us to tease apart these different scenarios by analysing the probability of choosing an E-option as a function of the S-option being presented. More precisely, taking each E-option separately and uncovering the S-option value at which preference shifts from the former to the latter provides us with an estimate of how much a participant values an E-option. Quantifying the relation between E- and S-options comes down to inferring indifference points (that is, the points at which the probability of choosing one option over the other is 50%), which act as proxies of participants' E-option values (Fig. 1b, insets).
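The three scenarios can be written down as toy choice rules (our own sketch; the inverse temperature beta is an arbitrary illustrative parameter, not a fitted value from the paper):

```python
import math

def p_choose_e(v_e, v_s, beta=5.0):
    """Logistic choice rule: probability of picking the E-option
    given the E-option value v_e and the S-option value v_s."""
    return 1.0 / (1.0 + math.exp(-beta * (v_e - v_s)))

def common_currency(v_e, v_s):
    return p_choose_e(v_e, v_s)   # both values enter the comparison

def experiential_neglect(v_e, v_s):
    return p_choose_e(0.0, v_s)   # E-value regressed to zero

def symbolic_neglect(v_e, v_s):
    return p_choose_e(v_e, 0.0)   # S-value ignored
```

Under the common-currency rule, the indifference point (P = 0.5) sits at v_s = v_e; under experiential neglect it sits at v_s = 0 for every E-option, which is the flat pattern of indifference points depicted in the middle panel of Fig. 1b.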

First evidence for experiential value neglect
In the LE phase of the first experiment (n = 76), we presented pairs of E-options in an interleaved manner (that is, E-option pairs were distributed randomly in the sequence of trials) and displayed only the outcome of the chosen option (partial feedback) (Fig. 2a, Experiment (Exp.) 1). Apart from the most difficult learning context (60/40), choice accuracy was above chance level for all E-option pairs, indicating that participants aimed at (and managed to) maximize EV (minimum difference: t(75) = 1.5, P = 0.13, d = 0.17, 95% CI = (0.49, 0.60); maximum difference: t(75) = 10.98, P < 0.001, d = 1.25, 95% CI = (0.71, 0.80)). Furthermore, accuracy was modulated by the difference in EV (that is, the decision value) of the E-option pair. Choice accuracy increased as a function of the decision value, indicating that participants' behaviour was sensitive to the specific EVs of the E-options involved in a given pair (70/30 pair: β = 0.12, Z = 3.90, P < 0.05, 95% CI = (0.06, 0.19); 80/20 pair: β = 0.13, Z = 4.11, P < 0.05, 95% CI = (0.07, 0.19); 90/10 pair: β = 0.21, Z = 6.46, P < 0.001, 95% CI = (0.14, 0.27)).

This difference in the subjective valuation of experiential and symbolic options, referred to as the description-experience gap 18-20, poses a direct theoretical challenge to the idea of a central valuation system 21. This observation rather suggests the existence of modality-specific valuation systems, relying on distinct cognitive representations, which would hinder, if not impede, the comparison between experiential and symbolic options. Strikingly, this key prediction has not been directly assessed, because studies usually consider separate sets of decision problems for experiential and symbolic options 22. Thereby, to date, very little experimental evidence has formally assessed the commensurability of experiential and symbolic option values, or their mapping onto central or different valuation systems 23,24.
This is particularly problematic considering that hybrid choices seem to be the norm rather than the exception in modern societies, where descriptive information is omnipresent. For example, everyday situations such as choosing between our favourite restaurant (experience) and a new one with good reviews (description) are prototypical of such a hybrid decision.
To fill this gap and test the commensurability of experiential and symbolic values, we designed a new behavioural protocol. The experiment started with a learning phase during which human participants repeatedly faced abstract cues paired with probabilistic outcomes. During this phase, participants learned to associate outcome probabilities with the originally neutral cues via standard reinforcement learning. After this phase, participants were asked to make hybrid choices between the experiential options of the learning phase and described lotteries visualized as coloured pie charts (a standard way to represent value symbolically) 8. When making hybrid choices, participants treated the two kinds of options asymmetrically and, specifically, neglected experiential values. This asymmetry was robust across seven experiments, in which we controlled for many possible alternative explanations, such as insufficient learning, generalization issues or a lack of incentives. Overall, the relative neglect of an option's value conditional on its source is consistent with the idea that different types of values, such as experiential and symbolic, may involve different representational systems, resulting in their incommensurability.

Results
We conducted a series of experiments structured in two main phases: a first phase allowing the formation of subjective values from the experience of past outcomes, and a second phase where these experiential options (E-options) were presented against options whose subjective values were described by symbolic means (S-options) (Fig. 1a). During the first, learning (LE) phase, E-options were materialized by abstract shapes that provided no explicit information concerning the EV of the option. E-option values could therefore only be inferred from the history of gains (+1 point) and losses (−1 point) associated with a specific cue. E-options were presented in four fixed pairs, each featuring an EV-maximizing and an EV-minimizing option. Subsequently, in the experiential-symbolic (ES) phase, participants were asked to make choices between the very same E-options of the previous phase and pie charts explicitly describing the associated probabilities of gain and loss.

Fig. 1 caption (partial): The LE phase consists of a two-armed bandit task with fixed pairs (four, or two in Exp. 4) of abstract cues (E-options) and contained 120 trials. Right, successive screens of typical trials in the experiential-symbolic (ES) choice phase. The ES phase consists of binary choices between a lottery (standardly materialized as a pie chart) and a symbol previously presented in the LE phase. In most experiments, the ES phase lasted 88 trials (8 E-options × 11 S-options). b, Three possible hypotheses on how participants could make choices in the ES phase. In each graph, the probability of choosing the E-option is plotted against the value of the S-option. The insets represent the indifference points (where the curves cross 50%). The colour of the curves indicates the value of the E-option (lowest: light orange; highest: dark orange). Left, the default hypothesis, according to which E-options and S-options are fully commensurable and the curves therefore cross 50% (indifference point) at exactly the value of the E-option. Middle, the experiential value neglect scenario, according to which ES choices are determined (almost) uniquely by the value of the S-options. Right, the symbolic value neglect scenario, according to which ES choices are determined (almost) uniquely by the value of the E-options. c, The E- and S-option values. d, The experiments were structured as follows: they all started with an LE phase, during which participants made choices between symbols followed by an outcome. After the LE phase, participants were asked to make repeated choices between each E-option and several lotteries (see a and c). From Exp. 5 on, participants were also asked to make choices between E-options that were not necessarily presented together. Finally, we assessed the stated probability (SP) 79 of winning for each symbol by asking participants to explicitly rate each E-option.

Regarding the analysis of the ES phase, the probability of choosing an E-option in an ES decision was largely determined by the S-option EV, and the preference shift occurred abruptly around an S-option EV equal to zero (that is, P(+1) = P(−1) = 0.5). Despite clear proof of successful value learning and encoding during the LE phase, the ES phase choice pattern was clearly consistent with the experiential value neglect scenario (Fig. 2b). To quantify and statistically compare the differences in preferences observed in the LE and the ES phases, we first estimated the theoretical subjective value of each E-option separately for the two choice types, proxied by its probability of winning a point, P(win) (the outcomes are fixed, so the EV of different options depends only on their probabilities of winning). For the LE phase, we used a classical associative learning approach, in which we assumed P(win) to be iteratively updated as a function of a prediction-error-minimizing learning rule 25-27.
We were able to infer the P(win) attributed to each E-option at the end of the learning process by fitting this rather parsimonious standard model.
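A minimal sketch of such a prediction-error (delta) rule, tracking P(win) directly (the learning rate and prior below are illustrative; in the paper the model is fitted per participant):

```python
def delta_rule_p_win(outcomes, alpha=0.1, p0=0.5):
    """Iteratively update an estimate of P(win) from a sequence of
    +1/-1 outcomes, shrinking the prediction error on each trial."""
    p = p0
    for o in outcomes:
        win = 1.0 if o > 0 else 0.0
        p += alpha * (win - p)  # prediction-error update
    return p

p_hi = delta_rule_p_win([1] * 20)   # drifts toward 1 for a rewarded cue
p_lo = delta_rule_p_win([-1] * 20)  # drifts toward 0 for a punished cue
```

The terminal value of `p` after the last learning trial plays the role of the LE-phase estimate of the E-option's subjective P(win).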
For the ES phase, subjective P(win) estimates were inferred using the following method: the probability of choosing a specific E-option over an S-option of a given EV was assumed to take the form of a logistic sigmoid function. We fitted these logistic functions to each E-option and individual, and used them to extrapolate the indifference points indexing each E-option's subjective P(win).
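A self-contained sketch of this procedure (plain gradient ascent on the logistic likelihood; the paper's exact fitting routine may differ), which recovers the indifference point from choice rates:

```python
import math

def fit_indifference_point(s_values, choose_e_rates, lr=2.0, steps=20000):
    """Fit P(choose E) = sigmoid(b0 + b1 * s) by gradient ascent and return
    the indifference point -b0/b1, i.e. the S-option value at which the
    fitted curve crosses 50%."""
    b0 = b1 = 0.0
    n = len(s_values)
    for _ in range(steps):
        g0 = g1 = 0.0
        for s, y in zip(s_values, choose_e_rates):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * s)))
            g0 += y - p
            g1 += (y - p) * s
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return -b0 / b1

# Synthetic choice rates for an E-option whose true EV is 0.15:
s_vals = [i / 10 - 0.5 for i in range(11)]              # 11 S-option EVs
rates = [1 / (1 + math.exp(-5 * (0.15 - s))) for s in s_vals]
ip = fit_indifference_point(s_vals, rates)              # close to 0.15
```

Under the common-currency hypothesis, the recovered indifference point should track the true E-option value, as in this synthetic example; under experiential value neglect, it collapses toward zero for every E-option.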
Finally, to compare the overall valuation of the E-options in the LE and the ES phases, we computed a measure of how well the subjective P(win) estimates from each phase matched the objective underlying probabilities, using slope estimates from linear regressions.
At this aggregate level, a slope equal to 1 corresponds to an unbiased representation of the E-options' P(win), whereas a slope equal to 0 indicates that the estimates carry no information about the true probabilities. In our data, the slopes estimated from the LE phase were significantly higher and closer to 1 compared with those estimated from ES choices (Exp. 1: t(75) = 6.53, P < 0.001, d = 1.03, 95% CI = (0.23, 0.49)) (Fig. 2c, left). Thus, ES decision problems featured a specific neglect of E-option values, as if hybrid choices prioritized the value of the S-options over an unbiased comparison of experiential and symbolic values, thereby confirming the experiential value neglect hypothesis.
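The aggregate measure can be sketched as an ordinary least-squares slope (our own minimal reimplementation), using the eight true P(win) values of the 60/40 to 90/10 pairs:

```python
def recovery_slope(true_p, inferred_p):
    """OLS slope of inferred P(win) on true P(win): 1 = unbiased recovery,
    0 = inferred values carry no information about the true ones."""
    n = len(true_p)
    mx = sum(true_p) / n
    my = sum(inferred_p) / n
    num = sum((x - mx) * (y - my) for x, y in zip(true_p, inferred_p))
    den = sum((x - mx) ** 2 for x in true_p)
    return num / den

true_p = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]   # the 8 E-options
le_slope = recovery_slope(true_p, true_p)           # perfect learning -> 1.0
es_slope = recovery_slope(true_p, [0.5] * 8)        # full neglect -> 0.0
```

The two extreme cases bracket the empirical result: LE-inferred slopes sit near 1, whereas ES-inferred slopes sit much closer to 0.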
We ruled out a first trivial interpretation of this result by including in the analyses only participants who responded correctly on 100% of catch trials (that is, trials involving choices between two S-options; see Methods), which were disseminated across the ES phase to verify participants' capacity to understand the symbolic representation of the probabilities.
In the following sections, we provide additional evidence in favour of the experiential neglect hypothesis by progressively ruling out alternative interpretations through additional measures and experiments.

Ruling out insufficient learning and forgetting
Although the experiential value neglect pattern observed in the ES phase is consistent with the idea that E- and S-options are not equally considered in the decision process, it is also consistent with a much more trivial hypothesis: insufficient learning. Despite reinforcement learning model fitting suggesting otherwise (see Fig. 2c, left), it is indeed possible that the neglect of E-options in the decision is caused by imperfect and noisy E-option value representations at the end of the LE phase. To rule out this alternative interpretation, we devised a series of experiments in which we changed the LE phase to improve learning while keeping the (average) option values the same. In a second experiment (Exp. 2; n = 71), we presented decision problems in blocks (rather than interleaved, as in Exp. 1) to improve performance and option identification by preventing the saturation of working memory 28. In a third experiment (Exp. 3; n = 83), we also provided the outcome information for the unchosen option, a manipulation known to increase accuracy 29,30. Finally, on top of these variations, in a fourth experiment (Exp. 4; n = 88) we also reduced the number of decision problems in the LE phase to two, such that each decision problem was presented for twice as many trials as in Exp. 1-3, thereby reducing the uncertainty about the options' outcomes. These manipulations were successful in significantly increasing decision accuracy in the LE phase, while avoiding ceiling performance issues. Indeed, even in the easiest experiments, accuracy was still significantly modulated by the decision values (Exp. 1: 0.66 ± 0.01; Exp. 2: 0.71 ± 0.01, β = 0.05, t(314) = 2.28, P < 0.05, 95% CI = (0.008, 0.10); Exp. 3: 0.82 ± 0.01, β = 0.16, t(314) = 7.17, P < 0.001, 95% CI = (0.12, 0.21); Exp. 4: 0.79 ± 0.01, β = 0.13, t(314) = 5.8, P < 0.001, 95% CI = (0.08, 0.17)).
For instance, accuracy in the most difficult decision problem (60/40 pair) was always lower compared with the easiest one (90/10 pair) (t(75)). Crucially, the remarkable increase in LE phase accuracy in the new experiments (107-124% of Exp. 1) was not paralleled by detectable qualitative differences in the ES phase choice patterns (Fig. 2b). In other terms, the experiential value neglect pattern persisted despite the uncertainty concerning the E-options' values being considerably reduced (through the blocked design, complete feedback and the increased number of trials per decision problem).
To quantitatively characterize this claim, we estimated the subjective P(win) for each E-option separately for the LE and the ES phases and fitted a linear regression between the estimated subjective P(win) and their true values (as described above). Confirming the efficiency of our manipulations in increasing learning performance, the LE-inferred slopes increased significantly across experiments. The comparison between the first four experiments thus suggests that experiential value neglect is not a mere effect of insufficient learning: improved performance in the LE phase does not translate into a corresponding decrease of the experiential value neglect effect. We verified this was also true at the interindividual level by observing that dividing participants into high and low performers in the LE phase did not strongly affect ES performance (see Supplementary Materials and Supplementary Figs. 1 and 2).
However, even if they learned correctly, it is also theoretically possible that participants forgot the E-option values when entering the ES hybrid choice phase, although the fact that the ES phase directly succeeded the LE phase within a matter of seconds makes this improbable. To rule out this possibility, in Exp. 1-4 we asked participants to evaluate the E-options' P(win) just after the ES phase, by implementing a fully incentivized stated probability (SP) procedure 31. More precisely, participants were explicitly asked to rate the probability of winning a point that they attributed to an E-option by means of a numerical rating scale (Fig. 1d).
We then evaluated the quality of E-option memory retention by regressing these stated probabilities against their true values. Note that because this elicitation happens after the ES phase, the SP-inferred slopes constitute a lower bound on how well E-option values were learned and could be recovered during the ES phase. Yet the SP-inferred slopes were systematically higher than the ES-inferred slopes, and significantly so in Exp.

Ruling out a lack of generalization and training
The above-reported results from four experiments and three preference elicitation methods indicate that the experiential value neglect phenomenon cannot be accounted for by insufficient learning or by mere forgetting. In this section, we rule out two additional alternative explanations. First, it should be noted that the ES phase involves a generalization process, because the E-options are extrapolated from the decision context in which their subjective values were originally learned. It is therefore conceivable that the apparent experiential value neglect is spuriously created by a generalization problem. Second, in the previously reported experiments, participants went through the different phases (LE, ES and SP) only once; perhaps participants were somehow taken by surprise by the ES phase. In that case, presenting the different phases of the experiment twice might allow them to improve their decisions by anticipating the ES phase 32.
To control for generalization and practice, we ran two additional experiments. In Exp. 5 and Exp. 6 (n = 71 and n = 66), after the LE phase, we interleaved the ES choices with choices involving E-options presented in all possible combinations (referred to as experiential-experiential, or EE, choices; Methods and Fig. 6). Thus, in all cases except one, EE choices required participants to generalize E-option values to new decision problems. As for ES choices, we plotted the probability of choosing a given E-option as a function of the alternative E-option (Fig. 3b). To check whether experiential value neglect disappears if participants are given the opportunity to learn how to make ES decisions, Exp. 6 included a second session in which we repeated all phases (LE, ES and SP). Of note, E-options in the second session were materialized by a new set of symbols.
EE choice curves revealed that participants were capable of successfully extrapolating the value of the E-options to new decision problems involving other E-options. In contrast, the ES choices were consistent with experiential value neglect, thus replicating the previous experiments (of note, the LE phase of Exp. 5 and Exp. 6 presented the same characteristics as that of Exp. 3: complete feedback and a blocked design) (Fig. 3a).
To formally assess the difference between EE and ES choices, we calculated for each participant their option-specific indifference points, following the same procedure used for ES choices, and compared the inferred slopes across decision modalities. EE-inferred slopes were consistently and significantly higher than ES slopes in both Exp. 5 and Exp. 6. This suggests that being exposed to the whole experiment twice, thus giving participants the possibility to adjust their decision strategy, does not affect the main results.

Ruling out rational inattention
Analysis of choice behaviour in the ES phase shows that the learned values of the E-options are largely neglected, as if participants were deciding on the basis of the value of the S-options only, despite performance in the LE phase, SP ratings and EE choices indicating that E-option values are well learned and memorized. Neglecting experiential values seems, at least prima facie, suboptimal for the decision process, because taking into account all relevant information is considered a hallmark of normative behaviour 33,34. However, if E-option information processing (for example, memory access or retrieval) is costly, or if neglecting E-options does not dramatically hinder decision performance, it may become rational to do so 35-37.
To evaluate this possibility, we simulated choices based on an extreme version of the experiential neglect rule: if an S-option has a positive EV, choose it; otherwise choose the E-option. These simulations show that, applied to the decision problems of the ES phase from Exp. 1-6, extreme experiential neglect still generates 77% of expected-value-maximizing choices. This result is actually not as counterintuitive as it initially appears: by design, a positive lottery is the most advantageous option in ≥50% of the decision problems in which it appears, and the converse is true for the negative lotteries. These considerations suggest that, instead of representing an intrinsic cognitive limitation of value-based decision-making, experiential value neglect could be a rational heuristic strategy deployed by efficient (or lazy) decision-makers maximizing an accuracy-effort trade-off 38-41.
To test this new interpretation of the results, we designed a new experiment (Exp. 7) in which we reorganized the E- and S-option probabilities in a way that makes neglecting experiential values economically disadvantageous (Fig. 4a). In this new configuration, the narrower range of S-option values is nested within the broader range of E-option values, so that any given S-option is higher than the four negative E-options and lower than the four positive E-options. This configuration guarantees that participants neglecting E-option values in the ES phase will exhibit chance-level choice accuracy (50% of maximizing choices). Except for the modification of the lotteries, Exp. 7 presented the exact same number of trials as Exp. 6.
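The economics of the neglect rule can be checked with a small simulation. The option grids below are our own approximation of the designs (E-option EVs of ±0.2 to ±0.8 from the 60/40-90/10 pairs; S-option EVs evenly spaced), so the wide-range percentage differs slightly from the paper's reported 77%, but the qualitative contrast with the nested (Exp. 7-style) layout holds:

```python
def neglect_accuracy(e_evs, s_evs):
    """Fraction of EV-maximizing choices under the extreme neglect rule:
    take the lottery whenever its EV is positive, otherwise take the
    E-option (ties counted as correct)."""
    correct = total = 0
    for e in e_evs:
        for s in s_evs:
            pick = s if s > 0 else e
            correct += pick >= max(e, s) - 1e-12
            total += 1
    return correct / total

e_evs = [-0.8, -0.6, -0.4, -0.2, 0.2, 0.4, 0.6, 0.8]
wide = [i / 5 - 1 for i in range(11)]          # S-options spanning -1..1
nested = [i / 30 - 1 / 6 for i in range(11)]   # S-options nested in (-0.2, 0.2)
acc_wide = neglect_accuracy(e_evs, wide)       # well above chance
acc_nested = neglect_accuracy(e_evs, nested)   # exactly 0.5
```

With wide lotteries, ignoring the E-option is cheap; once every lottery sits between the negative and the positive E-options, the neglect rule cannot do better than chance, which is exactly the property Exp. 7 exploits.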
Despite this stronger economic incentive, the behavioural pattern in the ES phase remained consistent with the experiential value neglect scenario (Fig. 4b). The significant difference between ES and EE slopes persisted in Exp. 7 (t(70) = 6.36, P < 0.001, d = 0.46, 95% CI = (0.03, 0.08)), suggesting that, despite the reorganization of probabilities, we were still able to elicit more accurate E-option values from EE choices (Fig. 4f,g). As a consequence, compared with Exp. 6, accuracy in the ES choices significantly dropped in Exp. 7, by approximately 20% (t(94.97) = 11.01, P < 0.001, d = 1.83, 95% CI = (0.16, 0.24)) (Fig. 4c). Of note, accuracy in the EE choices remained the same (Fig. 4d).

Figure caption (displaced, partial): Dots represent the empirical indifference points, that is, the value of a lottery that corresponds to a 50% probability of choosing the symbol. Exp. 6.1 and Exp. 6.2 refer to the first and the second session, respectively. b, Average probability of choosing an E-option over another E-option during the EE phase. The colour of the curves indicates the value of the E-option (lowest: light green; highest: dark green). Dots represent the empirical indifference points. c, The panels represent, for each symbol, the inferred value (expressed as the probability of winning, P(win)) as a function of the actual value. ES estimates are represented in orange and EE estimates in green. In the data boxes, the dark-tone line represents the mean, the mid-dark tone the s.e.m. and the light tone a 95% CI. The lines represent the linear regression (dark tone) and the average s.e.m. (light tone). d, Comparison of individual inferred slopes obtained from the linear fits (c) in the two modalities (ES and EE, in orange and green, respectively). The black lines represent mean and s.e.m. The coloured boxes represent 95% CI. The shaded areas represent probability density functions. ***P < 0.001, two-sample t-test. n = 71 and 66 participants for Exp. 5 and 6.

These findings indicate that experiential values are neglected even when doing so involves an (economic) cost. Therefore, the results are consistent with the idea that experiential value neglect reflects a hard-coded feature of hybrid choices between E- and S-options, rather than being strategically deployed because of the relative lack of incentive in Exp. 1-6.

Reaction time analysis
Behaviours differ across the ES and the EE choices. In the ES phase, participants neglect the experiential option value and make choices based only on the symbolic option value: the S-option is chosen if it is positive, and otherwise it is rejected (Fig. 5a). In contrast, EE choices are based on the retrieval from memory of the experiential values of both options. Thus, one decision process (ES choices) seems to involve the processing and representation of only one option value (the lottery), whereas the other process (EE choices) seems to involve the processing and representation of two option values. We hypothesized that these different processes could translate into different reaction times between the two choice modalities (Fig. 5d). Overall, the reaction time analyses support the idea that choices based on the S-values of the lotteries require less cognitive processing than those involving retrieval from memory. Thus, the E-values inferred from ES choices are consistent with the dual-process model of Fig. 5a.

Discussion
Our results clearly indicate that the E- and S-option values are not treated symmetrically when making hybrid choices and speak against the idea of a central valuation system that encodes option values in a common currency, regardless of the way they are built 2,9. The key finding supporting this claim is provided by the analysis of hybrid decision problems between experiential and symbolic cues, where choices appeared to be made by largely neglecting value information acquired during the LE phase. Crucially, by running several experiments (Methods and Fig. 6) and including multiple control measures, we ruled out several alternative explanations for the experiential value neglect: this decision-making pattern is not due to insufficient learning, forgetting, generalization issues or a lack of incentive. Moreover, activating traces of E-option values in memory by interleaving ES and EE choices (Methods and Fig. 7) was not sufficient to facilitate their retrieval. Finally, reaction time analyses are consistent with different processing of experiential and symbolic values and with the idea of an additional cognitive cost associated with the memory retrieval of learned values. It seems that experiences and symbolic descriptions of possible outcomes ultimately generate value representations different enough to make them largely incommensurable, and that the tension between the two is resolved by overweighting (or prioritizing) symbolic information. In the following paragraphs, we try to provide plausible reasons why these value representations radically differ, why symbolic information is favoured in hybrid choices, and which cognitive mechanisms could underlie the observed behavioural pattern.

Fig. 5 | Hypothetical decision model and reaction time analyses. a, Schematic representation of the decision process in the EE and the ES phases, respectively. The two processes differ in that in the former case (EE), the decision is based on retrieving the values of both options, whereas in the latter case (ES), under an extreme form of experiential value neglect, only the value of the lottery matters. b, Median reaction times across modalities. EE decisions are significantly longer than ES decisions (regardless of the choice taken in ES). When comparing trials in which an S-option is chosen (ES_s) with trials in which an E-option is chosen (ES_e), we also observed a significant difference. The black lines represent mean and s.e.m. The coloured boxes represent 95% CI. The shaded areas represent probability density functions. c, Reaction time differences (ES_e − ES_s in orange; EE − ES_s in green). In the data boxes, the dark-tone line represents the mean, the mid-dark tone the s.e.m. and the light tone a 95% CI. d, Reaction times as a function of whether the ES choices could only be explained by a total neglect of the experiential value (red) or whether they could only be explained by experiential values estimated from the LE phase (dark blue). In the data boxes, the dark-tone line represents the mean, the mid-dark tone the s.e.m. and the light tone a 95% CI. *P < 0.05, **P < 0.01, ***P < 0.001, paired two-sample t-test. n = 137 for pooled Exp. 5 and 6. n = 455 for pooled Exp. 1-6.
Symbolic descriptions of lotteries in our task (and in general) involve separate information about at least two different features of outcomes: payoffs (that is, the amount of reward to be won or lost) and their probability 47 . Models of decision-making designed to explain behaviour in this kind of paradigm frequently assume that probability and payoffs are processed individually. For instance, in prospect theory and its extensions, different subjective weighting functions are supposed to apply to these variables 11,48-51 . A separate representation of payoffs and probabilities is also assumed by models that do not suppose the calculation of a multiplicative expected utility 52 and by models supposing that decisions are underpinned by feature-by-feature comparisons 53-57 . On the contrary, experience-based choices, as instantiated by simple reinforcement learning tasks, are usually modelled by assuming that the decision-maker represents a unique numeric value for each state-action pair. The decision-maker can 'look' in this value matrix before making their choice and, once an outcome is obtained, it partially overwrites the 'cached' values previously stored in memory, so they approximate the average outcome. Option value representation is therefore structurally very different from that of description-based choices, because the relevant features (payoffs and probabilities) are never explicitly represented as separate attributes of the outcomes. Furthermore, some authors even suggest that reinforcement-based choices may bypass the calculation of reward-based option-specific values and instead be underpinned by what is called direct policy learning 58-60 . Our results seem to reject an extremely orthodox interpretation of direct policy learning (accuracy in the LE phase was sensitive to the value difference between options, and experiential values were successfully generalized to new combinations).
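The probability-weighting component mentioned above can be made concrete with the one-parameter weighting function from cumulative prospect theory. The sketch below is purely illustrative and is not one of the present study's models; the parameter value is Tversky and Kahneman's (1992) median estimate for gains.

```python
def weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability-weighting function:
    w(p) = p**g / (p**g + (1 - p)**g) ** (1 / g).
    gamma = 0.61 is their median estimate for gains (illustrative here)."""
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

# small probabilities are overweighted, large ones underweighted
low, high = weight(0.1), weight(0.9)
```

With gamma < 1 the curve is inverse-S-shaped: w(0.1) > 0.1 while w(0.9) < 0.9, which is one concrete way in which probabilities receive a subjective transformation separate from payoffs.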
It is nonetheless conceivable that-at least to some extent-reinforcement-based decisions involve a value-free (policy-based) component that can hardly be compared with the subjective value extracted from explicit payoffs and probabilities. Functional neuroimaging investigations of experiential and symbolic decision-making may also shed light on the debate about value representation across modalities. Although functional meta-analyses identified overlapping correlates of experiential and symbolic values 16,17 , the putative neural mechanisms of reinforcement-based and description-based decisions differ in many crucial respects. First of all, the most influential and consensual neural models of reinforcement-based learning and decision-making give a preponderant role to dopamine-induced plasticity of neural circuits 61-63 . More specifically, dopamine-dependent plasticity is supposed to drive action selection by shaping the strength of the synapses between the frontal cortex and the basal ganglia 64,65 . Current neural models do not attribute a prominent role in description-based choices to dopamine-driven processes and the basal ganglia. Rather, they suppose the decision process is solved by cortical circuits, following an evidence-accumulation process similar to that observed for perceptual decisions 66-68 . Thus, structural differences in the neural mechanisms

Fig. 6 | Experiment parameters. The Exp. column refers to the experiment number. The Outcome (LE) column refers to the outcomes displayed during a single LE phase trial. The column can take two values: partial (only the obtained outcome) or complete (both obtained and forgone outcomes). The Structure (LE) column refers to how the presentation of the options (or decision problems) was organized in the LE phase. 'Blocked' corresponds to the case in which all trials belonging to a given option pair are presented in a row.
Otherwise, when option pairs are distributed randomly, the value is set to 'interleaved'. The Decision problems (LE) column refers to the number of option pairs presented in the LE phase. The Phases column refers to the specific phases present in a particular experiment. The EE phase was performed after learning, with no feedback. SA, symbolic-ambiguous. The Sessions column provides the number of sessions, that is, how many times we repeated the sequence of phases with a different set of E-options. The Incentivization column refers to the incentivization method. In the 'single' condition, one outcome is randomly picked among the history of choices; the value of this outcome determines the final reward. In contrast, in the 'portfolio' condition, the final reward is the sum of all the outcomes obtained. The n column refers to the number of participants included in the experiment after exclusion of those displaying <100% correct responses in the ES catch trials. F (%) is the percentage of female participants. Age reports the average age of the participants in a given experiment.
of choices across modalities may represent a biologically grounded basis of the representational difference between experiential and symbolic values.
Participants resolve the representational tension of hybrid choices by neglecting the experiential values and basing their choices on the symbolic value. Several control analyses allowed us to formally exclude the possibility that this effect merely arises from insufficient knowledge of the experiential values. Why is the symbolic information preferred? We suggest two not-mutually exclusive explanations. One possibility is that experiential value estimates are perceived as less precise. Note that precision here denotes the uncertainty about the value estimate itself 45 . Indeed, assuming imperfect memory storage and retrieval, it is conceivable that experiential values are less precise compared with symbolic ones, which can be perfectly calculated. According to this interpretation, participants would quasi-systematically prioritize the more precise (or less complex) source of information for their choices 44,45,69 . Another possibility is that participants prefer discarding experiential information so as not to incur the cost associated with memory retrieval 70,71 . Reaction time analyses were overall consistent with this idea, because choices involving the processing of experiential values were generally slower than those involving symbolic ones, even when balanced in objective difficulty 46 . This latter interpretation leaves open the possibility that if memory retrieval were made less costly, the behavioural pattern could be reversed (that is, we would witness symbolic value neglect). This could be possible, for example, after extensive training, once experience-based choices are routinized 72 or, conversely, by making symbolic information harder to decode. These are interesting possibilities to be explored by future studies.
Finally, we speculate on the possible cognitive mechanisms underlying the experiential value neglect phenomenon and identify two plausible candidates. The first mechanism involves 'bottom-up' attentional processes. It is well documented that attentional focus biases evidence accumulation in value-based decision-making 73,74 . It is therefore conceivable that an attentional bias towards S-options may result in prioritizing described information and neglecting the experiential information. The second possible mechanism involves a 'top-down' heuristic process, according to which the calculation of individual option values is hijacked by deterministic decision rules 40 . Of note, even if we managed to demonstrate experiential value neglect in situations where it is disadvantageous (Exp. 7), it can nonetheless be argued that this decision rule is overall adaptive, because it is computationally cheap and satisfactory in most situations (see Exp. 1-6).
To conclude, our results add to the collection of behavioural anomalies showing that value representations are inherently dependent on the way they are built, as postulated by the 'construction of preference' framework 11,75,76 . More specifically, our findings pose serious challenges to the default assumption that value representations are shared across different decision-making modalities, traditionally referred to as experience- and description-based. The incommensurability between experiential and symbolic values results in participants behaving as if they were discarding value information acquired by experience, which consequently entails suboptimal decisions. These findings are worth exploring outside the experimental setting, because many real-life decisions involve a tension between an experiential and a symbolic component.

Methods
In this section we present the methods, including those of two experiments (Exp. 8 and 9) that are only briefly mentioned in the main text. The results pertaining to Exp. 8 and Exp. 9 are reported in the supplementary materials.

Participants and experiments
The research was carried out following the principles and guidelines for experiments involving human participants provided in the Declaration of Helsinki (1964, revised in 2013). Participants were recruited online via the Prolific platform. Previous studies have shown that online experiments allow the replication of classical behavioural results obtained in laboratory settings 77 . Furthermore, it has been noted that, among several platforms, Prolific obtains the best scores in terms of data quality 78 . To assess participants' engagement in the different tasks and their understanding of probability representation, we inserted catch trials consisting of choices between two lotteries (S-options), with one of the two cues being obviously better in terms of EV maximization. In all analyses we only retained the participants displaying 100% correct choices in these catch trials. Finally, 673 participants (307 females, 331 males, 35 'preferred not to say'; aged 30.45 ± 9.66 years) were included. Exp. 1-9 included the following numbers of participants: 76, 71, 83, 88, 71, 66, 71, 73 and 74 (see Fig. 6). Of note, none of the results presented in the main or supplemental text were affected by the exclusion of these participants.
To sustain motivation throughout the experiment, the tasks were economically incentivized. Specifically, in addition to a show-up fee, participants were initially endowed with £2.50 and, according to their choices, could reach a maximum of £5. The conversion rate was approximately 1 pt = 1 penny, and participants were told that all points won across the different phases were summed. The average final bonus was £4.20 ± 0.63, which was significantly higher than what participants would have received on average had they chosen randomly (t (672) = 62.82, P < 0.001, d = 2.72, 95% CI = 4.15, 4.27).

Initial LE phase
Participants first performed a probabilistic instrumental learning task (LE). They were given written instructions explaining that the aim of the task was to maximize their payoff by seeking monetary rewards and avoiding monetary losses. In Exp. 1-5, participants performed only one learning session; Exp. 6, 7 and 8 included two learning sessions. For Exp. 1-7, each learning session contained four pairs of experiential cues (E-options), apart from Exp. 4, which contained two (but featured proportionally twice as many trials). Each pair was fixed, so that a given cue was always presented against the same other cue. Thus, within learning sessions, pairs of cues represented stable choice contexts. Within each pair, the two cues were associated with two outcomes: either winning a point (+1) or losing one (−1). The four (two in Exp. 4) cue pairs corresponded to four contexts of varying difficulty, indexed by the difference in the probability of winning a point between the two cues. On each trial, one pair was randomly presented, with one cue on the right and the other on the left side of the screen. Participants were required to select, without a time limit, one of the two cues by left-clicking. After the choice, the selected cue was highlighted with a black border while a transition effect was activated. The transition effect lasted approximately 1,000 ms and revealed the outcome of the choice. The outcome was then displayed for approximately 1,500 ms. In Exp. 1, 2, 3, 5, 6 and 7, the four pairs of cues were presented 30 times each, for a total of 120 trials per session. In Exp. 4, the two pairs were presented 60 times each, to maintain an identical number of trials. In Exp. 1, pairs of cues were presented in an interleaved manner, meaning they were distributed randomly across the 120 trials. In Exp. 2-7, pairs were presented in a blocked manner, meaning they were stacked in sequences of 30 choices.
Regarding feedback, there were two settings: partial and complete. A partial feedback setting implied that only the outcome of the chosen option (or cue) was displayed, whereas a complete feedback setting implied that both outcomes were displayed, regardless of the choice. Exp. 1 and 2 involved partial feedback. From Exp. 3 onwards, feedback was set to complete.

Post-learning value elicitation
Choosing between E- and S-options (ES choices). This phase is present in all the experiments.
After the LE phase, E-options were presented against symbolic cues (S-options). S-options were implemented as pie charts, where green indicates the probability of winning a point and red indicates the probability of losing a point. Each E-option involved in the LE phase was presented against 11 S-options (for a total of 88 trials), with the probability of winning (respectively, losing) a point ranging from 0% to 100% in 10% steps. On each trial, one pair was randomly presented, with one cue on the right and the other on the left side of the screen. Participants were required to select, without a time limit, one of the two cues by left-clicking. After the choice, the selected cue was highlighted with a black border and the transition to the next trial lasted approximately 1,000 ms. No feedback was presented during the ES phase. Participants were informed about their earnings only at the end.
Although the outcome was not displayed, participants were told this phase was still incentivized, such that choice accuracy affected their bonus compensation.

Choosing between E-options (EE choices).
This phase is present in Exp. 5-9. After the LE phase, each E-option was presented against other E-options. With eight cues presented in the LE phase, each E-option was presented against the other seven E-options, so that this phase contained 56 trials. EE choices were presented at the same time as the ES choices, because we wanted to avoid having them differ in terms of time elapsed since the LE phase. Thus, technically, the EE and the ES phases overlap.
For each trial, one pair was randomly presented, with one cue on the right and the other on the left side of the screen. Participants were required to select, without a time limit, one of the two cues by left-clicking. After the choice, the selected cue was highlighted with a black border and a transition effect lasting approximately 1,000 ms led to the next trial. No feedback was presented.
Although the outcome was not displayed, participants were told that they could still win (and lose) points during this phase and that this phase was still incentivized, such that choice accuracy affected their compensation.

Temporal organization of ES and EE choices.
In Exp. 5-9, EE and ES choices were organized as follows (Fig. 7a). First, to alleviate the cost of retrieving the options, choices concerning a given option (abstract stimulus) were presented in series (or in 'blocks'). By doing so, the value of a given option could be retrieved at the beginning of the block and used in the subsequent trials. The order of option blocks was counterbalanced across participants. Within each block (Fig. 7b), EE and ES choices were interleaved and their order was randomized.

SP assessment
In all experiments, for each E-option previously faced in the LE phase participants were asked the following question: 'What are the odds this symbol gives a +1?' They had to provide their rating on a scale from 0% to 100%, with a 5% step.
Answers were incentivized via a matching probability mechanism 79 . More precisely, participants reported a probability (P) for the presented E-option. A number (r) was then randomly drawn in the interval [0, 1]. If P > r, the outcome was obtained using the E-option's probabilities of winning and losing a point (as if the E-option had been chosen in the LE phase). Otherwise, if P < r, the participant had a probability r of winning a point and, respectively, 1 − r of losing a point.
In other words, the higher the response (P) of the participant, the higher the chances were that the outcome would be determined by the E-option. Conversely, the lower the response (P), the higher the chances were that the outcome would be determined by the random lottery number (r).
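A minimal sketch of this incentive rule in Python, assuming the ±1-point outcomes of the task; the function name and implementation details are illustrative, not the authors' code:

```python
import random

def matching_probability_outcome(p_reported, p_true_win, rng=None):
    """Resolve one incentivized rating under the matching-probability rule.

    p_reported : the participant's stated P(win) for the E-option, in [0, 1].
    p_true_win : the E-option's true probability of winning a point.
    Returns +1 or -1 points.
    """
    rng = rng or random
    r = rng.random()                        # threshold drawn uniformly in [0, 1]
    if p_reported > r:
        win = rng.random() < p_true_win     # play the E-option itself
    else:
        win = rng.random() < r              # play a lottery winning with prob. r
    return 1 if win else -1
```

Because reporting a P below one's true belief shifts outcomes toward lotteries with low r, while over-reporting forgoes lotteries better than the E-option, truthful reporting maximizes expected earnings; this is the property the mechanism is designed to elicit.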

Ambiguity assessment
Preference towards ambiguous lotteries was assessed only in Exp. 8. After the LE phase, E-options and S-options were presented against an ambiguous cue. This ambiguous cue was represented by a greyed pie chart with a question mark on top. Consequently, it was represented similarly to S-options, that is, as a lottery that was 100% ambiguous, in the sense that it conveyed no a priori information on the probabilities of gains or losses (Supplementary Fig. 7). Each of the eight E-options and eight S-options was presented against this ambiguous cue twice, resulting in a total of 32 trials. For each trial, one pair was randomly presented, with one cue on the right and the other on the left side of the screen. Participants were required to select, without a time limit, one of the two cues by left-clicking. After the choice, the selected cue was highlighted with a black border while a transition effect lasting approximately 1,000 ms led to the next trial. No feedback was presented.
Although the outcome was not displayed, participants were told that they still could win (and lose) points during this phase and that correct choices (that is, choices maximizing EV) and wrong choices would thus affect their compensation.

Incentivization assessment
In Exp. 1-8, the bonus given to participants was either incremented or decremented at each trial, depending on the outcome (+1 pt = +£0.01 or −1 pt = −£0.01). We refer to this method as 'portfolio' incentivization. Exp. 9, in contrast, included a 'single' choice incentivization method, which consisted of selecting a random outcome from the history of choices, for each phase 42 . More precisely, participants were presented with the randomly selected outcome and its pound equivalent (+1 pt = +£0.62 or −1 pt = −£0.62) at the end of each choice-based phase (LE/ES-EE presented twice, four phases in total). Of note, the initial endowment of £2.50 was maintained. Given there were four occasions to either win or lose £0.62, the maximum bonus was £4.98, whereas the minimum was £0.02.
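The stated bonus bounds follow arithmetically from the £2.50 endowment and the four ±£0.62 outcomes; a quick check:

```python
ENDOWMENT = 2.50       # initial endowment in pounds
N_PHASES = 4           # LE and ES-EE, each repeated twice
POINT_VALUE = 0.62     # pound equivalent of one point in the 'single' scheme

max_bonus = ENDOWMENT + N_PHASES * POINT_VALUE  # every drawn outcome is +1
min_bonus = ENDOWMENT - N_PHASES * POINT_VALUE  # every drawn outcome is -1
print(f"max £{max_bonus:.2f}, min £{min_bonus:.2f}")  # max £4.98, min £0.02
```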

Statistical and computational modelling
Inferential statistics. All t-tests are two-tailed and were performed using Python v.3.9 and the pairwise_ttests function from the pingouin library. Bonferroni corrections were applied systematically. Linear regressions and linear mixed models were performed in Python, using the statsmodels v.0.13.2 module. All confidence intervals (for t-tests and regressions) are 95% CIs (2.5th to 97.5th percentiles). For the relevant statistical tests we report: Student's t-values (t (dof) ), regression coefficients (β), z-values for linear mixed models (Z) and Cohen's d (d).
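For illustration, a paired two-tailed t-test of the kind reported throughout can be sketched with SciPy (the paper itself used pingouin's pairwise_ttests); all numbers below are synthetic, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 137                                    # e.g. pooled Exp. 5 and 6
rt_es = rng.normal(1.2, 0.3, size=n)       # hypothetical ES reaction times (s)
rt_ee = rt_es + rng.normal(0.25, 0.2, n)   # hypothetical EE times, slower on average

t, p = stats.ttest_rel(rt_ee, rt_es)       # two-tailed paired t-test
diff = rt_ee - rt_es
d = diff.mean() / diff.std(ddof=1)         # Cohen's d for paired samples
```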

E-option probabilities inference in ES and EE phases.
To infer a probability estimate (or indifference point) for each E-option from EE and ES choices we proceeded as follows. In the ES and the EE phases, an E-option was assessed relative to other cues (either S-options or other E-options). In the ES phase, 11 S-options were presented against each E-option. In the EE phase, seven E-options were presented against each E-option. Choosing the E-option that was being assessed is always coded as 1, whereas choosing the cue presented against (an S-option in ES or an E-option in EE) is always coded as 0.
We denote by c_t^i ∈ {0, 1} the choice of participant i at trial t. Thus, for each E-option j we obtain a vector of choices C_j, with n = 11 in the ES phase and n = 7 in the EE phase. We then fit the following logistic function 80 :

P(c = 1) = 1 / (1 + exp(β_i (x − λ_i^j))),

where x is the win probability of the cue presented against the E-option, β_i > 0 (which controls the slope of the function) is a free parameter unique to each individual i, and λ_i^j ∈ [0, 1] (the function midpoint) is a free parameter estimated for each E-option j and individual i. The indifference point λ_i^j represents the probability at which a preference shift (from one cue to the other) occurs, and is thus a subjective probability (or value) estimate for E-option j and participant i. Both parameters were estimated through minimum negative log-likelihood estimation, using Matlab's fmincon function.
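A sketch of this fit in Python, with SciPy's bounded minimizer standing in for Matlab's fmincon; the synthetic data, starting values and bounds are assumptions of this illustration:

```python
import numpy as np
from scipy.optimize import minimize

def fit_indifference_point(x, choices):
    """Fit the slope (beta) and midpoint (lam) of a logistic choice curve.

    x       : win probabilities of the cues opposed to one E-option.
    choices : 1 if the E-option was chosen on that trial, 0 otherwise.
    Returns (beta, lam), where lam is the inferred indifference point.
    """
    x = np.asarray(x, float)
    choices = np.asarray(choices, float)

    def nll(params):
        beta, lam = params
        # P(choose E-option) falls as the opposing win probability exceeds lam
        p = 1.0 / (1.0 + np.exp(beta * (x - lam)))
        p = np.clip(p, 1e-9, 1.0 - 1e-9)
        return -np.sum(choices * np.log(p) + (1.0 - choices) * np.log(1.0 - p))

    res = minimize(nll, x0=[5.0, 0.5],
                   bounds=[(1e-3, 100.0), (0.0, 1.0)])
    return res.x

# Synthetic participant who stops choosing the E-option once the opposing
# lottery wins more than roughly 60% of the time (11 S-options, 10% steps).
x = np.arange(0.0, 1.01, 0.1)
choices = (x < 0.6).astype(int)
beta, lam = fit_indifference_point(x, choices)
```

For this synthetic choice vector, the fitted midpoint lands between the last accepted and first rejected opposing probabilities, i.e. around 0.5-0.6.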

Inferring E-option values estimated in the LE phase.
To infer E-option values in the LE phase, we fitted a reinforcement learning model (or Q-learning model) to our data 26 .
The model treats each pair of cues as a state s. After a choice, each cue's subjective probability of winning a point, P(win), was incrementally updated with the following Rescorla-Wagner rule:

P(win)(s,c) ← P(win)(s,c) + α δ_c
P(win)(s,u) ← P(win)(s,u) + α δ_u

where α is the learning rate (which controls to what extent new information overrides previous information), c denotes the chosen cue and u the unchosen cue. The associated prediction errors δ_c and δ_u are computed as follows:

δ_c = R_c − P(win)(s,c)
δ_u = R_u − P(win)(s,u)

where R_c and R_u are the outcomes displayed for the chosen and unchosen cues. R_x took a value of 1 when the outcome was +1 point, and 0 otherwise. Initial estimates of P(win) were set at 0.5 for all options. Please note that for Exp. 1 and 2, where only R_c was displayed (partial feedback setting), only P(win)(s,c) was updated. Decisions were modelled using a softmax function, where the probability of choosing a cue a when presented against a cue b was calculated as follows:

P(a) = 1 / (1 + exp(−β (P(win)(s,a) − P(win)(s,b))))

with β > 0 being the temperature parameter that implements choice stochasticity. As β decreases, the events of choosing a or b tend to become equiprobable. As β increases, the difference between P(win)(s,a) and P(win)(s,b) is amplified, and the choice becomes more and more deterministic (until the function almost acts as an argmax policy).
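A minimal simulation of this learning model for a single cue pair, assuming the complete-feedback setting; the parameter values and function name are illustrative, not the authors' fitted estimates:

```python
import numpy as np

def simulate_learner(p_win_a, p_win_b, n_trials, alpha, beta, seed=0):
    """Simulate the described Q-learning model for one cue pair.

    Subjective P(win) estimates start at 0.5 and are nudged toward each
    outcome by the learning rate alpha; choices follow a softmax with
    temperature beta. Complete feedback: both cues are updated each trial.
    """
    rng = np.random.default_rng(seed)
    q = np.array([0.5, 0.5])              # subjective P(win) for cues a and b
    choices = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        # two-option softmax reduces to a logistic of the value difference
        p_a = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))
        choices[t] = 0 if rng.random() < p_a else 1
        # outcomes for both cues: R = 1 for a +1 point outcome, 0 otherwise
        r = (rng.random(2) < np.array([p_win_a, p_win_b])).astype(float)
        q += alpha * (r - q)              # Rescorla-Wagner delta rule
    return q, choices

# e.g. a 90%-win cue against a 10%-win cue, as in an easy LE context
q, choices = simulate_learner(0.9, 0.1, n_trials=200, alpha=0.2, beta=5.0)
```

After learning, the estimates track the true win probabilities and the better cue is chosen on the large majority of trials.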
Model fitting. The learning rate and temperature parameters (here denoted θ) involved in the reinforcement learning model were estimated by finding the values that minimized the negative logarithm of the posterior probability over the free parameters, −log P(θ|D), which was computed as follows:

−log P(θ|D) ∝ −log P(D|θ) − log P(θ),

where P(D|θ) is the likelihood of the data (that is, the observed choices during the LE phase) given certain parameter values and P(θ) is the prior probability of those parameter values.
The prior probability distribution over the learning rates was assumed to be beta distributed and quasi-uniform (betapdf(1.1, 1.1)). The softmax temperature was, for its part, assumed to be gamma distributed (gampdf(1.2, 5)).
The optimization procedure was again performed using Matlab's fmincon function.
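The MAP objective can be sketched as follows, reading betapdf(1.1, 1.1) as a Beta(1.1, 1.1) prior and gampdf(1.2, 5) as a gamma prior with shape 1.2 and scale 5 (the gamma parameterization of the Matlab call is an assumption of this sketch):

```python
import numpy as np
from scipy.stats import beta as beta_dist, gamma as gamma_dist

def neg_log_posterior(theta, neg_log_likelihood):
    """-log P(theta|D), up to an additive constant, as described in the text.

    theta              : (learning_rate, temperature)
    neg_log_likelihood : callable returning -log P(D|theta) for these values.
    """
    lr, temp = theta
    # quasi-uniform Beta(1.1, 1.1) prior on the learning rate; gamma prior
    # on the temperature (shape 1.2, scale 5: an assumed parameterization)
    log_prior = (beta_dist.logpdf(lr, 1.1, 1.1)
                 + gamma_dist.logpdf(temp, a=1.2, scale=5.0))
    return neg_log_likelihood(theta) - log_prior
```

Minimizing this quantity over θ (e.g. with a bounded optimizer, as the text does with fmincon) yields the MAP estimates; the beta prior also conveniently forces the learning rate into (0, 1), since values outside its support receive infinite penalty.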

Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Software and code

Data collection
The different experiments were conducted on a website programmed in Javascript ES6, HTML 5, CSS 3 (client-side) and PHP 8.1 (server-side). The code for the task is available here: https://github.com/bsgarcia/RetrieveAndCompare. A simple version of the task can be tested here: https://human-rl.scicog.fr/RandCTesting.

Data analysis
The analysis was performed using Matlab R2021a. The code is available here: https://github.com/bsgarcia/RetrieveAndCompareAnalysis.

Data
The data for the analysis is available in the code repository: https://github.com/bsgarcia/RetrieveAndCompareAnalysis

nature portfolio | reporting summary

March 2021
Human research participants
Population characteristics: See above.

Recruitment
Participants were randomly sampled from the Prolific website. One of the inherent disadvantages of online behavioural research is that participants are not in a controlled environment, such that they can get distracted, affecting data quality. That is why we controlled attentional factors by inserting catch trials (see above). The fact that participants can quit the experiment at any time might contribute to self-selection biases (e.g. if they perform poorly they might drop out of frustration, resulting in a final sample where performance is higher than it should be). Of note, we did not observe any specific self-selection bias.

Ethics oversight
The INSERM Ethical Committee approved the study and participants provided written informed consent prior to their inclusion.

Clinical data
Policy information about clinical studies All manuscripts should comply with the ICMJE guidelines for publication of clinical research and a completed CONSORT checklist must be included with all submissions.
Clinical trial registration Provide the trial registration number from ClinicalTrials.gov or an equivalent agency.

Study protocol
Note where the full trial protocol can be accessed OR if not available, explain why.

nature portfolio | reporting summary
March 2021

Data collection
Describe the settings and locales of data collection, noting the time periods of recruitment and data collection.

Outcomes
Describe how you pre-defined primary and secondary outcome measures and how you assessed these measures.

Dual use research of concern
Policy information about dual use research of concern Hazards Could the accidental, deliberate or reckless misuse of agents or technologies generated in the work, or the application of information presented in the manuscript, pose a threat to:

Experiments of concern
Does the work involve any of these experiments of concern: No Yes Confirm that both raw and final processed data have been deposited in a public database such as GEO.
Confirm that you have deposited or provided access to graph files (e.g. BED files) for the called peaks.

Data access links
May remain private before publication.
For "Initial submission" or "Revised version" documents, provide reviewer access links. For your "Final submission" document, provide a link to the deposited data.

Files in database submission
Provide a list of all files available in the database submission.
Genome browser session (e.g. UCSC) Provide a link to an anonymized genome browser session for "Initial submission" and "Revised version" documents only, to enable peer review. Write "no longer applicable" for "Final submission" documents.

Methodology Replicates
Describe the experimental replicates, specifying number, type and replicate agreement.

Sequencing depth
Describe the sequencing depth for each experiment, providing the total number of reads, uniquely mapped reads, length of reads and whether they were paired-or single-end.

Antibodies
Describe the antibodies used for the ChIP-seq experiments; as applicable, provide supplier name, catalog number, clone name, and lot number.

Peak calling parameters
Specify the command line program and parameters used for read mapping and peak calling, including the ChIP, control and index files used.
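As an illustration of the level of detail this field asks for, a hypothetical read-mapping and peak-calling invocation (all file names and the choice of bowtie2/MACS2 are placeholders, not this study's pipeline) might look like:

```shell
# Hypothetical example only: map ChIP reads with bowtie2 against a
# prebuilt index, then call peaks with MACS2 using an input control.
bowtie2 -x genome_index -U chip_reads.fastq.gz -S chip.sam
macs2 callpeak -t chip.sam -c input.sam -f SAM -g hs -n sample1 -q 0.05
```

Reporting the exact program versions, index files, and parameter values (here `-g hs` for human genome size and `-q 0.05` for the FDR cutoff) is what makes the peak calls reproducible.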

Data quality
Describe the methods used to ensure data quality in full detail, including how many peaks are at FDR 5% and above 5-fold enrichment.
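For instance, assuming MACS2-style narrowPeak output (column 7 = fold enrichment, column 9 = -log10 q-value), the number of peaks passing both thresholds can be counted with a one-line filter (file name is a placeholder):

```shell
# Sketch under the narrowPeak-format assumption: keep peaks with
# >5-fold enrichment (col 7) and FDR < 5%, i.e. -log10(q) > 1.301 (col 9).
awk '$7 > 5 && $9 > 1.301' sample1_peaks.narrowPeak | wc -l
```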

Software
Describe the software used to collect and analyze the ChIP-seq data. For custom code that has been deposited into a community repository, provide accession details.

Flow Cytometry
Plots
Confirm that:
The axis labels state the marker and fluorochrome used (e.g. CD4-FITC).
The axis scales are clearly visible. Include numbers along axes only for bottom left plot of group (a 'group' is an analysis of identical markers).
All plots are contour plots with outliers or pseudocolor plots.
A numerical value for number of cells or percentage (with statistics) is provided.

Methodology

Sample preparation
Describe the sample preparation, detailing the biological source of the cells and any tissue processing steps used.

Instrument
Identify the instrument used for data collection, specifying make and model number.

Software
Describe the software used to collect and analyze the flow cytometry data. For custom code that has been deposited into a community repository, provide accession details.

Cell population abundance
Describe the abundance of the relevant cell populations within post-sort fractions, providing details on the purity of the samples and how it was determined.

Gating strategy
Describe the gating strategy used for all relevant experiments, specifying the preliminary FSC/SSC gates of the starting cell population, indicating where boundaries between "positive" and "negative" staining cell populations are defined.
Tick this box to confirm that a figure exemplifying the gating strategy is provided in the Supplementary Information.

Magnetic resonance imaging

Area of acquisition
State whether a whole brain scan was used OR define the area of acquisition, describing how the region was determined.

Normalization
If data were normalized/standardized, describe the approach(es): specify linear or non-linear and define image types used for transformation OR indicate that data were not normalized and explain rationale for lack of normalization.

Normalization template
Describe the template used for normalization/transformation, specifying subject space or group standardized space (e.g. original Talairach, MNI305, ICBM152) OR indicate that the data were not normalized.