We set out to examine the factors affecting advice takers’ choice between human and algorithmic advisors. Whereas most previous studies focused on factors related to the advisor's competencies (e.g., trustworthiness) and advice content (e.g., accuracy), here we focused on an alternative factor that may affect advice takers' choice: responsibility sharing, i.e., offloading to the advisor some of the responsibility associated with the decision’s potentially negative consequences. Participants were required to make a decision in the Medical and Financial domains, and were provided with a choice between a human and an algorithmic advisor. In addition, some participants received a No-Responsibility disclaimer, impeding their ability to share responsibility with their chosen advisor. We found evidence that responsibility sharing applies to human advisors more than to algorithmic advisors, and that human advisors’ perceived responsibility plays an important role in the choice between human and algorithmic advisors. Specifically, our findings show that human advisors' perceived responsibility was higher than that of algorithmic advisors. Moreover, the perceived responsibility of human advisors was more strongly influenced by the No-Responsibility disclaimer. Finally, the choice of advisors was affected by human advisors' perceived responsibility. Algorithmic advisors’ perceived responsibility also affected the choice of advisor, but this effect was relatively small. Generally speaking, these effects were consistent across the Medical and Financial domains, indicating the domain-general nature of advice takers' inclination to offload responsibility by turning to a human advisor.
Theoretical Contribution
The primary theoretical contribution of this study lies in demonstrating that responsibility attribution considerations affect advice takers’ choice between human and algorithmic advisors even prior to reaching a decision and before the outcome is known. Previous studies suggest that decision makers attribute responsibility once an outcome is revealed and the cause of the outcome is sought20–22. As such, responsibility attribution seeks to form a direct causal link between the outcome and its source, and the literature distinguishes between attribution to the action and attribution to the agent36. Decision makers’ responsibility attribution is sensitive to factors such as controllability, stability, and autonomy37. These factors are linked to a broader responsibility perception, which is not directly dependent on a past outcome. Hence, when an advice taker selects a source of advice, broader perceptions regarding the advice giver’s responsibility may shape the advice taker’s considerations regarding the possibility of attributing responsibility to the advice giver (in the case of an undesirable outcome). When comparing advice takers’ willingness to accept advice from either a human or an algorithmic advisor, previous studies have shown that responsibility attribution is more applicable to human advice givers once the recommendation’s outcome becomes known6,23,25,26. Our study reveals that beyond the attribution associated with the action and its outcome, broader responsibility considerations associated with the advice giver, namely the possibility of offloading responsibility to the advisor in the case of a negative outcome, may affect an advice taker’s choice between a human and an algorithmic advisor. Next, we discuss our study’s specific findings and how they inform extant knowledge in the field.
Our finding that less responsibility is attributed to algorithmic advisors than to human advisors is in line with previous studies. For example, studies have shown that algorithms are perceived as less autonomous and as having less control over their actions, and that responsibility for wrongdoing is therefore less likely to be attributed to them26,38. Our finding extends these results by showing that algorithmic advisors’ perceived responsibility is also less sensitive than human advisors’ perceived responsibility to explicit manipulations involving the future possibility of attributing responsibility to the advisor (i.e., the No-Responsibility disclaimer).
Interestingly, previous studies provide evidence for cases in which responsibility for an outcome was attributed to algorithmic agents, sometimes even more than to human agents. For example, one study found that responsibility for accidents was attributed equally to human and algorithmic drivers, and that algorithmic drivers were praised even more than humans for rescuing their passenger36. Another study found that advice takers feel less regret when making bad decisions regarding stock investments if they followed the advice of robo-advisors23. In other contexts, participants were found to blame algorithmic mediators and advisors for their own immoral behavior39–41. Such contradictory findings suggest that in some cases the need to attribute responsibility to a source that is very different from the human advice taker (i.e., an algorithmic advisor), a process known as defensive attribution28,29, is so strong that it overshadows the effects of the stability, autonomy, and controllability considerations discussed above, leading people to blame algorithmic advisors. Whereas responsibility attribution that is linked to a decision’s outcome is susceptible to defensive attribution, broader agent-based perceived responsibility may be less sensitive to such defensive considerations, as it is independent of a specific outcome. An interesting direction for future studies regarding the choice of advisors is to explore the extent to which the likelihood of a negative outcome shapes the salience of responsibility sharing considerations. We conjecture that when negative outcomes are more likely, responsibility sharing will be more pronounced, whereas positive outcomes may reduce the need for responsibility sharing.
Our results showed that advice takers’ perceived responsibility regarding a human advisor affects the likelihood of preferring a human advisor over an algorithmic one. This finding highlights the importance of factors that are not directly linked to promoting the decision's outcome in shaping advisor preferences. Namely, responsibility sharing will not change the specific outcome of the decision (e.g., monetary loss or gain, health deterioration or improvement), yet it affects the choice of advisor. One possible explanation is that responsibility sharing can alleviate psychological consequences, such as regret or reduced self-evaluation11,28,42. This finding informs the discussion regarding the broader contextual conditions in which algorithmic advice is provided and the varying psychological and physical needs of potential advice takers in these different contexts30. Morewedge (2022) argues that, similar to the effect of defensive attribution, in decisions that are identity-threatening (i.e., when the outcome may reflect badly on one’s character), advice takers are less likely to prefer algorithmic advice. The level of identity threat may be affected by the decision domain; for example, decisions related to employee hiring or to character judgements seem to reflect strongly on the decision maker’s abilities and therefore lead to a preference for a human advisor43. In contrast, when decisions concern the estimation of an objective quantity or a demanding calculation, where making a mistake poses less of a threat to the decision maker’s identity, people tend to prefer algorithmic advisors15,44,45. Identity threat may also vary with the individual's traits and perceptions. For example, people’s sense of power with regard to the algorithmic advisor, which modulates their sense of threat, shapes their need to share responsibility27. Accordingly, individuals with an elevated sense of self-responsibility are more likely to prefer algorithmic advice46. Our findings provide support for the role of perceived responsibility in shaping the direct choice between human and algorithmic advisors. Future work may further examine the extent to which reducing the threat to the decision maker’s identity affects the choice between algorithmic and human advisors.
Applied Contribution
In addition to its theoretical contribution, our research has practical applications. As algorithmic advisors become increasingly common, a growing body of evidence shows that they can outperform human advisors in various domains. For example, algorithmic advisors were found to provide more accurate estimations regarding which employee will succeed in his job47, which academic candidate will succeed in her studies48,49, or which stock will perform better50. Nonetheless, in many situations people still tend to prefer the advice of a human advisor, which means that they often receive less accurate advice. A better understanding of the reasons for preferring human over algorithmic advisors can help find ways of promoting algorithmic advisors for tasks where algorithms are likely to outperform humans, thus contributing to general well-being.
Previous research has offered insights regarding people’s perceptions of algorithmic advisors’ abilities, as well as regarding the conditions under which people tend to trust algorithmic advisors to provide them with beneficial advice5,15,51,52. Our results indicate that additional factors, namely perceived responsibility, should also be considered. For example, in situations where there is an interest in promoting the adoption of algorithmic advice, foregrounding the name of the algorithm’s developer (an individual or a company) may prove an effective tool for extending responsibility sharing to the algorithmic advisor, thus increasing the willingness to embrace the algorithm’s advice.
Notwithstanding the earlier recommendations, there may be other situations in which algorithms do not have a clear advantage over human advisors and in which promoting human advisors may have societal benefits. Designers of algorithmic and human recommendation systems should take responsibility sharing perceptions into account, and perhaps include a human-in-the-loop as a means of mitigating algorithm aversion. Some studies predict that about 47% of jobs in the United States are at high risk of becoming automated53, while others predict that about 14% of jobs in OECD countries participating in the PIAAC face the same risk54. Either way, work environments are gradually becoming computerized, and more and more professions are at risk of being replaced by artificial intelligence. As people seek ways of remaining relevant in the workforce, the notion of responsibility sharing may become a powerful argument for convincing the public to rely on human advice. More broadly, highlighting factors that satisfy psychological needs, such as responsibility sharing, reciprocity, and long-term relationships, may prove effective in persuading people to prefer interactions with humans.
Limitations
Our study has several limitations that should be noted. First, participants were presented with a hypothetical scenario (being part of a Board of Directors facing an investment decision or part of a committee for clinical trials) that was not directly linked to their everyday life. Whereas this practice is quite common in studies on advice taking and has yielded consistent findings2,19,24, it has limited ecological validity and may not fully capture relevant cognitive processes (e.g., participants did not pay a price for the consequences of their decision). In all likelihood, if advice takers faced similar situations in real life, the potential risks associated with the decision would increase the need to offload responsibility; thus, the results of this study likely represent a lower bound on the true effect of responsibility sharing.
It is also possible that our experimental design indirectly affected participants' choices. Participants were first asked to rate the (algorithmic and human) advisors’ trustworthiness and responsibility, and only then did they select their advisor of choice. It is possible that the rating questions prompted participants to consider responsibility, and thus affected the decision regarding the preferred advisor. It should be noted that this experimental design allowed us to include in our statistical analysis the advisors’ perceived trustworthiness and responsibility as antecedents of the preferred advisor. Given that our results regarding perceived responsibility were consistent across domains, that only human advisors’ responsibility ratings were affected by the disclaimer manipulation, and that our findings are largely in line with previous studies on responsibility attribution, we believe that the effect of our experimental design (i.e., asking participants to first rate the advisors and only then make the choice) was negligible. We propose that future studies explore the effects of responsibility sharing on the choice of advisors through alternative experimental designs.