This study identifies factors that correlate with clinicians' trust in AI and their perception of AI-driven clinical decision-making. The perception of AI-reduced workload and the perception of AI-driven clinical decision-making correlate positively with trust in AI, whereas the perception of risk has no significant effect on trust. Moreover, the perception of AI-reduced workload correlates positively with AI-driven clinical decision-making, while the perception of risk correlates negatively with it. The PLS-SEM analysis, which included clinical experience as a control variable, suggests that clinical experience affects neither clinicians' trust in AI nor the AI-driven decision-making process. This finding aligns with prior research on blockchain adoption, which found no correlation between years of work experience and trust or decision-making (Li et al., 2022).
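The structural model was estimated with PLS-SEM; as a loose illustration of how the (non-)effect of a control variable such as clinical experience can be checked, the sketch below regresses a trust score on the two perception constructs plus experience and inspects each coefficient's p-value. All variable names and the synthetic data are hypothetical stand-ins for the survey composites, and this is ordinary least squares, not the PLS-SEM procedure used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Hypothetical construct scores (stand-ins for survey composites)
workload_reduction = rng.normal(size=n)   # perceived AI-reduced workload
perceived_risk = rng.normal(size=n)       # perceived risk of AI
experience = rng.normal(size=n)           # clinical experience (control)
# Simulated trust score: driven by workload perception, no experience effect
trust = 0.5 * workload_reduction + rng.normal(scale=1.0, size=n)

# OLS with intercept: trust ~ workload_reduction + perceived_risk + experience
X = np.column_stack([np.ones(n), workload_reduction, perceived_risk, experience])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)

# Classical standard errors, t-statistics, and two-sided p-values
resid = trust - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_vals = beta / se
p_vals = 2 * stats.t.sf(np.abs(t_vals), dof)

print(dict(zip(["intercept", "workload", "risk", "experience"],
               np.round(p_vals, 3))))
```

In this simulated setup the workload coefficient comes out significant while the experience coefficient, whose true effect is zero, typically does not, mirroring the pattern reported above.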
5.1 Trust in AI
Our analysis revealed a negative relationship between workload and trust, consistent with prior research (Akash et al., 2020; de Visser and Parasuraman, 2011; Dubois and Le Ny, 2020; Bulińska-Stangrecka and Bagieńska, 2019). The results are in line with Social Exchange Theory (Cook and Emerson, 1987), which posits that individuals develop a sense of obligation to reciprocate positive treatment from their social exchange partners (e.g., the organization), and that trust is crucial to developing and maintaining social exchange relationships (Blau, 1964). Consistent with this theory, an empirical analysis of a telecommunication-company survey found that workload reduction and workload sharing are positively related to interpersonal trust in organizations (Bulińska-Stangrecka and Bagieńska, 2019).
Our study found no significant association between risk and trust, thereby failing to support Risk Management theory (Earle, 2010), according to which individuals who perceive high levels of risk become more cautious and less likely to trust others. However, the relationship between trust and risk appears to differ in the context of human-machine or human-technology interaction. For example, a study of autonomous vehicles found that under high perceived risk, detailed explanations of the technology produced the lowest trust and no explanation the highest, with these effects reversing under low perceived risk (Ha et al., 2020). Another study observed that during drivers' initial interaction with automation systems, perceived risk is based primarily on presumptions (expectations), which may change after using the vehicle. Participants in that study reported the highest levels of trust and perceived automation reliability, and the lowest level of perceived risk, when given information about a highly reliable system and when driving in a low-risk situation (Li et al., 2019).
The difference between our findings and the literature regarding the relationship between trust and risk could be explained by situational variation and the dynamic nature of trust. Trust and risk may not be correlated in certain situations, such as when the perceived level of risk is very high or very low (Ha et al., 2020). Moreover, trust is a dynamic construct that can change over time: an individual may trust an entity highly at one point and little at another (Li et al., 2019). This dynamism makes it difficult to correlate trust with risk, and further research is required to confirm the relationship.
5.2 Decision-making using AI
Our findings show that the perception of AI-reduced workload is positively related to AI-driven clinical decision-making, supporting the Limited Capacity Model of Motivated Mediated Processing (Lang, 2000). According to this theory, individuals have a limited pool of cognitive resources or attention to allocate to decision-making (Radlo et al., 2001). When cognitive resources are depleted, individuals are more likely to rely on mental shortcuts or simplified rules of thumb, increasing the likelihood of errors (Goldsmith, 2017). Several other studies also link workload and decision-making: for example, using an electronic clinical decision-support tool to enhance medical decision-making decreased cognitive workload in a simulated setting (Richardson et al., 2019). Another study, assessing examiners' decision-making in an observation-based clinical examination, reported that cognitive processes in complex situations correlate with mental workload and suggested that increased workload can hinder decision-making abilities (Malau-Aduli et al., 2021).
Our findings also support Prospect Theory, identifying a negative relationship between risk perception and AI-driven decision-making. Prospect Theory (Levy, 1992) explains how risk affects decision-making: people are more sensitive to losses than to gains and tend to be risk-averse when facing gains and risk-seeking when facing losses. Our study also reports a positive correlation between trust and decision-making. Although it is intuitive that clinicians who trust AI more will perceive AI-based decision-making more positively, the Trust Game model (Dasgupta and Gambetta, 1988) can also support this relationship; it posits that trust is a multidimensional construct that can affect decision-making both positively and negatively.
5.3 Recommendations for better AI integration to support AI-driven decision-making
In this study, we have examined the relationship between healthcare professionals' trust in AI, their perception of AI risk and workload, and the impact of AI on clinical decision-making. As we discuss these findings, we propose optimal integration approaches for AI in clinical workflows, which we believe could enhance clinicians' trust in AI, positively alter their perceptions of AI risk and workload, and improve their perception of AI-aided clinical decision-making.
Consider three hypothetical scenarios involving a patient visiting a clinic for a pneumonia diagnosis based on an X-ray image. In the first scenario, A, diagnosis occurs traditionally, without AI involvement. Scenarios B and C propose different methods of integrating AI into the clinical workflow. Juxtaposing these scenarios with our survey findings offers insight into how AI's practical integration into clinical workflows might influence clinicians' perceptions of AI risk, trust in AI, and, consequently, their clinical decision-making.
In Scenario A (Fig. 3), the clinician accesses the X-ray image and delivers the diagnosis to the patient. Here, the quality of care, particularly the diagnosis, depends heavily on the clinician's expertise. This scenario typically entails minimal risk; however, as the clinician's workload increases, so does the possibility of errors due to fatigue, burnout, or limited cognitive resources. This risk could be further magnified in low-resource clinics or when attending to critically ill patients.
We introduce an AI system in Scenario B (Fig. 4) to alleviate this workload and the associated risks. Our finding that clinicians' perception of AI as a risk negatively influences AI-driven decision-making is embodied in this scenario: clinicians may feel that the AI could disrupt their workflow by offering unnecessary or incorrect analyses, adding perceived risk to the clinical decision-making process. The integration of AI here is sequential (Patient → AI → Clinician → Patient), and the constant need for clinicians to validate AI diagnoses could disrupt their workflow and potentially lead to underutilization or misuse of the AI system.
In contrast, Scenario C (Fig. 5) posits a model in which the AI system runs parallel to the existing clinical workflow: the AI and the clinician independently generate diagnoses, and the AI system alerts the clinician only in case of a discrepancy. Our survey findings demonstrate that the perception of AI reducing workload correlates significantly with trust in AI and with the perceived impact on clinical decision-making. Scenario C aligns with these findings: AI operates as a supportive tool, providing an additional layer of analysis without unnecessary interruptions, potentially reducing perceived workload and fostering trust. Furthermore, clinicians' trust in AI showed a positive, albeit non-significant, association with AI-driven clinical decision-making. Scenario C also reflects this pattern: clinicians can build trust in AI as they engage in understanding and correcting the AI's reasoning, enhancing their willingness to incorporate AI into their decision-making process.
Overall, these scenarios illuminate the potential benefits of a parallel integration of AI into clinical workflows (Scenario C) over a sequential one (Scenario B), with potential positive impacts on clinicians' perceptions of AI risk, trust in AI, and their willingness to adopt AI in clinical decision-making.
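The contrast between the two integration patterns can be sketched schematically. The diagnosis functions below are hypothetical placeholders, not an implementation from the study; the sketch only illustrates the control flow: the sequential pattern (Scenario B) routes every case through a mandatory validation step, while the parallel pattern (Scenario C) interrupts the clinician only when the AI and the clinician disagree.

```python
def ai_diagnose(xray: str) -> str:
    # Hypothetical AI classifier stand-in (illustration only)
    return "pneumonia" if "opacity" in xray else "normal"

def clinician_diagnose(xray: str) -> str:
    # Hypothetical clinician judgment stand-in (illustration only)
    return "pneumonia" if "opacity" in xray else "normal"

# Hypothetical batch of X-ray reports
cases = ["clear lungs", "opacity in left lower lobe", "clear lungs"]

interruptions_b = 0  # Scenario B (sequential): clinician validates every AI output
interruptions_c = 0  # Scenario C (parallel): alert only on AI-clinician discrepancy

for xray in cases:
    ai_label = ai_diagnose(xray)
    clin_label = clinician_diagnose(xray)
    interruptions_b += 1        # mandatory review step in the sequential flow
    if ai_label != clin_label:
        interruptions_c += 1    # discrepancy alert in the parallel flow

print(interruptions_b, interruptions_c)  # 3 0 (the stand-ins always agree)
```

In this toy batch the clinician is interrupted on every case under the sequential flow but never under the parallel flow, since the two stand-in diagnosers always agree; in practice the parallel flow's interruption count would track the AI-clinician disagreement rate.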
5.4 Limitations
This study has some limitations that should be acknowledged. First, it did not find a significant effect of clinical experience, as a control variable, on clinicians' trust in AI or on the AI-driven decision-making process. This finding contradicts existing evidence suggesting that additional experience with AI can influence trust levels. The specific context in which AI was utilized and participants' limited exposure to AI technologies may have contributed to this non-significant relationship. Caution should therefore be exercised when generalizing these results, as they may not fully capture the nuanced relationship between clinical experience and trust in AI; further research with larger and more diverse samples is needed to better understand this relationship within healthcare settings.

Second, the study is based on a cross-sectional survey. Future studies should use longitudinal data to examine the proposed relationships over time.

Finally, only a small proportion of participants (17%) reported using AI in their practice. This low percentage may not reflect actual AI usage among all participants: a significant number may be using AI in their practice without being aware of it, and vice versa. This lack of awareness could stem from limited understanding of AI's specific applications or the absence of clear recognition of AI technologies within their practice settings. The reported usage rate therefore might not provide a comprehensive picture of AI's actual integration into participants' professional activities. Future studies could assess participants' awareness and knowledge of AI to gain a more accurate understanding of its utilization in their practice.