Artificial intelligence (AI)-driven applications have the potential to significantly improve the accuracy, efficiency, and reliability of the diagnosis of coronary artery diseases (CADs). Trained on large datasets of historical cardiac images, AI algorithms can identify CADs by detecting features such as wall motion abnormalities, cardiac valve dysfunction, and ischemia [1]. However, the use of AI applications in healthcare remains relatively low owing to several unresolved challenges [2]. There is a paucity of research on the determinants of the use of AI-driven applications from the perspective of medical practitioners across specialities [3]. The extant research suggests that the perception that AI applications may produce false-negative diagnoses (thus posing a considerable threat to the patient’s health) hinders their widespread adoption [2]. Other features of AI applications, such as explainability and transparency, are also believed to affect the adoption of AI-enabled applications in clinical care [4–7].
Focusing on cardiac care specifically, contextual factors such as a perceived threat to professional autonomy, fear of replacement, concerns about patient safety, and legal liability for misdiagnosis hinder the widespread use of AI applications by cardiologists [8,9]. The challenge of a limited evidence base around AI in cardiac care is exacerbated because evidence in this area dates rapidly as technology and attitudes change.
These issues raise several questions regarding the use of AI tools in health care. In this research, we address three such questions: (1) Are cardiologists willing to trust AI for the diagnosis of heart disease, and what factors predict this willingness? (2) How do clinicians perceive the potential consequences of using AI for their clinical autonomy and responsibility? (3) How do contextual factors, such as the risk associated with the use of AI applications, influence the intention to use AI?
Review of conceptual models
Several frameworks have been used to study the use of technology in health care; technology adoption frameworks and trust theoretical frameworks present the two major perspectives for studying the use of AI tools in healthcare. In this research, we use the terms ‘AI tool’ and ‘AI application’ interchangeably, although, strictly, ‘AI tool’ is a technical term that typically refers to the technologies or software libraries used to develop and implement AI solutions, whereas ‘AI application’ refers to the systems and solutions that deploy AI technologies to perform analysis or identify patterns [10].
Within technology adoption frameworks, a dominant framework is the Unified Theory of Acceptance and Use of Technology (UTAUT), which integrates eight prominent technology acceptance theories in the healthcare service adoption literature [11]. Building on this framework, Prakash et al. report that the future use of AI is determined by performance expectancy, effort expectancy, social influence, initial trust, and resistance to change [4].
The seminal Organisational Model of Trust developed by Mayer et al. [12] presents the second major perspective for studying the use of AI tools. This framework provides the basis for most studies of trust in AI and similar technologies, grouped under the umbrella term ‘automation’. Although originally developed to study organisational trust, it has been well received by researchers studying trust in AI and automation [13–15]. It has six primary components, only one of which is trust itself; the others are antecedents, context, and products of trust. These key components are the factors of perceived trustworthiness, trust, the trustor’s propensity, risk-taking behaviour, perceived risk, and outcomes [12]. A review reports that most studies of trust in automation examine variables that fit into one or more components of Mayer’s organisational model of trust [16]. Starke et al. use three features of AI tools (reliability, competence, and intention) to assess physician trust in AI systems [17]. They emphasize the relevance of contextual factors and concerns in the use of AI systems, for instance dependence on AI, loss of professional autonomy, and the fear that adopting AI systems may erode physicians’ control and, in the future, disrupt patient–physician relationships [17]. While helpful in identifying these factors, their framework neither provides explicit top-level constructs nor specifies the relationships between them.
In this research, we adapted the organisational model of trust to accommodate the research questions of this study. Since most healthcare professionals have no or limited access to AI tools in their day-to-day practice, measuring actual usage would lead to biased conclusions about the adoption of AI tools in healthcare. Intention, rather than actual use, is therefore the more appropriate outcome variable in the adoption model [18].
Our conceptual model and hypotheses
The current paper builds a conceptual model based on Mayer’s Organisational Model of Trust [19]. We slightly adapted the Mayer model of trust for our study and extended it with two constructs, innovativeness and peer influence (Figure 1), based on previous studies [20]. The components of the trust model are the factors of perceived trustworthiness, trust, the trustor’s propensity, risk-taking behaviour, perceived risk, and outcome. The model starts with the factors of perceived trustworthiness, which determine the level of trust. It differentiates between trust (an attitude) and the outcome of trust (here, the intention to use AI in the future). In our study, this distinction implies that trust in AI may exist without leading to the use of AI in cardiac care settings.
Trust is associated with risk-taking behaviour (in the current case, the future use of AI systems; described in the original model of trust as risk-taking in relationships). The outcome in the original model refers to the consequences associated with the use of technology by cardiologists (i.e. clinical outcomes), which were not considered in this study. Instead of the actual use of AI applications, we considered the intention to use AI in the future. Below we describe our model’s components.
Ability, benevolence, and integrity determine perceived trustworthiness [19]. Ability refers to the skills, quality, and characteristics of AI tools; it is context- and tool-specific and is believed to be a significant contributor to the formation of trust [13,16,17]. In the context of AI in cardiac care settings, ability covers the accuracy of diagnoses made by AI applications, their reliability, and whether they produce results without breaching security. The second factor, benevolence, refers to a situation in which an AI tool intends to do good for its users. Integrity refers to situations in which the AI tool adheres to a set of principles that the user finds acceptable. We postulate that ability (hypothesis (H)1), benevolence (H2), and integrity (H3) are positively associated with trust in AI.
As a propensity of cardiologists, knowledge of AI applications (both theoretical and practical) affects trust in AI. We postulate that knowledge of AI applications, alongside ability, benevolence, and integrity, is associated with trust (H4). Trust in AI is the key intermediate outcome linked to usage.
Based on the Mayer model, trust itself determines the risk taken in using a technology and thus the intention to use it [12]. We postulate that trust in AI has a positive impact on the intention to use AI (H5).
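In equation form, the trust-formation part of the model (H1–H4) and the trust–intention link (H5) can be summarised as two linear structural relations (a sketch only; the coefficient symbols are ours and do not appear in the original model):

$$\text{Trust} = \beta_1\,\text{Ability} + \beta_2\,\text{Benevolence} + \beta_3\,\text{Integrity} + \beta_4\,\text{Knowledge} + \varepsilon_1$$

$$\text{Intention} = \gamma_1\,\text{Trust} + \cdots + \varepsilon_2$$

where H1–H3 and H5 posit positive coefficients ($\beta_1, \beta_2, \beta_3, \gamma_1 > 0$) and the ellipsis stands for the further predictors of intention introduced below (H6–H8).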
Li et al. reported a direct association between social influence and trusting beliefs in information-system settings [20]. They argue that when individuals have inadequate knowledge of, and lack hands-on experience with, a technology, they are highly likely to rely on trusted others’ opinions and incorporate those views into trust formation [21]. We postulate that peer influence (i.e. perceiving that one’s peers are greater adopters of AI) has a positive influence on the future use of AI applications (H6).
Personal innovativeness refers to the willingness of an individual to try out an innovation. Studies show that individuals with a high degree of innovativeness are more likely to use computer-based applications and to hold positive perceptions about the ease of use of technology [22,23]. We postulate that personal innovativeness has a positive impact on the future use of AI applications (H7).
‘Perceived risk’ in the Mayer model of trust refers to situational factors that, alongside trust in AI, determine the use of AI. These may include how AI is perceived in working environments. The rise of AI tools and their potential has triggered fear of a substitution crisis, in which valued roles and livelihoods are taken over by AI tools. In clinical settings, the capacity of AI tools to imitate human decision-making raises the threat that the skills and competencies possessed by physicians may be replicated by AI tools, which could thus replace physicians [24]. We therefore postulate that perceived substitution threats have a negative influence on the intention to use AI applications in the future (H8).
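Taken together, H1–H8 form a two-equation path model. As an illustration of how such a model could be specified and estimated, the sketch below uses the `semopy` Python package, which accepts lavaan-style model descriptions; the variable names are illustrative composites (e.g. mean Likert scores per construct) rather than the study’s actual survey items, and the data are synthetic stand-ins.

```python
# A sketch of how the hypothesised model (H1-H8) could be specified
# and estimated as a structural equation / path model. All variable
# names are illustrative, not the study's survey items, and the data
# below are synthetic.
import numpy as np
import pandas as pd
import semopy  # one of several SEM libraries with lavaan-style syntax

rng = np.random.default_rng(0)
n = 200  # hypothetical sample size

# Antecedents and contextual factors as standardised composite scores
# (e.g. the mean of several Likert items per construct).
cols = ["Ability", "Benevolence", "Integrity", "Knowledge",
        "PeerInfluence", "Innovativeness", "SubstitutionThreat"]
df = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)

# Synthetic outcomes wired to match the hypothesised directions.
df["Trust"] = (0.4 * df["Ability"] + 0.3 * df["Benevolence"]
               + 0.3 * df["Integrity"] + 0.2 * df["Knowledge"]
               + rng.normal(scale=0.5, size=n))
df["Intention"] = (0.5 * df["Trust"] + 0.2 * df["PeerInfluence"]
                   + 0.2 * df["Innovativeness"]
                   - 0.3 * df["SubstitutionThreat"]
                   + rng.normal(scale=0.5, size=n))

# H1-H4: trustworthiness factors and knowledge predict trust;
# H5-H8: trust, peer influence, innovativeness, and substitution
# threat predict the intention to use AI.
model_desc = """
Trust ~ Ability + Benevolence + Integrity + Knowledge
Intention ~ Trust + PeerInfluence + Innovativeness + SubstitutionThreat
"""

model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```

Under this specification, support for H1–H3 and H5–H7 would appear as significantly positive path coefficients, and support for H8 as a significantly negative coefficient on SubstitutionThreat.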
Alongside these hypothesized relationships, we also aimed to explore what cardiologists felt were the key risks and benefits of the use of AI, and how they would likely respond in instances when their clinical judgement and that of an AI system diverged.