Cross-cutting themes and asymmetries
To our knowledge, this scoping review is the first of its kind to analyze the ethical issues related to AI and health. The literature reviewed illustrated overarching ethical concerns about privacy, trust, accountability, and bias, all of which were interdependent and mutually reinforcing. Accountability, for instance, was a noted concern when considering who ought to bear responsibility for AI errors in patient diagnoses [51, 56, 57], and was also a recognized issue in protecting patient privacy within data sharing partnerships [2]. The security of confidential patient data, in turn, was identified as critical for eliciting patient trust in the use of AI technology for health [45]. One suggestion offered to combat the threat to citizen trust in AI was an inclusive development process [53], a process which has also been proposed to mitigate bias integrated into algorithm development [63]. These broad ethical themes of privacy and security, accountability, bias, and trust are not unique to health, but rather transcend multiple sectors, including policing, transportation, military operations, media, and journalism [98, 99]. It is also clear from our review that the aforementioned ethical themes cannot be considered in isolation, but rather must be viewed in relation to one another when considering the ethics of AI in health.
An asymmetry in the literature was the predominant focus on the ethics of AI in healthcare, with less attention granted to public health, including its core functions of health promotion, disease prevention, public health surveillance, and health system planning from a population health perspective. Yet in the age of ubiquitous computing, securing data privacy for use in public health surveillance and interventions will be all the more critical, as will ensuring that individuals and communities without access to the latest technologies are not absent from these initiatives. In a recent article, Blasimme and Vayena [100] touched upon issues of consent when employing AI-driven social media analysis for digital epidemiology; the ethics of ‘nudging’ people towards healthier behaviours using AI technology; and the development of paternalistic interventions tailored to marginalized populations. These public health issues and others merit further exploration within the ethics literature, particularly given how powerful such AI applications can be when applied at a population level. From an alternative perspective, the increasing presence of AI within healthcare may in some respects pose a risk to public health, with an expressed concern that the ‘hype’ around AI in healthcare may redirect attention and resources away from proven public health interventions [91, 101]. Similarly absent from the literature was a public health lens on the issues presented, a lens which rests on a foundation of social justice to “enable all people to lead fulfilling lives” [102, p. 5]. With respect to jobs, for example, the pervasive discourse around care robots in the literature suggests that a wave of robots may soon replace human caregivers of the sick, elderly, and disabled. Despite this recognition, however, the focus was solely on the impact on patients, and little mention was given to those caregivers whose jobs may soon be threatened. The same is true for other low-wage workers within health systems at large, despite the fact that unemployment is frequently accompanied by adverse health effects.
A second asymmetry in the literature was the focus on HICs, and a notable gap in discourse at the intersection of ethics, AI, and health within LMICs. Some articles mentioned the challenges of implementing the technology in low-resource settings [20, 32, 65, 90, 91, 94] and whether its introduction will further widen the development gaps between HICs and LMICs [90]; however, most did not integrate ethics and/or health. Yet AI is increasingly being deployed in the global south: to predict dengue fever hotspots in Malaysia [2], to predict birth asphyxia in LMICs at large [103], and to increase access to primary screening in remote communities in India [32], to name a few examples. Despite these advancements, LMIC contexts present challenges around collecting data from individuals without financial or geographic access to health services, data upon which AI systems rely [65, 103], as well as the further challenge of storing data electronically [65]. The United States Agency for International Development (USAID) and the Rockefeller Foundation [104] have recently illuminated additional considerations for the deployment of AI in LMICs, one in particular being the hesitancy of governments and health practitioners to share digital health data out of concern that it could be used against them, as digitizing health data is often quite politicized for actors on the ground. Given the infancy of these discussions, however, there is far more work to be done to critically and collaboratively examine the ethical implications of AI for health in all corners of the world, to ensure that AI contributes to improving, rather than exacerbating, health and social inequities.
Towards ethical AI for health: what is needed?
Inclusive and participatory discourse on, and development of, ethical AI for health was commonly recommended in the literature to mitigate bias [63], ensure the benefits of AI are shared widely [2, 63, 65, 87], and increase citizens’ understanding of and trust in the technology [2, 34, 53]. However, those leading the discussion on the ethics of AI in health seldom mentioned engagement with the end users and beneficiaries whose voices they were representing. While much attention was given to the impacts of AI health applications on underserved populations, only a handful of records actually included primary accounts from the people for whom they were raising concerns [2, 45, 82, 105, 106, 107]. Yet without better understanding the perspectives of end users, we risk confining the ethics discourse to the hypothetical, devoid of the realities of everyday life. This was illustrated, for instance, when participants in aged care challenged the ethical concern that care robots are deceptive by stating that, despite these concerns, they preferred a care robot over a human caregiver [82]. We therefore cannot rely on our own predictions of the ethical challenges around AI in health without hearing from a broader mosaic of voices. In echoing recommendations from the literature, there is an evident need to gain greater clarity on public perceptions of AI applications for health, the ethical concerns held by end users and beneficiaries, and how these can best be addressed with the input of these individuals and communities. This recommendation is well aligned with the current discourse on the responsible innovation of AI, an important dimension of which involves the inclusion of new voices in discussions of the process and outcomes of AI [108].
In addition to taking a participatory approach to AI development, all parties bear a responsibility to ensure its ethical deployment. For instance, it should be the responsibility of producers of AI technology to advise end users, such as HCPs, of the limits of its generalizability, just as should be done with any other diagnostic or similar technology. There is a similar responsibility for end users to apply discretion with regard to the ethical and social implications of the technology they are using. For instance, in the case of AI-enabled gene editing technologies, deep learning can assist in recognizing genetic patterns and providing direction as to where in the genome edits should be made; however, it is unable to distinguish the moral difference between patterns that lead to disease prevention and those that lead to general human enhancement [20]. This may in some ways be analogous to an autonomous vehicle that can identify a child and a tree in isolation, yet when presented with a collision scenario in which it has an equal chance of hitting either, it is unable to morally differentiate between the two. Simply stated, we must be critical and discretionary with regard to the application of AI in scenarios where human health and wellbeing are concerned, and we must not simply defer to AI outputs.
Also in need of critical reflection, as it remains unresolved in the literature, is how to appropriately and responsibly govern this technology [20, 32, 36, 40, 47, 90]. The infusion of AI into health systems appears inevitable, and as such, we need to reconsider our existing regulatory frameworks for disruptive health technologies, and perhaps deliberate on something entirely new. The issue of governance is made particularly salient by the challenge many have termed the ‘black box’: on the one hand, AI processes operate at a level of complexity beyond the comprehension of many end users, and on the other, neural networks are by nature opaque. Never before has the world encountered technology that can learn from the information it is exposed to and, in theory, become entirely autonomous. Even the concept of AI is somewhat nebulous [2, 45, 109, 110], which threatens to cloud our ability to govern its use. These challenges are compounded by those of jurisdictional boundaries for AI governance, an ever-increasing issue given the global ‘race’ towards international leadership in AI development [111]. Thirty-eight national and international governing bodies have established or are developing AI strategies, no two of which are the same [111, 112]. Given that the pursuit of AI for development is a global endeavour, this calls for governance mechanisms that are global in scope. However, such mechanisms require careful consideration in order for countries to comply, especially considering differences in national data frameworks that pre-empt AI [36]. These types of jurisdictional differences will impact the ethical development of AI for health, and it is thus important that academic researchers contribute to the discussion on how a global governance mechanism can address ethical, legal, cultural, and regulatory discrepancies between the countries involved in the AI race.
One potential limitation of this study is that, given the field of AI is evolving at an unprecedented rate [1], new records in the academic and grey literatures may have been published after the conclusion of our search and prior to publication. Some recent examples of related articles are very much in line with our findings, shedding light on many of the pertinent ethical issues of AI in healthcare discussed in the literature reviewed [113–119]. Few, however, appear to have discussed the ethical application of AI in LMICs [104, 120] or public health [104, 117], so despite any new literature that may have arisen, there is still further work to be done in this area. Furthermore, given that our search strategy was limited to the English language, we may have missed valuable insights from publications written in other languages. The potential impact on our results is that we may have underrepresented authorship from LMICs and underreported the amount of literature on the ethics of AI within the context of LMICs. Nevertheless, this scoping review offers a comprehensive overview of the current literature on the ethics of AI in health from a global health perspective, and provides a valuable direction for further research at this intersection.