Database searching across the 46 nominated days identified 1017 articles, of which 431 were initially excluded as irrelevant (Figure 1). Duplicates (n = 219) were then removed from the remaining 586 and the full texts were reviewed. Of the remaining 367 articles, 56 were removed because they did not address the use of AI for screening or diagnosis, and a further 63 mentioned AI for screening or diagnosis only in passing without discussing it. Nineteen articles were duplicates that had not been identified initially because they had different titles or source names. Finally, 43 articles that had initially been coded were removed from the data after careful discussion because they were word-for-word reports on research abstracts and therefore belonged to a different genre: academic paper abstracts rather than media reports.
Of the final sample (n = 136), the majority were articles from various news sources (78.7%; n = 107). The remaining 21.3% comprised press releases (n = 18), blog posts (n = 9), and magazine articles (n = 2). Across the days of the week, Wednesday had the highest count of articles (n = 27; 19.9%), although articles were distributed relatively evenly from Monday to Friday, with fewer published on Saturdays (n = 6) and Sundays (n = 12).
Table 2 - Health conditions addressed in each article
HEALTH CONDITION | COUNT | % TOTAL
Cancers (Multiple) | 16 | 11.8%
Cardiovascular Disease | 9 | 6.6%
Colorectal Cancer | 8 | 5.9%
Breast Cancer | 7 | 5.1%
Mental Health | 7 | 5.1%
Alzheimer's Disease | 6 | 4.4%
Lung Cancer | 6 | 4.4%
Diabetic Retinopathy | 5 | 3.7%
Kidney Disease | 5 | 3.7%
Prostate Cancer | 4 | 2.9%
Eye Conditions | 3 | 2.2%
Bowel Cancer | 2 | 1.5%
COVID-19 | 2 | 1.5%
Intracranial Haemorrhage | 2 | 1.5%
Neonatal Conditions | 2 | 1.5%
Suicide | 2 | 1.5%
Various[1] | 21 | 15.4%
Other | 29 | 21.3%
TOTAL | 136 |
Whilst some articles addressed multiple health issues or discussed AI in screening and diagnosis more broadly (n = 21), most articles addressed one specific health issue (Table 2). Most commonly this was cancer (n = 51; 37.5%), followed by cardiovascular disease (n = 9; 6.6%).
The benefits of AI in screening and diagnosis were mentioned in 135 of the 136 articles (99.3%), whilst the ethical, legal, and social implications (ELSIs) of the technologies were mentioned in only nine of the articles (6.6%).
Frame Analysis
Table 3 - Tally of articles in each frame. Descriptions of frames from Nisbet (19)
Frame | Count (%) | Nisbet Frame | Count (%)
Frame 1 – Social Progress | 131 (96.3) | Social Progress | 131 (96.3)
Frame 2 – Economic Development/Conflict and Strategy | 59 (43.4) | Economic Development | 59 (43.4)
 | | Conflict and Strategy | 1 (0.7)
Frame 3 – Alternative Perspectives | 9 (6.6) | Morality and Ethics | 4 (2.9)
 | | Scientific and Technical Uncertainty | 5 (3.7)
 | | Pandora’s Box/Frankenstein’s Monster/Runaway Science | 6 (4.4)
 | | Public Accountability and Governance | 5 (3.7)
 | | Middle Way | 3 (2.2)
After coding, we developed a plan for characterising and reporting on frame characteristics, which involved refining Nisbet’s eight frames into three. The Morality and Ethics, Scientific Uncertainty, Pandora’s Box, Public Accountability, and Middle Way frames frequently co-occurred in a small group of articles, so they were combined into Frame 3 – Alternative Perspectives. Although all of these frames were present in the articles, they typically appeared as components of an argument rather than as fully developed arguments in their own right. For example, an article might mention poor governance and scientific uncertainty together as an argument for a more cautious approach to screening and diagnostic AI. As such, this small group of articles shared a set of common characteristics and arguments, and combining them allowed for analysis of common themes. Similarly, the one instance of the Conflict and Strategy frame co-occurred with the Economic Development frame and shared conceptual traits, so these two Nisbet frames were combined into Frame 2. With a larger sample, it may have been possible to retain more of Nisbet’s original frame structure for analysis.
The Social Progress frame was identified in 96.3% of articles (n = 131) and the Economic Development/Conflict and Strategy frame in 43.4% (n = 59). The Alternative Perspectives frame was found in only 6.6% of articles (n = 9) (Table 3).
Frame 1 – Social Progress
The Social Progress frame was the dominant narrative in most of the articles. Broadly, this frame described a need to develop strategies for overcoming diseases and ailments that place large burdens on the health system and cause preventable death and disease.
In the Social Progress frame, diseases were problematised, typically by highlighting that they had an “increasing incidence” [A105] (24) or were “the leading killer in the world” [A82] (25). Stories in this frame typically implied that these problems were caused by inefficient current practices in screening and diagnosis, which were characterised as “slow” [A21] (26), “subjective” [A10, A27, A195, A244] (27–30), “challenging” [A244] (29) and “manual” [A5] (31). Some articles reinforced this by suggesting that these inefficient practices were overwhelming doctors, impeding their workflow, or limiting the time they could spend engaging with their patients.
With these issues laid as a foundation, the moral judgement implied in the Social Progress articles was that AI in screening and diagnosis was a good and important solution, or at least an inevitable one, for addressing disease morbidity and mortality more effectively. In many of these articles, comment was sought from those with a stake in developing, researching, or implementing the technology. Quotes were selected that reinforced the salience of the technology, emphasising that it represented a “pivotal moment in healthcare history” [A42] (32).
At a surface level, the suggested remedy was the AI screening or diagnosis technology (or, in some cases, technologies) that the article was reporting on. This was clear in the rhetoric, which, in contrast to the description of current screening practices, characterised AI screening and diagnosis tools with a different vocabulary. Whilst current practices were slow, AI was quick and simple [A252] (33); whilst current practices were subjective, AI was “quantitative” [A45] (34) and “objective” [A159, A45] (34,35).
These technologies were sometimes constructed as being key to a pivotal change in the healthcare system. Sometimes the importance of quick and easy screening was described in light of a transition within health systems from treatment to prevention [A42] (32), or it was claimed that broader screening would lead to earlier identification of issues and thus better outcomes [A103] (36). This positioned AI as an important development towards lifting the disease burden:
“… informed and strategically directed advanced data mining, supervised machine learning, and robust analytics can be integral, and in fact necessary, for health care providers to detect and anticipate further progression in this disease” [A88 (37); emphasis added]
Frame 2 – Economic Development/Conflict and Strategy
The Economic Development frame was the second most common of Nisbet’s frames found in the articles. It overlapped conceptually with the single example of the Conflict and Strategy frame found in the sample and, as such, the two are addressed together in this analysis. All of the articles in this frame also contained instances of the Social Progress frame, so the arguments are not entirely distinct, with this frame tending to borrow from the strength of the Social Progress narrative. However, the Economic Development/Conflict and Strategy (ED/CS) frame tended to focus more on monetary than on human costs, and on commercial ventures rather than the diversity of projects reported on in the Social Progress frame.
Problem definition and causal attribution were often similar to the Social Progress frame, with authors first problematising the impact of a disease (or multiple diseases) and attributing the problem to slow, subjective, or inefficient current systems. Sometimes, however, articles in the ED/CS frame additionally discussed the monetary cost that the disease represents (e.g. “In 2019, … dementias will cost the nation $290 billion” [A88] (37)).
The moral judgements made in the ED/CS frame were more economically focused than those in the Social Progress frame. These articles generally sought comments from individuals with commercial interests in the technologies being reported on. The worth and value of their commercial endeavours were often associated with their contribution to both social and economic progress, “delivering effective healthcare” [A148] (38) and moving toward “commercialisation” [A98] (39). Often in these articles, algorithms were described as products developed to “disrupt” [A74] (40) a “market” [A83, A137] (41,42).
The single instance of the Conflict and Strategy frame extended these values into venture capitalism, with the article describing the company responsible for developing the algorithm as aiming to become “one of the top radiogenomics networks in the United States” [A68] (43).
Implicit in this moral assessment was the argument that capitalist ventures such as these were important for social as well as economic progress. As such, the suggested remedy in these articles was again very homogeneous, with articles tending to document the technology developed by a single company as the key to reducing the economic costs associated with a disease. Thus, technologies tended to be represented as economic solutions to largely economic problems.
“By offering a method to track progression using only a mobile phone or tablet … the company aims to stem the cost of monitoring and screening for Alzheimer’s and related dementias in an aging population.” [A18 (44); emphasis added]
Frame 3 – Alternative Perspectives
Each of the Morality, Pandora’s Box, Scientific Uncertainty, Middle Way, and Governance frames from the Nisbet typology was present in some articles. However, they were indistinct from one another, as they tended to appear together in articles that adopted a more neutral stance than those coded to other frames. As such, we dubbed the conglomeration of these frames ‘Alternative Perspectives’. Nine articles in total fit the Alternative Perspectives frame, and each generally contained more than one of the five Nisbet frames that comprised it (median 2; maximum 5). The Alternative Perspectives frame overlapped entirely with the articles that discussed ELSIs; that is, the nine articles coded into this frame are the same nine that discussed the ELSIs of healthcare AI. Table 4 outlines which ELSIs were discussed in the nine articles. Despite being relatively heterogeneous among themselves, the articles that fell into Frame 3 were distinct in content and tone from the rest of the sample.
Five of the nine articles also contained occurrences of the Social Progress frame, so in many of these articles the Social Progress narrative was also present and, in some cases, dominant. Often, both diseases and AI technologies were problematised, with the article framed as a discussion of both the pros and cons of using these technologies.
“Of course, AI applications in sectors like healthcare can yield major social benefits. However, the potential for the mishandling or manipulation of data collected by governments and companies to enable these applications creates risks far greater than those associated with past data-privacy scandals” [A3] (45)
These stories implied that the issues related to AI were caused not by the AI technologies themselves, but by the harmful capitalistic values of those developing AI tools (Morality), the AI field’s lack of involvement with traditional medical research (Scientific Uncertainty), or the poor legislation and regulation surrounding AI that let it develop unbridled (Governance).
“the values of AI designers or the purchasing administrators are not necessarily the values of the bedside clinician or patient. Those value collisions and tensions are going to be sites of significant ethical conflict” [A22] (46)
The moral judgement made in these articles was that a more careful approach was needed: one that harnessed the important social developments associated with AI while simultaneously implementing more controls so that the issues and value conflicts were better managed. Often, in contrast to the other articles in this sample, these authors sought out field experts who were not involved with the development of the AI tool(s) in question, giving their argument greater credence through impartiality [A91, A260] (47,48).
Typically, the solution presented by these articles was a more regulated and cautious approach to AI in screening and diagnosis. Doctors and those in AI development were implored to be ‘ethical’ [A93] (49), and it was proposed that only ‘explainable’ [A143, A22] (46,50) or ‘auditable’ [A22] (46) algorithms should be implemented.
Table 4 – ELSIs discussed in the nine alternative perspectives articles. Each entry gives the article number (reference) and title, followed by a short description of the ELSIs discussed.
A143 (50); Medical AI can now predict survival rates – but it’s not ready to unleash on patients
· Historical bias – algorithms that use historical data may produce biased outputs (e.g. algorithms may find a relationship between a disease and a minority group that has historically had worse access to healthcare)
· Black box systems – problems arise when doctors cannot access information about the features algorithms use to produce outputs
· Physician deskilling – doctors may become over-reliant on algorithms to make decisions and lose the skills to make those decisions without the aid of algorithms

A22 (46); Paging Doctor AI: Artificial intelligence promises all sorts of advances for medicine. And all sorts of concerns.
· Harm to patients – if AI fails to integrate into workflows or is poorly validated for clinical use, it may lead to worse patient outcomes
· Value tension between health and for-profit enterprise – AI is proprietary and there is a value collision with the bedside clinician
· Impact on clinician workflow – AI may be given authority over clinician workflow (e.g. patients’ insurers may only reimburse for the treatments an algorithm recommends, meaning clinicians lose their ability to exercise their own discretion in treating patients)
· Exacerbation of human bias – when algorithms are not designed to take structural inequalities into account, they will produce flawed results

A93 (49); Genetic Testing Companies Take DNA Tests To A Whole New Level
· Concerns about data privacy – routine use of AI tools will raise the need for better data protection regulations

A91 (47); From suicide prevention to genetic testing, there's a widening disconnect between Silicon Valley health-tech and outside experts who see red flags
· Lacking involvement with medical research – concerns that developers of AI are not using normal channels for testing and disseminating algorithms; claims made to consumers are unvalidated and the safety of innovations is not regulated
· Poor transparency protocols in tech companies
· Value tension between health and for-profit enterprise – tech emphasises disruption and convenience, whereas healthcare emphasises safety; the values behind AI development conflict with the Hippocratic oath
· Harm to patients – poorly implemented algorithms may lead to iatrogenic health impacts

A3 (45); The AI governance challenge
· Need for better data protection regulations
· Value tension between public and for-profit values

A113 (51); How A.I. Can Save Your Life
· Concerns about data privacy

A117 (52); How tech giants like Google are targeting the seismic NHS data goldmine
· Concerns about data privacy – private companies requesting access to public healthcare data

A8 (53); Addressing Cyber Security Healthcare and Data Integrity
· Concerns about data privacy

A260 (48); Vietnam: AI for early warning about liver cancer
· Inaccuracy of AI techniques
[1] Articles were coded as ‘various’ if they did not address a technology for one specific health condition, e.g. articles discussing AI in screening in general, or technologies used to screen for a wide range of diseases.