To our knowledge, this is the first study to take an in-depth look at the views and preferences of the public on the use of AI in analysing genomic data for medical purposes. Building an understanding of these perspectives is crucial for the development of automated tools and for ensuring that the use of these tools aligns with the interests of the public.
While previous research has explored public views on the use of AI in healthcare settings,5,16,17 it has considered medicine broadly, with no discussion of the use of AI in the reanalysis of genomic data. As such, it is difficult to determine how these previous studies relate to the use of AI in genomics. Previous research also indicates that genomics, as with other ‘omics’ fields, suffers from a lack of public understanding,18 making it even more critical to explore public perspectives.
Comfort with Artificial Intelligence in Genomic Medicine
In line with other studies,5,16,17,19 our findings suggest that the Australian public holds a generally positive view of the use and development of medical AI tools. Our participants perceived the greatest benefits of AI in genomic medicine to be reductions in the time and cost required for analysis; increased volumes of data available for analysis; possible improvements in the accuracy of analysis; and empowerment of patients, as found by others.5,17 However, participants remained concerned about the accuracy of AI and the possibility that their data could be stolen or misused. Our participants also expressed a strong desire for human oversight of AI, whether through periodic auditing or review by a specialist, which is likewise in line with the findings of previous research.5,17
These findings suggest that the public’s view of AI in genomics is not substantially different from their view of the use of AI in other areas of medicine.5,17 However, the difference in our participants’ views between AI analysis of MRI brain scans and of genomic data suggests that, for some, the type of data being analysed can affect the level of trust in AI use. This aligns with the findings of Middleton et al.,20 who found in their research on the collection and storage of genomic data that some participants described genomic data as ‘special’ or ‘different’ compared with other forms of medical data. Further research is needed to clarify how the type of medical data may affect patient attitudes to AI analysis.
Impact of Artificial Intelligence on Accuracy and Bias in Genomic Medicine
Interestingly, the potential impact of AI on the accuracy of genomic results was viewed by participants as both a benefit and a concern. On one hand, AI may be able to detect patterns in the data that humans cannot, increasing the accuracy of analysis and, in turn, the diagnostic yield. On the other hand, depending on how it is programmed and trained, AI might be less likely than humans to produce accurate results. This is in line with previous research on public perspectives of AI in healthcare more broadly,5,17 where participants were also found to hold these seemingly contradictory opinions.
Consistent with our findings from members of the public, the potential for AI to reduce bias and improve the accuracy of diagnoses in healthcare is often cited as a benefit by physicians and patients.15,20,21 However, an increasing body of research demonstrates how AI can amplify and perpetuate bias in medicine against already marginalised communities.11,12 Participants in our study believed that AI systems could be programmed not to incorporate the biases that human professionals hold. However, they also recognised how bias could be unintentionally embedded into AI algorithms by their developers.
Participants discussed how the datasets used to train the AI, not just the algorithm the AI is built on, also had the potential to introduce bias into these systems. Researchers have proposed that the incorporation of bias into data collection, preparation, and AI development can be mitigated by assembling datasets that are as large and diverse as possible, being transparent about the demographic characteristics of those datasets, taking de-biasing steps during the development of automated systems, and enabling continual learning of AI systems as data is updated.10 Our findings suggest that the public agrees with these researchers on how bias should be mitigated in AI development.
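To make one such de-biasing step concrete, the sketch below illustrates simple inverse-frequency reweighting, in which training examples from under-represented demographic groups are up-weighted so that each group contributes equally to a model’s training loss. This is a minimal illustration of one commonly proposed mitigation, not a method used in the studies cited here; the ancestry labels and counts are hypothetical.

from collections import Counter

def inverse_frequency_weights(groups):
    """Per-sample weights so that each demographic group contributes
    equally to the training loss (hypothetical illustration)."""
    counts = Counter(groups)
    n_samples, n_groups = len(groups), len(counts)
    # Each group's weights sum to n_samples / n_groups.
    return [n_samples / (n_groups * counts[g]) for g in groups]

# Hypothetical training set that over-represents one ancestry group,
# as is common in genomic reference data.
ancestry = ['european'] * 800 + ['east_asian'] * 150 + ['african'] * 50
weights = inverse_frequency_weights(ancestry)
# Under-represented groups receive larger weights:
# european ~0.42, east_asian ~2.22, african ~6.67
print({g: round(w, 2) for g, w in zip(ancestry, weights)})

Reweighting of this kind addresses only representation in the training data; transparency about dataset demographics and ongoing auditing, as participants noted, remain necessary complements.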
Impact of Artificial Intelligence on the Security of Genomic Data
AI development and use in any medical context requires the collection and sharing of large amounts of medical data, which raises multiple issues surrounding data security and data autonomy for patients. Accordingly, how genomic data is stored was a prominent concern amongst participants in our study; they discussed their desire to know how their data is being used and their distrust of companies that may be profiting from their data, referencing several recent high-profile data breaches. These findings echo concerns from previous research on public perspectives of medical AI17 and the storage of genomic data.20,23,24 However, they contrast with another study, which found data security to be a relatively minor concern compared with the infancy of the technology and distrust of AI companies.19 In line with previous research,25 several of our participants described feeling less concerned about how their data was stored or used because of their previous medical experiences; instead, they were more interested in how genomic testing and AI might help them or their families.
Despite their concerns about data security, participants recognised the need for genomic data to be accessed and shared broadly to improve AI diagnostic tools and genomic medicine as a whole. Participants described how large and diverse datasets are needed to build systems that are accurate for diverse populations. This suggests that members of the public may be less concerned about the sharing of genomic data if they know it is going to help others.
Liability for Error Caused by Artificial Intelligence in Genomic Medicine
ML can lead to the development of ‘black box’ systems, meaning that it may not always be clear how or why an AI model gave the results that it did.4,8 It is also possible that, through this process, AI systems may produce results that fall outside our current understanding of the relationship between genetics and health outcomes. In these cases, health professionals may not be able to determine whether the results from AI are accurate or should be trusted. If these results were given to a patient and later found to be inaccurate, it would be difficult to determine who would be liable for any harm caused. Participants in our study were interested to know who would be held accountable for these kinds of errors. However, when presented with this scenario, many participants were unsure, noting that if AI makes its own decisions, responsibility would be difficult to assign.
Discussions amongst our participants suggested that liability may depend on how the error occurred and the purpose for which the technology was used. Participants likened genomic AI to other medical devices: if a medical device were to cause an error because of how it was designed, then those responsible for building the device would be liable for the error. Conversely, if someone were to use the device in an unintended way, or misinterpret the results it gave, then that individual would be liable. No clear preference for who should be held liable emerged in our discussions, which contrasts with the findings of Khullar et al.,26 who found a strong public preference for physicians, rather than the developers of the tool, to be held liable for errors caused by medical AI. Further research is required to determine more precisely how the public believes liability for error caused by genomic AI should be assigned.
The aim of this research was to examine perspectives from a diverse sample of the Australian public, rather than a representative one. As such, some perspectives (e.g., parents of children with genetic conditions) may be overrepresented in our data, whereas others (e.g., men) are underrepresented. The findings of our research could be used to help build a survey on the use of AI in genomic medicine, which could reach a larger and potentially more representative pool of respondents. While the focus groups did discuss broad implications of AI in genomic medicine, the material used to inform participants on the topic prior to discussion focused primarily on AI in the context of reanalysis of genomic data. While this made the concepts discussed more tangible for participants and aided understanding, our findings may not be as applicable to other uses of AI in genomic medicine, such as image-based evaluation of faces or MRI scans to determine gene-phenotype relationships. Future studies with a more direct focus on these specific uses of AI in genomic medicine should be performed to gain a more accurate depiction of the public’s perspective.
As the use of genomic sequencing and periodic reanalysis of genomic data become routinised, the need for AI tools to analyse this growing volume of data is quickly becoming apparent. It is therefore important to understand how the public perceives the use of AI in genomic medicine, to ensure that AI implementation aligns with the public’s interests and that any concerns, both warranted and unwarranted, are addressed before broad acceptance can be expected. Importantly, error and bias in AI systems need to be minimised through the use of large, diverse datasets and de-biasing measures. In addition, AI use should be continually monitored, with outputs regularly checked against those of professionals to ensure accurate results. Further research is needed to determine how the public values these aspects of AI in genomic medicine relative to each other when trade-offs are required.