Background
Although image-based AI has played a substantial role in cancer research, its impact on clinical practice has so far been limited. Physicians' trust in AI, and its wider acceptance, remain low owing to its "black-box" nature, which raises liability questions concerning its use in clinical contexts.
Methods
To understand the barriers to AI adoption, and to inform future discourse on the human-centric and ethical design of AI, we designed and conducted semi-structured interviews with seven imaging experts in the oncological domain.
Results
Despite the small sample size, data saturation was achieved, with concordant needs and recommendations emerging across interviews. Our findings demonstrate the divergent nature and focus of clinical and research practices, each with differing AI needs. AI is afforded a peripheral yet crucial role as a "decision helper" that can enable oncologists and related imaging specialists (i.e. radiologists, radiation oncologists, and nuclear medicine physicians) to push the boundaries of biological reasoning in treating cancers. Furthermore, our interviewees emphasized the need to embed ethics and liability considerations in the design of AI systems, and to develop educational opportunities for AI and cancer experts that enable an integrative vision of image-based AI. To this end, specific design guidelines are provided to inform both Human-Centered Design and AI researchers, so that the context-sensitive concerns and challenges around the adoption of intelligent interactive technologies in cancer care can be meaningfully addressed.
Conclusions
The existing impact of AI on clinical practice is limited compared with its impact on clinical research. In the future, AI is afforded the peripheral role of a "decision helper", which may enable doctors to better understand the peculiarities and subtleties of cancers and support them in developing novel treatment methods. Finally, to build physicians' trust in AI and its wider acceptance in clinical oncology, designers will have to address the ethical and liability concerns surrounding the use of AI systems.