Participants believed that a relative advantage of AI as an innovation to be implemented in healthcare lies in its effective and comprehensive management of large volumes of data from different sources, in particular data from the organization's own data warehouse. Participants saw AI as part of a necessary development, because healthcare would not, in the long run, be able to keep up with the population's healthcare needs otherwise. The application of AI technology was thought to enable decision-makers to allocate resources where they are most needed in the organization. Management decisions on organizational changes in primary care units and hospitals were considered to be supported through the aggregation of outcome data from various care activities at multiple sites.
The participants also perceived AI's potential for supporting professional decision-making in clinical care. AI was specifically perceived to be able to contribute through its ability to analyze images from digital imaging systems with high precision and time efficiency. The capacity of AI was perceived to be superior to the analytical ability of even highly experienced clinicians, in that it was not only more efficient and precise but also less biased.
AI was not perceived as replacing the need for human interaction between caregivers and patients, as human interaction provides other information, such as the patient's preferences and state of mind. On the other hand, the participants thought that AI could be equivalent to a colleague providing a "second opinion" in situations of uncertainty.
The participants described that AI could serve as a warning or a “yellow flag” for alerting healthcare professionals to clinical data that needs to be taken into account given a certain situation and specific conditions. The participants considered this uncontroversial, as they saw AI as just another tool to help healthcare professionals in their clinical work.
The participants had great expectations for the possible AI-based applications that could come in the future. "Digital triage" was perceived as an attractive idea to empower patients in their own care and to achieve more effective care by enabling self-help for some patients. This was expected to free up more time for vulnerable patients. They also envisaged that standard health information could be collected from the patient and an AI-informed selection of laboratory tests could be completed prior to the primary healthcare visit, making the patient-provider encounter more informed and time-efficient.
Another aspect of perceived relative advantage was the potential of AI for discovering previously unknown patterns of care flow and its early detection of disease, facilitating health predictions for individual patients or groups of patients at risk. The participants highlighted the AI algorithm's ability to impartially discern clinical patterns based on multiple data sources without the need for prior clinical training, preunderstanding, and assumptions.
Regarding intervention source, the participants thought that AI would primarily be internally developed in the near future. They thought that their organization had some readiness to develop AI because of a relatively long history of investing in AI. Strategic leadership in the county council was perceived to have supported the AI development and research early on, which led to a perception of local ownership of the AI development in the county council. AI as an innovation was perceived as a “hot topic”, and collaborations with universities and other actors such as companies were seen as strategic for the county council to take advantage of this “window of opportunity”. Some even expected local healthcare professionals to actively participate in the development of AI-based applications for use in their own field of interest.
Networking around the use of AI within the larger national healthcare system was perceived as a slower and more cumbersome process than regional collaboration with specific tech companies. However, the participants believed that many tech companies were not equipped to follow the accepted quality and safety standards in healthcare, which led to hesitations about relying on them.
Although the participants rated evidence strength and quality to be of key importance for the implementation of AI, they perceived the current evidence to be highly questionable. The participants felt they lacked control over the long chains of data processing and perceived that they had no insight into which processing decisions were made along the way, for which reasons, and by whom. Because data were transformed between systems, aggregated, and repackaged, the participants thought the original data would become increasingly difficult to retrieve and use.
The participants perceived that the mathematical complexity of AI prevented an easy understanding of the reasoning behind the information presented by AI; they characterized AI as a "little black box". One of the reasons for questioning the evidence strength of AI-based applications was that the knowledge base and data behind AI could not be verified in traditional, transparent ways, such as reading up on relevant scientific research findings.
The participants perceived AI to have a degree of adaptability, but they also believed it to fit more naturally in some clinical contexts than in others. Care units with medical imaging techniques as an important work tool were perceived as being especially prepared for making changes towards AI-based diagnostic support. Areas that were highlighted were radiology, pathology, clinical laboratory medicine, and dermatology. The participants thought that using AI would encounter barriers in other work units because of the perceived need to protect sensitive personal data.
AI was perceived to be a potentially useful tool in the future when healthcare professionals meet a patient with diffuse symptoms that have presented in different forms over time, such as in mental illness. By tracking a patient's history, AI could provide a comprehensive picture of the patient's health status, which in turn could facilitate a better understanding of the problem.
When discussing trialability, the participants suggested testing AI on a small scale in the organization but did not discuss any ways of retracting the implementation if necessary. The current use of AI for managing care in the organization was experienced as gaining importance and as being tested in an ongoing process. The participants expected to be able to test a small number of AI-based applications in clinical contexts within the next few years, partly because they had observed that the technical infrastructure for AI had been prepared "behind the scenes" in the IT systems.
Diagnostic tools in the shape of AI-based digital imaging tools were seen as feasible for testing in clinical use, but the participants also perceived that it would be necessary to create more opportunities for testing other AI algorithms developed for use in clinical care processes. The participants tentatively discussed where new AI-based applications could be appropriate and feasible, e.g., in situations that are a step away from patient-provider encounters. They also reasoned about the usefulness of AI in situations of broader diagnostic uncertainty, for example, in primary care consultations. Some participants perceived that AI is already more or less informally present in some clinical contexts.
The participants perceived that the design quality and packaging of AI will be important for the future implementation. They imagined that AI applications in healthcare need the AI component to be as simple to use as possible while at the same time being designed to target complex problems that healthcare professionals need help in solving.
The participants perceived that most healthcare professionals currently had little knowledge about AI, with the technology having “brave new world” connotations for them. The participants mentioned that limited trust in AI technology could be circumvented by providing objective background information about the product, including its limitations.
The participants were not convinced that the healthcare organization itself could manage to design the AI applications without external expertise. They perceived that developing the AI models and algorithms was not sufficient and that the technical functions of AI needed to be integrated into user-friendly products for healthcare professionals to use. They also perceived that future IT infrastructure development was necessary for integrating AI into IT systems for ease of use. The participants believed that AI could have different designs based on the same data but tailored to users of different professional backgrounds and patients with different levels of digital literacy and health literacy.
The participants perceived multifaceted complexity in the implementation and use of AI. They believed that there are many competing and occasionally conflicting opinions about what AI is and is not. Decisions about investments in AI were the remit of top-level management, but the participants expressed a lack of guidance for decisions connected to AI. They wanted their decisions to be based on a thorough knowledge of the AI technology itself and which type of problems it can be expected to solve. However, they did not find it clear which criteria should be used for decision-making about how and where to start applying it.
The participants highlighted that collecting large volumes of data was not realistic at present, as health data were fragmented in the system and current IT systems were not mutually compatible. Sharing data and exchanging knowledge between county councils was expected to be difficult, as different county councils make independent choices concerning how to build their data warehouses and which technologies and suppliers to invest in. The participants perceived risks of privacy violation when managing, monitoring, and storing large volumes of sensitive health data from many different sources, a process involving multiple IT systems, numerous staff in technical and medical capacities, and storage in commercial facilities. They also said that current legislation prohibits data sharing between different caregiving agencies in county councils and municipalities.
The participants perceived that implementing AI is complex and will impact patients and healthcare professionals. AI will need to be adapted to many factors, including different levels of digital literacy, professional fields of interest, and levels of technological know-how, and will involve leadership at all levels. The participants had experienced that change processes tend to move very slowly in healthcare. In addition to professional change resistance and organizational barriers, a high level of skepticism about AI was to be expected. They thought that healthcare professionals could experience AI as alien and as a challenge to their professional role.
Not being fully cognizant of the scope and depth of the knowledge embedded in AI was thought to have consequences for patient safety in clinical practice. The participants perceived that there were risks of staff becoming overly reliant on knowledge provided by AI, which could lead to more limited use of clinical reasoning. They highlighted that repeatedly exercising professional judgment was necessary for developing clinical expertise over time. This was seen as especially important for younger professionals, but even more experienced clinicians could risk becoming overly confident in AI-informed decision support.
Some participants perceived that it was possible to take the positive results of using AI as proof of its effectiveness, but highlighted the risk of bias both in the data feeding into the technology and in how health data were processed. They felt strongly about the need for quality and safety control of AI-based applications, considering the consequences of faulty and skewed data processing in AI and the large potential impact of AI imperfections on management and health outcomes.
The participants expected that societal debate would need to precede some of the transformative changes for healthcare professionals and patients. The complexity also included that AI might provide healthcare professionals and patients with previously unavailable information, which they are currently unequipped and unprepared to deal with. They thought that healthcare would change profoundly towards being more prevention-focused in the future, with citizens expected to be in charge of managing their own health.
The participants could not estimate the costs of AI technology at present but perceived that no state-allocated resources were available for larger-scale implementation and roll-out of AI. The participants had varying perceptions of the success of the organization's efforts in developing AI so far: some applauded the current and past AI development efforts, while others questioned what the resources the organization had spent on AI development had actually yielded.
The participants were uncertain about the level of costs involved in the future larger-scale implementation of AI but feared that some currently ongoing research and development projects could suffer. Although the purchase of AI products and IT system capacity was thought to be costly, some participants thought that the IT infrastructure was ready and able to accommodate AI, which would alleviate costs. The cost of product development by external companies was perceived as a barrier to implementation in the short term, as current procurement procedures do not apply to AI. In the longer perspective, the participants expected that the organization could incur financial costs for the purchase, support, and maintenance of AI technology. Future cost projections were also perceived to potentially include the recruitment of AI-competent staff.