Evidence-based medicine and research inform the decision-making processes, policies, and services we provide in health care. In an ideal learning healthcare system [22, 32], data-driven clinical decisions will be improved by digital health tools. These tools will be informed by a hierarchy of evidence, from quality improvement initiatives and improvement science through to implementation science. This knowledge-to-performance data cycle [20, 22] allows sense-making, ideally at the patient, provider, and system level, driving better-value healthcare [20, 31, 32]. The values in these systems should therefore embody values found in “other core societal values or health system objectives.” These include accessibility, adaptability, co-operative and participatory leadership, governance, inclusiveness, person-focused care, privacy, scientific integrity, transparency, and value in health care [32, 43]. The values that overlap with concepts of equity, fairness, and solidarity are likely to contribute to a culture of adoption of new learning health systems that matter to society [32, 43].
There remains a paucity of methodological research about how we can best adopt new digital health technologies, such as algorithm-driven clinical decision support system tools, in clinical practice. This includes how to optimise interactive design principles prior to and during implementation, and at the normative integration stage of clinical decision support tools, to minimise failure events. We argue that impactful optimisation [44] of introduced digital health tools such as clinical decision support tools, and their implementation, is a neglected area of research, siloed despite a plethora of long-standing design research and implementation science research in their own right. This is despite design thinking being used in many other industries [19] that require the implementation of decision science principles. For example, Think Aloud experiments are a core part of eliciting the key elements that inform user choice experiments in optimising transport and marketing preferences for technology products [26, 33, 45].
Clinical decision support systems (CDSS) are moving away from rule-based algorithms and are increasingly driven by data-rich, sophisticated artificial intelligence algorithms designed to improve the human user’s interpretative and predictive capabilities. The first principle of a CDSS is to support the clinician’s decision making, which is underpinned by their innate human-centred oath of providing the best possible care and avoiding harm to their patients within their resources and capabilities. A CDSS can provide information that has been computed faster or displayed in a more comprehensive and intuitive manner. The final interpretation of what the data means is ultimately left to the clinician. This retains the notion of the “human in the loop” [35] in clinical decision making.
The focus in delivering clinical decision support has mainly been on what content the tool (the best data-driven algorithm) provides, rather than on how we display that content intuitively so that clinicians can make better decisions. We believe that designing valid content must be prioritised at the early development stage, but that merging user-centred design principles with implementation science, co-designing for the implementation stage (even in the pre-planning of a CDSS), will introduce efficiencies and help to foster acceptability when introducing change. This supports the concept of optimisation in implementation science [36, 44].
It is a well-quoted metric that the gap between evidence generation and practice remains high, at 17 years to move merely 14% of innovations into practice [34]. This is unsurprising, as professionals are required to integrate emerging evidence into their practice, often with no bridge describing how to implement or translate the new evidence. The cognitive load and behavioural changes required in response to new evidence-based technologies are often unexpected and complex.
Researchers seek to understand these insights by using a range of existing technology-based frameworks (see Related Work) and implementation science frameworks (which are also influenced by social cognitive theories and cognitive behavioural neuroscience principles). There remains, however, an under-researched area at the transdisciplinary intersection between interaction design and implementation science [2, 8, 15, 17, 23] for planning early-stage clinical evaluation before a decision is made on deployment of clinical decision support systems. This gap is underscored by a growing recognition of the utility of early-stage clinical evaluations, which require a reporting guideline [42] for clinical decision support.
A Framework from Design to Implementation
We demonstrate a 3D interaction design implementation framework: a new transdisciplinary approach describing a methodology that mixes iterative design evaluation with implementation science evaluation for digital health interventions. This methodology describes the pragmatic implications of applying design principles proactively during the formative stage of an implementation science evaluation, and then applying considered pragmatism from implementation science to iteratively produce a better design for the proposed system or product to be implemented and decided upon. Essentially, we apply design iterations to implementation science, and implementation science formative considerations to the design of a product or system in health, to arrive at a proposed end product or system that is well designed for effective implementation of digital health solutions.
The framework consists of three stages, and though we present it in a linear format in Fig. 1, it is important to note that the approach is iterative and flexible. As we learn from one stage, it may be useful to gather more data, or to re-evaluate the synthesis of ideas from another stage. The model includes the goals for collecting and synthesising insights from data. The first two stages align with a common design mantra: “design the right thing, design the thing right” (see the UK Design Council’s description: [4]), and augment this with a third focus on implementation of the design, which is not frequently combined with the design process.
The Design stage of our model focuses on understanding the problem space, identifying requirements for a brief for the designed intervention, and identifying the goals that users will have when interacting with the design and the tasks that support those goals. In this context, a goal is not something that is “completed” by the user’s interaction with the software, but a high-level objective. The concrete tasks, which have a clear start and finish, support the user in achieving the goal [40]. This stage includes collecting data using a range of design methods (see [41] for more information), such as conducting observations and interviews about how people behave, what their needs are, and the problems that arise from these needs that can be addressed by a design. These data can be synthesised into tools such as sketchnotes or personas [41] that describe the requirements of the design intervention through high-level goals, and tasks that support the user to achieve their goal.
The Develop stage of our model explores how the tool being introduced should be presented to the user: the actual interface for the software. It follows an iterative cycle of gathering data about how the user actually interacts with a functional prototype. Empirical evaluations are carried out to determine whether a specific population of users can carry out specific tasks with the prototype (see [25]), often using methods such as think-aloud testing and standardised usability surveys [41]. The objective here is to identify issues with the interface by collecting data on how the design is used, and to update the interface to resolve the issues identified. These issues could relate to overall usability, or to specific tasks that the user carries out with the interface.
The Decide stage of our model explores which human behaviours and reactions would indicate implementation barriers or acceptance of the newly iterated design. This involves evaluation data at one or more contextual levels: the system, provider, or consumer/patient level. We might ask ‘what if’, ‘what would’, or ‘how do’ questions to identify facilitators in usual practice. Pragmatic, in-the-wild empirical hypothesis testing, using implementation science frameworks, would test whether the perceptions of potential users of the new technology match the actual psycho-social-physical behaviours observed, and would test our assumptions about facilitators for the user. Useful facilitators, both passive and active, contribute to a new schema that would survive the iteration into the next phase of implementation.
This proposed framework addresses a research gap – the interdisciplinary intersection between design thinking and implementation frameworks – that guides the planning required when developing early-stage translational research involving algorithm use in health settings, evaluating clinical utility, safety, and human-computer interaction factors [6, 42]. It is informed by our previous work understanding clinicians’ needs from technology [1] and implementing health technology in the hospital [24, 46].
Case Study: Sepsis Dashboard
The study was conducted using the three phases to evaluate a sepsis risk tool dashboard over a one-year period from March 2022. The aim of the first phase was to create a design formulation informed by an understanding of the context in which the dashboard would be deployed. The aim of the second phase was to understand the impact of the technology itself, with the objective of designing the CDSS appropriately for the context in which it is deployed, iteratively forming an improved design and implementation plan. The aim of the third phase was to evaluate the implementation of the dashboard and identify useful facilitators.
Phase 1: Design Research
The brief for this study was to develop a CDSS tool for the Front of House Coordinator (FOHC) in the ED. The FOHC role is fulfilled by a registered nurse with a minimum of three years’ experience post-specialist ED training. To understand what they needed from a CDSS specifically in this role, an observation exercise was conducted, with two researchers spending one hour on two occasions (four hours in total) observing the FOHC and the triage process. This provided a naturalistic or ‘live’ setting to understand how the FOHC works, and their role within the ED workflow. The qualitative ‘shadowing’ approach employed in this case is useful because it is flexible, does not rely on additional technology being developed for the study, and allows the researchers to record behaviour in situ [10, 30].
The objective of this activity was to understand where the patient data flowed, and the reasoning that the nurses use to make decisions specifically related to sepsis. The researchers were able to ask questions of the nurses. From these activities, notes were compared and synthesised into a series of implications for the design of the CDSS, using a bottom-up approach, as seen in Fig. 2. These were then discussed with the broader research group for feedback. From the observation and subsequent discussion, tasks and goals were developed, which shaped the way that the CDSS was to be used.
For this study the user’s goal was stated as being “to use the interface provided to identify people in the emergency department waiting room who may be at risk of sepsis, to ensure they have an appropriate level of care.” To support this goal, we developed the following tasks for participants:
- Use the interface to identify if there is a patient who you consider to be at risk of sepsis but is not assigned to an appropriate triage category.
- Identify if there is a patient with an elevated risk of sepsis who may need further testing.
- Evaluate the state of the patients in the waiting room, as if you have just started a shift or completed a handover.
A senior-level team of computer science students were given a one-semester project to build a prototype of the dashboard, using an extreme programming (XP) development approach [9, 37] to create a prototype web-based interface for a CDSS, demonstrating the use of interoperable frameworks and displaying synthesised data. The goal of the students creating the dashboard was to understand what the CDSS could do, rather than how it appears.
Fig. 2. The design phase “data sketch”, outlining what was observed and a synthesis of the core problems that the CDSS can help overcome.
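As a minimal illustration of the kind of synthesised, waiting-room-level risk flag such a prototype dashboard might display, the sketch below computes a simple qSOFA-style score from vital signs. The data structure, field names, thresholds, and flagging rule are illustrative assumptions only; they are not the algorithm used in the study dashboard.

```python
# Illustrative only: a qSOFA-style flag computed from waiting-room vital signs.
# Field names, thresholds, and the review rule are hypothetical and are NOT
# the risk algorithm used in the study's sepsis dashboard.
from dataclasses import dataclass


@dataclass
class WaitingRoomPatient:
    label: str
    respiratory_rate: int   # breaths per minute
    systolic_bp: int        # mmHg
    gcs: int                # Glasgow Coma Scale (15 = alert)
    triage_category: int    # 1 (most urgent) to 5 (least urgent)


def qsofa_style_score(p: WaitingRoomPatient) -> int:
    """Count of qSOFA-style criteria met (0-3)."""
    return sum([
        p.respiratory_rate >= 22,   # raised respiratory rate
        p.systolic_bp <= 100,       # low systolic blood pressure
        p.gcs < 15,                 # altered mentation
    ])


def needs_review(p: WaitingRoomPatient) -> bool:
    """Flag patients with an elevated score but a low-acuity triage category."""
    return qsofa_style_score(p) >= 2 and p.triage_category >= 3


if __name__ == "__main__":
    waiting_room = [
        WaitingRoomPatient("Patient A", respiratory_rate=24, systolic_bp=95, gcs=15, triage_category=3),
        WaitingRoomPatient("Patient B", respiratory_rate=16, systolic_bp=130, gcs=15, triage_category=4),
    ]
    for p in waiting_room:
        print(p.label, "score:", qsofa_style_score(p), "review:", needs_review(p))
```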
Phase 2: Design Evaluation
The Design Research phase focused on understanding the problem and the context of the people (users) whose work we will support through the CDSS. The second phase aims to understand and evaluate the way in which the technology may be used by the people who are given access to it.
The web-based prototype dashboard created by the student team was used for an initial heuristic evaluation with researchers and nurse managers in the ED. This activity was facilitated by a human-computer interaction design researcher and employed usability heuristics developed for information dashboards [18]. This process of evaluation also helped identify the specific actions, or tasks, that the FOHC will complete using the dashboard, which is essential for developing the usability testing.
For the usability testing, we employed a think-aloud approach, a popular design method for evaluating a novel interface, both for understanding general patterns of behaviour with a proprietary software interface (inspired by the prototype) and for identifying issues with real (novice) users for troubleshooting purposes [27]. The think-aloud was also supported with three standardised surveys: the Computer Self-Efficacy Questionnaire (CSE), the Single-Ease Questionnaire (SEQ), and the System Usability Scale (SUS) questionnaire, as shown in Fig. 3. The data were collected through a single survey hosted in Qualtrics and facilitated by a researcher. Participants first completed the CSE to communicate their perceived efficacy with digital technology [12], in order to provide a baseline for their own level of comfort with new technology in the work environment. After this, the facilitator introduced the think-aloud protocol itself.
For the think-aloud evaluations, participants (n = 8) were instructed to use the interface while verbalising their internal thoughts. During this time, the facilitator reminded participants to keep talking as necessary and answered any questions, but their role was not to help the participant use the interface. To scaffold the use of the interface, participants were given the high-level goal and the tasks introduced above, to contextualise the use of the dashboard. The goal was introduced before any tasks were completed using the dashboard. Tasks were then given in a random order to reduce bias.
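As a small sketch of how such an ordering might be generated, the snippet below shuffles the three tasks independently for each participant; the participant identifiers and shortened task labels are placeholders, not study data.

```python
# Illustrative sketch: randomise task order independently for each participant
# to reduce ordering bias. Task labels and participant IDs are placeholders.
import random

TASKS = [
    "Identify a patient at risk of sepsis not in an appropriate triage category",
    "Identify a patient with elevated sepsis risk who may need further testing",
    "Evaluate the waiting room as if starting a shift or completing a handover",
]

participants = [f"P{i}" for i in range(1, 9)]  # n = 8
for pid in participants:
    order = random.sample(TASKS, k=len(TASKS))  # a fresh random permutation
    print(pid, "->", [task[:25] + "..." for task in order])
```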
A single-question, post-task usability questionnaire, the SEQ, was used to collect information on the difficulty of each unique task. It is a 7-point Likert-scale response to the question “Overall, how easy or difficult was this task to complete?”, with 1 being “very difficult” and 7 being “very easy.” This measure has been shown to be reliable and understandable for participants in usability evaluations [38]. After the completion of all three tasks, the participants completed the SUS questionnaire [28], a 10-question Likert-scale survey that measures user perceptions of usability, learnability, complexity, and confidence of use [29].
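For reference, SUS responses are conventionally converted to a 0-100 score by rescaling each of the ten items and multiplying the sum by 2.5. The sketch below shows this standard scoring calculation; the sample responses are invented for illustration.

```python
# Standard SUS scoring: ten items rated 1-5; odd-numbered items contribute
# (response - 1), even-numbered items contribute (5 - response); the sum is
# multiplied by 2.5 to give a 0-100 score. Sample responses are invented.

def sus_score(responses: list[int]) -> float:
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for item, response in enumerate(responses, start=1):
        total += (response - 1) if item % 2 == 1 else (5 - response)
    return total * 2.5


# Example: a hypothetical participant's responses to items 1-10.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # 77.5
```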
Users were invited to online meetings using Microsoft Teams, and they shared their screens. Their monologue and interactions with the dashboard were recorded. Insights from this evaluation were used to improve the design of the dashboard, based on feedback from participants, through a bottom-up analysis of the recorded comments and interactions.
Phase 3: Implementation
The PARIHS framework conceptualises successful implementation (SI) as a function (f) of the nature and type of the evidence (E), the quality of the context (C), and the facilitation (F): \(SI=f(E,C,F)\) [5]. This framework informed our prospective planning of the key implementation strategies to be used during the early formative evaluation phase. We used this framework implicitly as part of the study design, and used a participatory action research approach to address facilitators in implementation, with participants involved:
- In planning and delivering an intervention,
- In data analysis,
- In the evaluation of study findings, and/or
- In any other way.
The methods described in Phases 1 and 2 are used throughout the development of the interactive interface, as a formative design evaluation, to identify factors that may hinder the use or uptake of the system we are designing by the people who will ultimately use it. In Phase 3, we evaluated how, or whether, the user feedback research was incorporated and translated into practice to help with the development and implementation of the risk tool dashboard. The implementation phase has a comprehensive plan of research translation activities that require formative evaluation. The evaluation measures whether the intended impact of the planned implementation occurred, or whether barriers to translation of the planned implementation exist. If the latter occurs, an iterative cycle of addressing the barriers is activated, implemented, and re-evaluated.