Cognitive Walkthrough for Implementation Strategies (CWIS) Overview
The Cognitive Walkthrough for Implementation Strategies (CWIS; pronounced "swiss") is a streamlined walkthrough method adapted to evaluate complex, socially-mediated implementation strategies in healthcare. As described below, CWIS is pragmatic [37] and uses a group-based data collection format to maximize the efficiency of information gathering. Importantly, CWIS differs from standardized patient walkthroughs, which are used to evaluate research study procedures (such as consent or screening [38]) or to assess the degree to which existing clinical services or health systems are responsive to patient needs [39]. Such approaches do not provide clear methods for specifying and prioritizing areas for implementation strategy redesign. The CWIS methodology includes six steps: (1) determine preconditions; (2) hierarchical task analysis; (3) task prioritization; (4) convert tasks to scenarios; (5) pragmatic group testing; and (6) usability issue identification, prioritization, and classification (see Figure 1).
Example application: Post-training consultation. Below, our descriptions of the CWIS steps are each followed by a brief application to a commonly-used implementation strategy: post-training, expert consultation for clinicians. Consultation involves ongoing support from one or more experts in both the innovation being implemented and the implementation process [5,40]. Given that studies consistently document that initial training alone is insufficient to effect changes in professional behavior [10,41,42], post-training consultation has become a cornerstone implementation strategy for supporting the adoption and use of evidence-based practices (EBPs) in mental health [43,44]. In our example, CWIS was used to evaluate a brief (2-8 week) consultation strategy intended for school clinicians who had recently completed a self-paced online training in measurement-based care (MBC). The consultation strategy included (1) weekly use of an asynchronous message board to further support knowledge gains and accountability, as well as (2) live, biweekly group calls to discuss cases, solidify skills, and support the application of MBC practices with clinician caseloads.
Step 1: Determine Preconditions for the Implementation Strategy
Preconditions reflect the situations under which an implementation strategy is likely to be indicated or effective [45]. In CWIS, articulation of preconditions (e.g., characteristics of the appropriate initiatives, settings, individuals, etc.) is necessary to ensure a valid usability test of the target implementation strategy. Explicit identification of end users is a key aspect of precondition articulation and a hallmark of human-centered design (HCD) processes [46]. Studies indicate that, in the absence of explicit user identification processes, product developers tend to underestimate user diversity and, consequently, base designs on individuals like themselves [47,48]. In CWIS, if preconditions for implementation strategies are not met, the scenarios or users with which the strategy is applied in subsequent steps will be non-representative of its intended application. For instance, the strategy "change accreditation or membership requirements" [5] may require as a precondition clinicians or organizations who are active members of relevant professional guilds.
Example application. When applied to post-training consultation for MBC in the current project, the research team identified individual-level preconditions that made clinicians appropriate candidates to receive the consultation strategy. These included that clinicians provided mental health services in the education sector for some or all of their professional time; had expressed (by way of their participation) an interest in adopting MBC practices; and had previously completed the online, self-paced training in MBC practices that the consultation model was designed to support. Detailed personas (i.e., research-based profiles of hypothetical users and use case situations [49]) were developed to reflect the identified target users.
Step 2: Hierarchical Implementation Strategy Task Analysis
Hierarchical task analysis includes identifying all tasks and subtasks that have independent meaning and collectively compose the implementation strategy [50]. Tasks may be behavioral/physical (e.g., taking notes; speaking) or cognitive (e.g., prioritizing cases) [51,52]. Cognitive tasks are groups of related mental activities directed toward a goal [53]. These activities are often unobservable, but are frequently relevant to the decision making and problem-solving activities that are central to many implementation strategies. In CWIS, tasks, subtasks, and task sequences (including those that are behavioral and/or cognitive) are articulated by individuals with knowledge of the strategy by asking a series of questions: First, for each articulated larger task or task category, asking “how?” can facilitate the identification of subtasks. Second, asking “why?” for each task surfaces information about how activities fit into a wider context or grouping. Third, asking “what happens before?” and/or “what happens after?” can allow aspects of task temporality and sequencing to emerge. All tasks identified in Step 2 can be represented either as a table or as a flow chart.
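To make the output of a hierarchical task analysis concrete, the sketch below shows one way a team might record tasks and subtasks as a simple tree and render it as an indented outline. The task names and fields are hypothetical illustrations, not items from the MBC consultation protocol.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """One node in a hierarchical task analysis."""
    name: str
    kind: str                                       # "behavioral" or "cognitive"
    subtasks: List["Task"] = field(default_factory=list)

def print_hierarchy(task: Task, depth: int = 0) -> None:
    """Render the task tree as an indented outline (a textual flow chart)."""
    print("  " * depth + f"- {task.name} ({task.kind})")
    for sub in task.subtasks:
        print_hierarchy(sub, depth + 1)

# Hypothetical example: one task surfaced by asking "how?" of a larger activity.
prepare_call = Task("Prepare for consultation call", "cognitive", [
    Task("Prioritize cases to present", "cognitive"),
    Task("Take notes on client progress", "behavioral"),
])
print_hierarchy(prepare_call)
```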
Example application. Tasks reflected in the existing MBC consultation model and tested in the CWIS study were originally informed by the core consultation functions articulated by Nadeem, Gleacher, and Beidas [40] (including continued training, problem-solving, engagement, case applications, accountability, adaptation, mastery skill building, and sustainment planning). Members of the project team with expertise in clinical consultation procedures identified the tasks and subtasks in the consultation model via an iterative and consensus-driven process that involved task generation, review, and revision. In this process, a task analysis of the protocol was completed using the three questions described above. Tasks were placed in three categories, depending on whether they related to live consultation calls, interactions with the asynchronous message board, or work required between consultation sessions. A list of hierarchically-organized tasks was distributed to the rest of the developers of the consultation protocol for review and feedback. The first author then revised the task list and distributed it a second time to confirm that all relevant tasks had been identified. A number of tasks were added or combined through this process to produce the final set of 24 unique tasks for further review and prioritization in Step 3 (see Table 1).
Step 3: Task Prioritization Ratings
Owing to the complexity of implementation strategies, it is rarely feasible to conduct a usability evaluation that includes their full range of tasks. In CWIS, tasks are prioritized for testing based on (1) the anticipated likelihood that users might encounter issues or errors when completing a task, and (2) the criticality or importance of completing the task correctly. Likert-style ratings for each of these two dimensions are collected, ranging from "1" (unlikely to make errors/unimportant) to "5" (extremely likely to make errors/extremely important). These ratings should be completed by individuals who have expertise in the implementation strategy, the context or population with which it will be applied, or both. Tasks are then selected and prioritized based on both of these ratings simultaneously, as illustrated in the sketch below.
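As a worked illustration of this selection logic, the following sketch averages each dimension across raters and ranks tasks on the two dimensions combined. All tasks and ratings are hypothetical, and teams may reasonably combine the dimensions differently (e.g., by their product rather than their sum).

```python
from statistics import mean

# Hypothetical rater data: task -> ([error-likelihood ratings], [importance ratings]),
# each rating on the 1-5 Likert scales described above (four raters shown).
ratings = {
    "Prioritize cases to present":   ([4, 5, 4, 4], [5, 4, 5, 5]),
    "Log into message board":        ([4, 4, 5, 4], [3, 4, 3, 3]),
    "Take notes on client progress": ([2, 2, 3, 2], [3, 3, 2, 3]),
}

def priority(task: str) -> float:
    """Combine mean error likelihood and mean importance (here, their sum)."""
    errors, importance = ratings[task]
    return mean(errors) + mean(importance)

# Rank tasks so those rated high on BOTH dimensions rise to the top.
for task in sorted(ratings, key=priority, reverse=True):
    errors, importance = ratings[task]
    print(f"{task}: error likelihood={mean(errors):.2f}, importance={mean(importance):.2f}")
```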
Example application. Tasks identified in Step 2 were reviewed and rated by four members of the research team with extensive experience in post-training consultation and MBC practices. Mean importance/criticality and error likelihood ratings were calculated across respondents (Table 1). Across tasks, the two ratings were correlated at r = 0.71. Top-rated tasks (i.e., those with high ratings on both importance and error likelihood) were selected for testing and used to drive scenario development (see below). One highly-rated task ("Log into message board") was deprioritized because it is a fully digital process that could be readily evaluated with a more traditional usability evaluation, rather than one designed for more complex behavioral or cognitive components. In all, Step 3 resulted in five consultation tasks being identified for testing in the CWIS process.
Step 4: Convert Top Tasks to Testing Scenarios
Task-based, scenario-driven usability evaluations are a hallmark of HCD processes. Once the top tasks (approximately 4-6) have been identified, they need to be represented in an accessible format for presentation and testing in cognitive walkthroughs. In CWIS, tasks from Step 3 are used to develop overarching scenarios that provide important background information and contextualize the tasks. Scenarios are generally role-specific, so the target of an implementation strategy (e.g., clinicians) might be presented with a different set of scenarios and tasks than the deliverer of an implementation strategy (e.g., expert consultants). CWIS scenarios provide contextual background information on timing (e.g., “it is the first meeting of the implementation team”), information available (e.g., “you have been told by your organization that you should begin using [EBP]”), or objectives (e.g., “you are attempting to modify your clinical practice to incorporate a new and innovative practice”). Tasks are sometimes expanded or divided into more discrete subtasks at this stage. Some scenarios might contain a single subtask while other scenarios might have multiple subtasks. Regardless, each scenario presented in CWIS should include the following components to ensure clear communication to participants: (1) a brief written description of the scenario and subtasks, (2) a script for a facilitator to use when introducing each subtask, and (3) an image or visual cue that represents the scenario and can be used to quickly communicate the subtasks’ intent.
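For teams that manage their testing materials electronically, one hypothetical way to organize the three required components of each scenario is sketched below; all field names and contents are illustrative rather than taken from the study materials.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Subtask:
    description: str         # brief written description of the subtask
    facilitator_script: str  # what the facilitator says when introducing it

@dataclass
class Scenario:
    background: str          # timing, available information, and objectives
    image_path: str          # visual cue that quickly communicates intent
    subtasks: List[Subtask]  # one or more subtasks per scenario

example = Scenario(
    background="It is the first biweekly group consultation call after training.",
    image_path="scenarios/first_call.png",  # hypothetical asset path
    subtasks=[Subtask(
        description="Decide which case to present on the call.",
        facilitator_script="Imagine you have ten minutes before the call begins...",
    )],
)
print(example.subtasks[0].description)
```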
Example application. Based on the prioritized tasks, the research team generated six scenarios for CWIS testing. These scenarios reflected common situations that users would be likely to encounter when participating in consultation. Each scenario contained 1-3 specific subtasks. Figure 2 displays an example scenario and its subtasks whereas Additional File 1 contains all scenarios and subtasks used in the application of CWIS to MBC consultation procedures.
Step 5: Group Testing with Representative Users
In Step 5, the testing scenario materials developed in Step 4 are presented to small groups of individuals (i.e., 4-6) who represent target users of the implementation strategy. User characteristics identified in Step 1 (Determine Preconditions) should guide the recruitment of users who reflect primary user groups, or the core individuals who are expected to use a strategy or product [46,54]. The primary users of implementation strategies often include both the targets of those strategies as well as the implementation practitioners who deliver them. For instance, testing components of a leadership-focused implementation strategy (e.g., Leadership and Organizational Change for Implementation; [55]) could include representative leaders from the organizations in which the strategy is likely to be applied as well as leadership coaches from the implementation team. If a broader strategy (e.g., change record systems; [5]) is selected for testing, then multiple groups reflecting different user types may need to be recruited (e.g., clinicians, supervisors, patients). Regardless, it is advantageous to construct testing groups that reflect single user types to allow for targeted understanding of their needs. In addition to primary users, secondary users (i.e., individuals whose needs may be accommodated as long as they do not interfere with the strategy's ability to meet the needs of primary users) may also be specified.
CWIS sessions are led by a facilitator and involve presentation of a scenario/subtask, quantitative ratings, and open-ended discussion, with notes taken by a dedicated scribe. CWIS uses note takers instead of transcribed audio recordings to help ensure that it is pragmatic and efficient. First, each scenario is presented in turn to the group, followed by its specific subtasks. For each subtask, participants reflect on the activity, have an opportunity to ask clarifying questions, and then respond to three items about the extent to which they anticipate being able to (1) know what to do (i.e., discovering that the correct action is an option), (2) complete the subtask correctly (i.e., performing the correct action or response), and (3) learn that they have performed the subtask correctly (i.e., receiving sufficient feedback to understand that they have performed the right action). Participants independently record these ratings on a rating form (Additional File 2) using a 1-4 scale: 1 (a very small chance of success), 2 (a small chance of success), 3 (a probable chance of success), or 4 (a very good chance of success). Next, participants sequentially provide verbal justifications or "failure/success stories," which reveal the assumptions underlying their rating choices [28]. Any anticipated problems that arise are noted, as are any assumptions made by the participants surrounding the strategy, its objectives, or the sequence of activities. Finally, having heard each other's justifications for their ratings, the participants engage in additional open-ended discussion about the subtask and what might interfere with or facilitate its successful completion. During this discussion, note takers attend specifically to additional comments about usability issues for subsequent classification and prioritization in Step 6.
At the conclusion of a CWIS session, participants complete a quantitative measure designed to assess the overall usability of the implementation strategy. A wide variety of quantitative measures exist to identify usability problems for digital products, but none have been designed for implementation strategies. For CWIS, our research team adapted the widely-used 10-item System Usability Scale [56,57] for use with implementation strategies. The resulting Implementation Strategy Usability Scale (ISUS; Additional File 3) is CWIS’ default instrument for assessing overall usability and efficiently comparing usability across different strategies or iterations of the same strategy.
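The section above does not detail the ISUS scoring procedure; because the ISUS is adapted from the 10-item System Usability Scale, one plausible scoring sketch, assuming standard SUS conventions (alternating positively and negatively worded items rated 1-5, with the total rescaled to 0-100), is shown below. Treat the scoring rule and the example responses as assumptions rather than the published instrument's specification.

```python
def sus_style_score(item_ratings):
    """Score ten 1-5 ratings using standard SUS conventions (assumed for the ISUS):
    odd-numbered items are positively worded (rating - 1), even-numbered items are
    negatively worded (5 - rating); the summed contributions are rescaled to 0-100."""
    assert len(item_ratings) == 10, "expects exactly ten item ratings"
    total = 0
    for i, rating in enumerate(item_ratings, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Hypothetical responses from one participant to the ten ISUS items:
print(sus_style_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 1]))  # -> 82.5
```

Under these assumed conventions, higher scores indicate better perceived usability, which supports the efficient comparisons across strategies or iterations noted above.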
Example application. Potential primary users included both clinicians and MBC expert consultants, but only clinicians were selected for testing given the modest goals of the CWIS pilot and because the deliverers of the consultation protocol (i.e., expert consultants) were already directly involved in its development. CWIS participants (n=10) were active mental health clinicians who primarily provided services in K-12 education settings and had completed a self-paced, online training in MBC (see Step 1: Preconditions). Participating clinicians came from a variety of school districts and agencies, were 90% female, and had been in their roles for 2-18 years. Table 2 displays all participant demographics. Human subjects approval was obtained from the University of Washington Institutional Review Board and all participants completed standard consent processes.
A facilitator conducted two CWIS sessions (including 4 and 6 clinicians, respectively) and guided each group through the six scenarios and eleven associated subtasks (Additional File 1). As detailed above, users rated each subtask on their anticipated likelihood of discovering the correct action, performing that action correctly, and knowing whether their action had succeeded. Average success ratings for each subtask were calculated as the mean across the three items and all user ratings, and were incorporated into a matrix cross-walking the team's original importance ratings with the success ratings generated by users.
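A minimal sketch of this aggregation follows, with entirely hypothetical subtasks and ratings: each subtask's success score is the mean over the three items and all participants, then paired with the team's Step 3 importance rating to form the cross-walk.

```python
from statistics import mean

# Hypothetical data: subtask -> one (know, do, learn) rating tuple per participant,
# each item on the 1-4 anticipated-success scale described in Step 5.
success_ratings = {
    "Decide which case to present": [(3, 3, 2), (4, 3, 3), (2, 3, 3)],
    "Post a question to the message board": [(4, 4, 3), (3, 4, 4), (4, 4, 4)],
}

# Team's mean importance ratings from Step 3 (1-5 scale; hypothetical values).
importance = {
    "Decide which case to present": 4.75,
    "Post a question to the message board": 3.50,
}

# Cross-walk mean anticipated success (over all items and users) with importance.
for subtask, tuples in success_ratings.items():
    success = mean(r for t in tuples for r in t)
    print(f"{subtask}: importance={importance[subtask]:.2f}, success={success:.2f}")
```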
Next, clinicians provided open-ended rating justifications and engaged in additional group discussion, including describing why some subtasks were considered more difficult than others and what aspects of subtasks they found particularly confusing or difficult. Discussion was recorded by the note taker for subsequent synthesis by the research team. Following the walkthrough sessions, users completed the ISUS in reference to all aspects of the consultation protocol to which they had been exposed.
Step 6: Usability Issue Identification, Prioritization, and Classification
Within CWIS, usability issues are identified, prioritized, and classified using a structured method to ensure consistency across applications. All usability issues are identified based on the results of Step 5 testing.
Identification and prioritization. In CWIS, identification of usability issues occurs in accordance with recent guidance articulated by the University of Washington ALACRITY Center [4,58] for articulating usability issues for complex psychosocial interventions and strategies. Specifically, usability issues should include (1) a brief description (i.e., a concise summary of the issue, focused on how the strategy fell short of meeting the user's needs and its consequences), (2) severity information (i.e., how problematic or dangerous the issue is likely to be on a scale ranging from 0 ["catastrophic or dangerous"] to 4 ["subtle problem"], adapted from Dumas and Redish [59]), (3) information about scope (i.e., the number of users and/or number of components affected by an issue), and (4) indicators of its level of complexity (i.e., how straightforward it is to address [low, medium, high]). The consequences of usability issues (a component of issue descriptions) may either be explicitly stated by participants or inferred during coding. Determinations about severity and scope are informed by the extent to which usability issues were known to impact participants' subtask success ratings (Step 5). Usability issues that are severe and broad in scope are typically the most important to address; those that are also low in complexity may be prioritized for the most immediate changes to the strategy because they are likely the easiest to improve [60].
Classification. In CWIS, all identified usability problems are classified by the research team using a consensus coding approach and a framework adapted from the enhanced cognitive walkthrough articulated by Bligård and Osvalder [28]. The first category includes issues associated with the user (U), meaning that the problem is related to the experience or knowledge a user has been able to access (e.g., insufficient information to complete a task). Second, an implementation strategy usability problem may be due to information being hidden (H) or insufficiently explicit about the availability of a function or its proper use. Third, issues can arise due to sequencing or timing (ST), which relates to when implementation strategy functions have to be performed in an unnatural sequence or at a discrete time that is problematic. Fourth, problems with strategy feedback (F) are those where the strategy gives unclear indications about what a user is doing or needs to do. Finally, cognitive or social (CS) issues are due to excessive demands placed on a user’s cognitive resources or social interactions. Usability issue classification is critical because it facilitates aggregation of data across projects and allows for more direct links between usability problems and potential implementation strategy redesign solutions. For instance, user issues may necessitate reconsideration of the target users or preconditions (e.g., amount of training/experience) whereas cognitive or social issues may suggest the need for simplification of a strategy component or enhanced supports (e.g., job aids) to decrease cognitive burden. Categories are not mutually exclusive, so a single usability issue may be classified into multiple categories as appropriate.
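To make the issue records and coding scheme concrete, the sketch below encodes hypothetical usability issues using the fields and categories just described, and orders them using one defensible reading of the prioritization guidance above (most severe, broadest in scope, and least complex first). The issues, values, and sort rule are illustrative assumptions, not study data.

```python
from dataclasses import dataclass
from typing import Set

# U = user, H = hidden, ST = sequencing/timing, F = feedback, CS = cognitive/social
CATEGORIES = {"U", "H", "ST", "F", "CS"}

@dataclass
class UsabilityIssue:
    description: str      # how the strategy fell short and with what consequences
    severity: int         # 0 = catastrophic or dangerous ... 4 = subtle problem
    scope: int            # number of users and/or components affected
    complexity: str       # "low", "medium", or "high" effort to address
    categories: Set[str]  # not mutually exclusive; subset of CATEGORIES

    def __post_init__(self):
        assert 0 <= self.severity <= 4 and self.categories <= CATEGORIES

COMPLEXITY_ORDER = {"low": 0, "medium": 1, "high": 2}

def redesign_priority(issue: UsabilityIssue):
    """Sort key: most severe (lowest number), broadest scope, lowest complexity first."""
    return (issue.severity, -issue.scope, COMPLEXITY_ORDER[issue.complexity])

issues = [
    UsabilityIssue("Unclear how to prepare a case for the call", 1, 8, "low", {"U", "H"}),
    UsabilityIssue("Message board reply timing feels unnatural", 3, 4, "medium", {"ST", "F"}),
]
for issue in sorted(issues, key=redesign_priority):
    print(issue.description)
```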
Example application. Following testing, the ratings and notes from each CWIS session were reviewed and analyzed by members of the research team, who independently identified usability issues and then met to compare their coding, refine the list, and arrive at consensus judgments [61]. Next, they independently rated issue severity and complexity. Outcomes of the application of CWIS Step 6 to the MBC consultation protocol are presented in the results below.