We demonstrate the practical use of SP methodology for developing implementation strategies for a new EBP: 6 algorithms, developed by Merlin and colleagues,14 that address common and challenging behaviors associated with long-term opioid therapy (LTOT). These algorithms were constructed using Delphi methodology15,16 to reach consensus on how to respond to behaviors such as missing appointments with the clinician prescribing the opioid, taking more opioid than prescribed, and substance use. One algorithm is included as an example in Fig. 1. We conducted SP sessions with providers using 6 cases, one for each algorithm. These SP sessions were followed by one-on-one structured interviews with questions mapping onto domains from the CFIR.
[Figure 1: “Other Substance Abuse” Algorithm]
2.a Case development
We developed 6 SP cases, each simulating a patient exhibiting a distinct concerning behavior addressed by the algorithms (see Table 1). The cases were written with unfolding steps representing three visits with a provider, because the algorithms guide decision points that would normally occur across follow-up visits in real-life practice (Fig. 1). The unfolding structure of the scenarios was piloted early in case development to ensure feasibility.
Table 1
SP cases and the algorithms they address

| Case | Algorithm |
|---|---|
| 1 | Missing Appointments |
| 2 | Taking Opioids for Symptoms Other Than Pain |
| 3 | Using More Opioid Medication Than Prescribed |
| 4 | Asking for an Increase in Opioid Dose |
| 5 | Alcohol Use |
| 6 | Other Substance Use |

The algorithms created by Merlin et al.14 also addressed “Aggressive Behavior”; however, based on feedback from the patient-provider advisory board, we did not include that algorithm in the present study.
Cases were next reviewed by a Patient-Provider Advisory Board (PPAB) consisting of 3 patients with lived experience with opioids, 4 researchers (including PCPs familiar with caring for patients who misuse opioids), and a primary care provider experienced in caring for this population. Cases were edited based on the PPAB's feedback. Alongside the 6 cases, the PPAB reviewed the instructions, which provided context, expectations for patient interactions, and training on the algorithms (see Appendix). Finally, cases and instructions were piloted with an SP and a provider outside the panel: a physician with topical expertise was recruited to interact with SPs portraying two cases, each over three sequential visits, on a remote platform. This pilot helped refine the remaining cases and shaped how participants would be oriented, updated, and guided through the simulations.
2.b SP training
SPs in the University of Pittsburgh School of Medicine SP Program receive foundational training in case portrayal, feedback delivery, supported physical examination, and checklist scoring. This 16-hour onboarding combines active training with guided observation of SP activities. It prepares SPs to recognize and credit learner skill during portrayal and to record it faithfully in assessments.
Four experienced SPs were recruited from the University of Pittsburgh SP program to portray the patients exhibiting misuse behaviors. To allow rotation, redundancy, and information sharing, the SPs worked in pairs for each case, alternating the roles of moderator and patient. When not portraying the patient, the SP served as moderator, noting the passage of time between visits and providing clinicians with inter-visit updates consistent with what they had ordered in the prior visit. A fifth experienced SP was recruited to proctor the event: orienting participants and researchers as they arrived, running the Zoom sessions, and serving as a backup should one of the other SPs be unable to participate. This SP was also given an overview of case content, portrayal, and event structure. SPs received case materials a week before the portrayal date, could ask questions over email, and, in the 45 minutes preceding the simulation, completed case-specific training with SP staff to align portrayal with the parameters provided in the inter-visit updates. The SP program follows the Association of Standardized Patient Educators Standards of Best Practice, which “were written to ensure the growth, integrity, and safe application of SP-based education practices.”9
2.c Description of session for participants
Clinical participants were emailed information and instructions about the event before their session (see Appendix). All sessions were held virtually via Zoom due to the COVID-19 pandemic. Each session began with a brief orientation for participants that included: (1) a brief training in how to use the algorithms; (2) an overview of how to approach the simulated interaction (i.e., as close to real practice as possible); and (3) an overview of the one-on-one interview that would follow to discuss approaches to implementing the management algorithms.
Participants then moved into Zoom breakout rooms to begin their patient encounters. They were given up to 60 minutes to complete the 3 distinct visits with the first patient, followed by a 15-minute break and another 60 minutes for the second patient scenario.
For each 60-minute SP scenario, participants were told that they were about to see a patient who had been cared for by one of their partners (Dr. Williams), who had recently left the practice. Dr. Williams had started the patient on opioid therapy and had an opioid agreement with the patient. Participants were given a copy of Dr. Williams’ last progress note and the opioid agreement before meeting the patient. After reviewing this information, the clinicians joined a Zoom breakout room with the SP portraying their patient. Once the provider ended the first encounter, the portraying SP turned off their camera and, to reflect the passage of time between visits, the moderator gave the clinician the results of any testing they had ordered and any information about the patient that had changed since the previous visit. The provider indicated when they were ready to start the next encounter, and this process was repeated between the second and third encounters.
2.d Data collection: semi-structured interviews
Immediately after interacting with the SPs, each participant completed a one-on-one interview to reflect on and assess the experience and to provide feedback on how the algorithms should ultimately be integrated into practices like theirs. Interviews were conducted by experienced qualitative data specialists from Qualitative, Evaluation and Stakeholder Engagement Research Services (Qual EASE) at the University of Pittsburgh. Interviewers used a semi-structured interview guide developed by the research team that covered four domains: (1) assessment of the orientation to the algorithms, including training; (2) assessment of the interaction with the SPs; (3) assessment of and opinions on the algorithms; and (4) description of how the algorithms would operate in participants’ practices and how they could best be implemented there. Interviews were conducted on Zoom and recorded.
Questions and follow-up probes were used to assess how the algorithms could be implemented in participants’ practices; these questions map onto several CFIR domains and constructs, as shown in Table 2.
Table 2
Mapping of CFIR domains and constructs in interview guide

| CFIR Domain (definition in parentheses) | Selected Constructs from Domain | Corresponding Question(s) |
|---|---|---|
| Innovation (Characteristics of “the thing” being implemented, i.e., the innovation) | Innovation Design | What is your general impression of the algorithms? Probes: What, if anything, about them is useful? What, if anything, about them was unhelpful? |
| | Innovation Adaptability | How would the description (or orientation) of the algorithms that we presented need to be tailored to your practice? What, if any, edits did you find yourself wanting to make to the algorithms? |
| Outer setting (The setting in which the inner setting exists, e.g., the health system and community in which the practice sits) | Local Conditions | What clinic (or health system) level factors would make implementing these algorithms easy? What factors would make it difficult? |
| | Partnerships and Settings | How, if at all, would insurance company policies impact your ability to implement these algorithms? |
| Inner setting (The setting in which the innovation will be implemented, e.g., the individual practice) | Work Infrastructure | How would you fit the algorithms into your workflow at your practice? How would you like to use these algorithms in your day-to-day practice (if you would like to use them; if you wouldn’t, we’d love to hear about that too)? |
| | Relational Connections | Next, I would like to talk about implementation of these algorithms in practices like yours. First, tell me about your practice. Probes: How many providers are there? How many patients are in it? How would you describe your patient base? |
| | Information Technology Infrastructure | What would be the benefits/drawbacks of incorporating the algorithms into the EHR? How would you do it? |
| Individuals (The roles and characteristics of individuals involved in or affected by implementation of the innovation) | Innovation Recipients | How do you think patients will respond to these algorithms, and what types of things do clinicians need to do to make this acceptable to patients? What would make it easy or difficult to implement these algorithms with a given patient? |
| | High-Level Leaders | How could we best get practice and PCP buy-in to implement these algorithms? |
| | Mid-Level Leaders | What would make it easy or difficult for individual PCPs to implement these algorithms? |
| Implementation process (The activities and strategies used to implement the innovation) | Tailoring Strategies | This morning, we provided you with a brief orientation (or “introduction”), or description of the algorithms. What was good about the initial description? What could have been better? How would the description (or orientation) of the algorithms that we presented need to be tailored to your practice? How would you want doctors in your practice to be oriented [trained] to the algorithms? If these algorithms were implemented in your practice, what type of ongoing training, if any, would you and your colleagues want to receive? What kind of interactive assistance would be helpful? (Having a support person in clinic? Over the phone?) What kind of patient-level support would be helpful? (i.e., What kind of supports do you need for your patients in order for you to use the algorithms effectively?) (Infrastructure? Financial?) |
Within one week of each interview’s completion, the project’s qualitative methodologist wrote a summary, which was forwarded to the study team so that they could begin determining what modifications might be needed to the algorithms and could plan for implementation. Following that initial summary, interviews were transcribed verbatim with identifying details redacted. Under the supervision of the qualitative methodologist, experienced Qual EASE analysts inductively developed a codebook reflecting the content of the interviews, with coding categories reflecting the four areas of the interview guide described above. Two Qual EASE coders first practiced applying the codebook to two transcripts; both then applied it to the remaining 10 transcripts. Cohen’s kappa was used to assess intercoder reliability; the average kappa score was 0.8565, indicating “almost perfect” agreement. The primary coder for the project then conducted a content and thematic analysis, which was reviewed by the qualitative methodologist and shared with the study team to facilitate implementation planning.
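For readers who wish to reproduce this type of reliability check, Cohen’s kappa corrects the coders’ observed agreement (p_o) for the agreement expected by chance (p_e): kappa = (p_o − p_e) / (1 − p_e), with values above 0.81 conventionally read as “almost perfect” agreement. Below is a minimal sketch in Python using scikit-learn; the code labels and assignments are hypothetical illustrations, not data from this study.

```python
# Minimal sketch of an intercoder reliability check with Cohen's kappa.
# The code labels and assignments below are hypothetical, not study data.
from sklearn.metrics import cohen_kappa_score

# Parallel lists: the code each coder assigned to the same eight excerpts.
coder_a = ["orientation", "algorithms", "algorithms", "implementation",
           "sp_interaction", "implementation", "orientation", "algorithms"]
coder_b = ["orientation", "algorithms", "sp_interaction", "implementation",
           "sp_interaction", "implementation", "orientation", "algorithms"]

# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is chance agreement given each coder's marginal label frequencies.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.4f}")
```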
2.e Data collection: development of implementation bundle
The final step in developing the implementation bundle, which included materials for initial training, an online algorithm interface, e-consultation support, and electronic health record (EHR) integration for the 6 algorithms, was to review notes from the structured interviews. The bundle was then drafted and reviewed by the PPAB and co-investigators.
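To make the online algorithm interface component concrete, the sketch below shows one way, under our own assumptions, that a branching management algorithm could be encoded as a tree of decision nodes. The Node structure, prompts, and branches are hypothetical placeholders; they do not reproduce the content of the published algorithms.

```python
# Hypothetical sketch of encoding a branching algorithm for an online
# interface. Prompts and branches are placeholders, not the published
# algorithm content.
from dataclasses import dataclass, field


@dataclass
class Node:
    """One decision point in an algorithm: a prompt plus its branches."""
    prompt: str
    branches: dict[str, "Node"] = field(default_factory=dict)  # answer -> next node

    def is_terminal(self) -> bool:
        return not self.branches


# Hypothetical two-step fragment of a management algorithm.
algorithm = Node(
    prompt="Has the behavior occurred before?",
    branches={
        "no": Node(prompt="Counsel the patient and document the discussion."),
        "yes": Node(
            prompt="Schedule a follow-up visit?",
            branches={
                "yes": Node(prompt="Reassess at the follow-up visit."),
                "no": Node(prompt="Consider referral for further evaluation."),
            },
        ),
    },
)


def walk(node: Node) -> None:
    """Traverse the algorithm interactively from the command line."""
    while not node.is_terminal():
        answer = input(f"{node.prompt} ({'/'.join(node.branches)}): ").strip().lower()
        if answer in node.branches:
            node = node.branches[answer]
        else:
            print("Please answer with one of the listed options.")
    print(node.prompt)
```

A tree representation like this keeps each decision point and its follow-up branches explicit, which mirrors how the paper describes the algorithms guiding decisions across sequential visits.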
2.f Recruitment and study sample
Recruitment emails were sent to clinicians at Community Medical Inc. (CMI), a network of 400 primary care and specialty physicians who practice throughout western and central Pennsylvania and provide care for over 495,000 patients. The practices cover a large geographic area, though the network is concentrated in Allegheny County. Participants were required to be primary care clinicians at CMI practices and at least 18 years of age. Each clinician was recruited to participate in two virtual patient evaluations followed by one-on-one interviews. The experience lasted approximately four hours, and clinicians were paid $1,000 for their participation. We ultimately recruited 12 PCPs to participate in the virtual experience.