Data Collection & Analysis
An overview of the steps taken to collect and analyze the data for the three research questions is shown in Fig. 1. Below, we describe the specific methods used for each research question.
Research Question 1
What are the most important strategies used by the value champions?
Data Collection: We conducted a developmental evaluation of the fellowship program by collecting qualitative data from eight sources: 1) posts on a nonpublic social media platform established for the fellowship program, 2) interviews with individuals in each fellow's organization who worked with them on their project, 3) a midpoint focus group with the fellows, 4) notes from a midpoint review of data and observations by members of the fellowship training program project team, 5) mentor-mentee notes from monthly check-ins, 6) notes from the monthly webinars, 7) notes from meetings with the faculty to develop and refine the training curriculum, and 8) notes from the Capstone Meeting at which fellows presented the results of their projects. We used template analysis to analyze documents from these eight sources, employing a code list drafted by LP and iteratively refined and agreed upon by the project team (22). We developed five coding memos focused on central aspects of the fellows' projects and fellowship experience: 1) project implementation strategies, 2) sequencing of project steps, 3) training needs and gaps, 4) lessons learned, and 5) insights into preparing new clinical value champions.
Analysis: Guided by the Expert Recommendations for Implementing Change (ERIC) compilation of intervention strategies, two team members (MP, LP) reviewed the five coding memos to identify strategies used by the fellows during their projects (1). Strategies from the coding memos that appeared to match items in the ERIC taxonomy formed an initial list that MP and LP revised and finalized through discussion. Twelve of the 73 ERIC strategies were found to be represented across the fellows’ projects: audit and provide feedback, build a coalition, conduct educational meetings, conduct educational outreach visits, conduct local consensus discussions, conduct a local needs assessment, develop a formal implementation blueprint, provide facilitation, inform local opinion leaders, intervene with patients/consumers to enhance uptake and adherence, involve patients/consumers and family members, and use clinical reminders.
Next, we surveyed the six fellows, asking them to rank-order the 12 strategies by their importance to the success of the projects (with 1 = most important and 12 = least important). In addition, we asked each fellow to indicate which strategy they would be willing to discuss further during an interview. The survey was created and administered using the REDCap web application (23). A runoff survey resolved a tie in the rankings.
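To make the tie-detection step concrete, the minimal sketch below aggregates rank-order responses and flags any tie that would call for a runoff. The toy data, strategy subset, and mean-rank scoring rule are illustrative assumptions only; the actual survey was administered in REDCap, and we do not specify its scoring procedure here.

```python
from collections import defaultdict

# Hypothetical rank-order responses: each dict maps strategy -> rank
# (1 = most important). Data and the mean-rank rule are assumptions
# for illustration; they are not the study's actual responses.
responses = [
    {"build a coalition": 1, "audit and provide feedback": 2, "use clinical reminders": 3},
    {"audit and provide feedback": 1, "build a coalition": 2, "use clinical reminders": 3},
]

# Collect each strategy's ranks across respondents.
ranks = defaultdict(list)
for response in responses:
    for strategy, rank in response.items():
        ranks[strategy].append(rank)

# Aggregate by mean rank and sort from most to least important.
mean_ranks = {s: sum(r) / len(r) for s, r in ranks.items()}
ordered = sorted(mean_ranks.items(), key=lambda item: item[1])

for strategy, mean_rank in ordered:
    print(f"{strategy}: mean rank {mean_rank:.2f}")

# A tie in mean rank between adjacent strategies would trigger a runoff survey.
for (s1, m1), (s2, m2) in zip(ordered, ordered[1:]):
    if m1 == m2:
        print(f"Tie between {s1!r} and {s2!r}; runoff needed")
```

With these toy responses, "build a coalition" and "audit and provide feedback" tie at a mean rank of 1.50, so the sketch reports that a runoff is needed, mirroring the tie-resolution step described above.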
Research Questions 2 & 3
How did they employ their strategy? What strategies/approaches were common across the projects?
Data Collection: Three team members (MP, JM, and JW) conducted 30-minute interviews with each fellow about the strategy the fellow had selected in the survey. Two interviewers attended each interview. All interviews were conducted using a set of common prompts: 1) tell me about your strategy and how you used it, 2) I want to hear about your thinking as you planned to use this strategy, 3) was this strategy used earlier or later in your project, and why, 4) how did this strategy work with other strategies or pieces of your project, and 5) did you encounter any barriers, and how did you approach them. Interviews were conducted via the Microsoft Teams platform, recorded with consent, and transcribed in real time using Teams' automated transcription function.
Analysis: Interview transcripts were cleaned, reviewed, and coded by three team members (JM, JW, MP) using ATLAS.ti software. For each transcript, one team member checked and cleaned the text, a second provided quality assurance, and a third coded it. Transcripts were first coded using a simple, high-level process that attached a code, as a comment, to each "unit of meaning," defined as a section of text sharing a common theme. Units of meaning could overlap, and multiple codes could be applied to the same unit. Two individuals coded each transcript independently using this process and then met to review their codes and develop and refine a final list of codes.
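To illustrate the structure of these overlapping, multiply coded units, the sketch below shows one possible in-memory representation. The coding itself was done in ATLAS.ti; the Python class, character spans, and code labels here are invented for illustration and do not reproduce the study's workflow.

```python
from dataclasses import dataclass, field

@dataclass
class MeaningUnit:
    """A span of transcript text treated as one unit of meaning."""
    start: int  # character offset where the span begins
    end: int    # character offset where the span ends (exclusive)
    codes: list[str] = field(default_factory=list)  # one or more codes applied

# Hypothetical units: spans may overlap, and a unit may carry multiple codes.
units = [
    MeaningUnit(0, 120, ["conduct local consensus discussions"]),
    MeaningUnit(80, 200, ["build a coalition", "inform local opinion leaders"]),
]

def overlapping(units: list[MeaningUnit]) -> list[tuple[MeaningUnit, MeaningUnit]]:
    """Return pairs of units whose character spans overlap."""
    pairs = []
    for i, a in enumerate(units):
        for b in units[i + 1:]:
            if a.start < b.end and b.start < a.end:
                pairs.append((a, b))
    return pairs

for a, b in overlapping(units):
    print(f"Overlap: [{a.start}, {a.end}) and [{b.start}, {b.end})")
```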
Before completing a second round of coding in ATLAS.ti, LP, JW, and MP iteratively refined the code list, aligning codes where possible with strategy descriptions in ERIC and renaming code groups accordingly. The purpose of this step was to identify, within and across interviews, which strategies the fellows described and to connect them with the main strategy that was the topic of each interview. Once the code list was finalized, the interview transcripts were recoded. LP applied thematic analysis to the coded transcripts and drafted a coding memo summarizing the themes that surfaced across interviews, along with illustrative quotes. The project team reviewed and refined the memo, which was then shared with the fellows for feedback.