In developing VMR, we focused on three distinct elements of its design: conference structure, opportunities for learner participation, and facilitator teaching strategies. These three elements were the most likely to support the achievement of our aims.
Conference Structure

The structure of VMR (Fig. 1) is similar to morning reports in United States-based internal medicine residency programs (1, 2, 10–13) and draws on evidence supporting the benefits of peer-assisted learning while also ensuring the involvement of experts in DR teaching. (14–16)
In VMR, a participant volunteers to present a patient case, and two health professions students or postgraduate trainees volunteer to discuss the case alongside two clinician-educators, who serve as facilitators. At the start of VMR, discussants and case presenters introduce themselves, their training level, and their location. Case presentations include sequential aliquots of clinical information followed by the final diagnosis. Discussant-facilitator pairs alternate sharing their DR aloud after each aliquot. In addition, an unlimited number of participants discuss the case using the video conferencing platform’s chat function. In real time, two CPSolvers team members transcribe case details and teaching points on a virtual whiteboard. (17) VMRs are recorded and posted online for asynchronous viewing. To prioritize explicit teaching of DR, VMR is an unscripted case conference: facilitators, discussants, and participants are unaware of the case details, including the final diagnosis. (12) We believe, as others have previously written, that this structure offers the best opportunity to explicitly teach DR by allowing all participants to think through the case in real time, similar to how DR occurs in a clinical environment. (12)
Opportunities for Learner Participation
We designed opportunities for learner participation in VMR by adapting concepts from experience-based learning theory, which views supported participation as fundamental to learning. (18) Applied to DR education in case-based teaching conferences, these concepts suggest that learning DR does not come from a facilitator reciting information to a learner. Rather, it happens when learners engage in the practice of performing DR through varied levels of participation that range from observation to direct contribution. (18) Specific opportunities for learner participation in VMR include:
Passive participation: watching VMR without contributing to the discussion
Chat-based participation: watching VMR and actively contributing to the discussion via the videoconferencing platform’s chat function
Presentation-based participation: preparing and presenting a case
Active participation: sharing one’s thinking aloud by discussing alongside a facilitator
These opportunities for supported participation create multiple levels of engagement that accommodate learners with differing DR skills, educational priorities, and comfort with the traditional form of participation in which facilitators call on learners. (2)
Teaching Strategies

We used teaching strategies informed by two theoretical frameworks that play an important role in teaching and learning DR: information-processing theories and situativity theories. (19–22) The former emphasize the importance of knowledge organization, while the latter highlight the role of contextual factors in learning and practicing DR. (19–22)
Specific teaching strategies comprised previously described tactics, including connecting case discussions back to core tenets of DR, such as diagnostic schemas, illness scripts, and Bayesian reasoning, and giving trainee discussants feedback that expands and refines their knowledge structures and decision making. (20–25) For example, in articulating their reasoning aloud, facilitators often explicitly discussed a diagnostic schema, highlighted illness scripts of diagnoses under consideration, and integrated probabilistic reasoning into their teaching. Additionally, to reduce learners’ cognitive load, facilitators asked learners to focus on one specific piece of data that they perceived to carry important diagnostic information. (26) Facilitators also incorporated teaching related to the role of contextual factors in DR. This included asking learners to consider how their thought processes might differ if certain contextual details of the case changed (e.g., if patient communication were limited because of acute encephalopathy, or if the patient presented to clinic rather than the hospital). (19–22) Finally, at the end of a case, trainee discussants had the opportunity to reflect aloud on their reasoning under the guidance of facilitators. (27, 28)
We evaluated our program using a survey that included both open- and closed-ended questions. (29) We followed guidelines for survey development to address content and construct validity. (30) We created and administered the survey using Qualtrics. We also collected data from VMR sessions, including total attendees and the number of chat-based participants. Descriptive statistics (means, standard deviations) were calculated using Microsoft Excel.
Individuals who attended VMR were eligible to take the survey. Participants received a link to the survey via email. Data collection occurred between June 9 and July 29, 2020. To maximize the response rate, we sent weekly reminders to the email list for three weeks. No financial incentives were offered. Participants responded anonymously. The Institutional Review Board at the University of California, San Francisco reviewed the study and deemed it exempt.
Survey items (Additional File 1) focused on access to case-based teaching conferences outside of VMR, participant perceptions of the educational value of VMR, and VMR’s impact on participants’ confidence in performing DR. Survey items related to the construct of DR were informed by information-processing theories, such as cognitive load and script theory, and situativity theories, such as situated cognition. We piloted survey items with eight CPSolvers team members. One team member trained in cognitive interviewing (S.L.) used previously published guidelines to perform one-on-one cognitive interviews with five VMR participants, who provided verbal feedback. (30) Pilots and cognitive interviews led to changes in item wording and removal of two questions that confused participants.
Two coders (J.C.P. and L.C.S.) used qualitative content analysis to manually code the data. (31) The coders coded transcripts line by line, using a constant comparative process to organize responses into codes, and actively generated themes by categorizing those codes. They discussed and resolved conflicts in codes and themes with a third team member (S.N.).