Approval for the study was obtained from the Human Subjects Research Ethics Board at Western University. Written consent was obtained from all consultants taking part in audio monitoring (“monitoring consultants”) and from participating third-year medical students at the beginning of their oncology clinical rotation. Consultants were assigned to the monitoring arm by volunteering rather than by randomization of all faculty, because at the outset too few faculty were willing to take part in the study and potentially be randomized to the intervention arm.
Verbal consent was obtained from any patient seen in a room in which the microphone was present; consent was sought by the clinic nurse who brought the patient into the room.
Eligible trainees were in their third year of medical training at the Schulich School of Medicine and Dentistry at Western University, and enrolled in a two-week clinical oncology selective block. Typically, students rotate through the outpatient oncology clinics of 3–5 medical and radiation oncology consultants during this time. Information about the study was included in the routine orientation given by the rotation supervisors on the first day of the rotation. Students were told that any patient visit during the second week of their rotation might be live monitored via audio feed from the patient room if they worked in a monitoring consultant’s clinics, and they were aware of which consultants were monitoring consultants.
Monitoring was done via a Williams Sound PPA T46 FM transmitter, with the battery-powered microphone and sending unit hidden in a tissue box in the patient clinic room; the monitoring consultant clipped the receiver to their clothing or lab coat and plugged in headphones when they wanted to listen in from the workstation or hallway outside the room. Patients were made aware of the microphone in the tissue box by the nurse accompanying them into the clinic room, and were instructed on how to turn it off if desired. Students were not made aware that a specific room contained the microphone that day. During the monitored week with the student, monitoring consultants were free to use the device as they chose, listening to all or part of an encounter while physically present at their workstation outside the patient room. In our centre there are often several occasions in a busy clinic when a consultant has a few minutes of inaction waiting for a blood draw or for a room to become available; time spent listening to students via the audio monitor could thus be taken from previously unused minutes waiting for a clinic room.
Written rotation feedback was completed online as per usual practice, with the addition of an initial feedback documentation after one week of working with a monitoring consultant, before monitoring started during the second week. This allowed longitudinal sequential comparison of feedback given by the same consultant to the same student without and with the benefit of the audio monitor.
Trainees were told at their end-of-rotation evaluation meeting which written feedback comments were based on a monitored clinic experience, and were asked to comment on the perceived usefulness of the feedback received after monitored encounters compared with that received as per usual practice. Monitoring consultants and students completed a survey rating their satisfaction with the audio listening process and with the giving or receipt of feedback, respectively.
Written feedback comments were de-identified and rated by two investigators (MS and HC), one of whom was not clinically involved, as strong, weak, or neither strong nor weak, based on an exploratory rubric modified with permission from Nesbitt et al. (18), also considering Lefroy et al.’s definitions of the “DO’s” and “DON’Ts” of feedback (Table 1). A third blinded investigator who was not a monitoring consultant (KP) resolved any ratings that were discordant between the two rating investigators (MS and HC).
Descriptive statistics were generated for all medical students (n = 20), all monitoring consultants (n = 7) and all evaluations (n = 101). Results were stratified by (1) audio monitored vs. not audio monitored and (2) encounter type (audio monitored; not audio monitored but with a monitoring consultant; not audio monitored and with a non-monitoring consultant), and compared using the Chi-square test or Fisher’s exact test, as appropriate. All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC, USA), with two-sided statistical testing at the 0.05 significance level.
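The analyses above were performed in SAS; purely as an illustrative sketch of the comparison described (a chi-square test of feedback-quality ratings across monitoring groups, with Fisher's exact test when expected cell counts are small), the same approach can be expressed in Python with SciPy. All counts below are hypothetical, not study data.

```python
# Illustrative sketch only -- the study used SAS 9.4. Counts are
# hypothetical, not study data.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: feedback rated strong / neither / weak.
# Columns: audio monitored vs. not audio monitored.
table = [[4, 2],
         [10, 25],
         [3, 18]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square: p = {p:.3f}")

# SciPy's fisher_exact handles only 2x2 tables, so for illustration we
# collapse to strong vs. not-strong when any expected cell count is < 5.
if (expected < 5).any():
    collapsed = [table[0],
                 [table[1][0] + table[2][0], table[1][1] + table[2][1]]]
    odds_ratio, p_exact = fisher_exact(collapsed)
    print(f"Fisher's exact (collapsed 2x2): p = {p_exact:.3f}")
```

The collapse to a 2x2 table is one common workaround for small expected counts; an exact test on the full 3x2 table would require a different tool (e.g. SAS PROC FREQ's EXACT statement, which the authors' software supports directly).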
Table 1. Exploratory rating framework for weak vs strong written feedback – Feedback Quality Evaluation Form.
♦ Weak
♣ Lacking performance content altogether
o E.g. “Saw many cases of lung cancer”
o E.g. “A pleasure to work with”, “Functions at PGY-1 level”
o E.g. “Spends much time studying after clinic”
♣ Based on second-hand information
o E.g. “Caused pt in Dr. X’s clinic to cry”
♣ Predominantly evaluative without specific aim of performance improvement
o E.g. “above average level of knowledge”
♦ Neither weak nor strong
♣ Mentions points of good performance in general
o E.g. “Good communicator”
♣ Mentions areas for improvement in general
o E.g. “Should take more time with patients”
♦ Strong
♣ Specific areas for improvement
o E.g. “Explore symptoms in more depth during review of systems”
♣ Based on direct observation
o E.g. “Did not respond to patient comments about anxieties/worries on several occasions – work on questioning further to validate and explore patient concerns”
♣ Relevant to course goals
o E.g. “When presenting case history, lung cases were better presented than prostate cases. Review prognostic features of CA prostate important in initial consultation discussion”
♣ Explains the gap [between observed performance and explicit standard]
o E.g. “At this point in training would be expected to develop a differential diagnosis of at least 3 conditions or etiologies underlying a presenting symptom. Tends to focus only on the most likely cause – encouraged to think of other potential causes as well”