Student participation in this workshop evaluation was strong, with a pre-workshop response rate of 98% (n = 55) and a post-workshop response rate of 91% (n = 51).
Workshop improved student self-reported skills and confidence in ear examination
Student self-reported confidence in completing an otoscopic examination (Question 2 on the survey) improved significantly, from an average pre-test score of 2.98 ± 1.14 (SD, n = 55) to an average post-test score of 3.90 ± 0.77 (SD, n = 51) (p < 0.0001), an improvement of 31%. Student self-reported ability to use the otoscope with respect to hand-eye coordination (Question 8 on the survey) also improved significantly, an 18% change from an average score of 3.25 ± 1.03 (SD, n = 55) on the pre-test to an average score of 3.84 ± 0.75 (SD, n = 51) on the post-test (p < 0.001). Finally, self-reported scores on the ability to use the otoscope to visualize the tympanic membrane (Question 9) improved by 22%, from a pre-test score of 3.22 ± 1.02 (SD, n = 55) to a post-test score of 3.92 ± 0.8 (SD, n = 52) (p < 0.001) (Fig. 1).
Workshop improved student self-reported skills and confidence in eye examination
Self-reported scores on the questions assessing confidence in completing a fundoscopic examination (Question 1), ability to use an ophthalmoscope with respect to hand-eye coordination (Question 6), and ability to visualize retinal structures (Question 7) all improved significantly post-workshop. Using a five-point Likert scale, students reported a 120% improvement on Question 1; pre-test average score: 1.5 ± 0.78 (SD, n = 55), post-test average score: 3.27 ± 0.9 (SD, n = 51) (p < 0.0001). Students also reported an almost 40% improvement on Question 6, from the pre-test (2.56 ± 0.9; mean, SD; n = 55) to the post-test (3.56 ± 0.86; mean, SD; n = 51) (p < 0.0001), and a 78% improvement on Question 7 (pre-test: 1.87 ± 0.71 (mean, SD, n = 55); post-test: 3.33 ± 0.87 (mean, SD, n = 52)) (p < 0.0001) (Fig. 2).
Workshop improved student self-reported confidence in comprehensive head and neck exam skills
We compared the confidence students felt in completing a comprehensive head and neck examination (Question 3 of the survey) and found a statistically significant (p < 0.0001) improvement of 36% from pre- to post-workshop scores on this question. The average pre-test score was 2.82 ± 1.05 (SD, n = 55) and the average post-test score was 3.84 ± 0.87 (SD, n = 51) (Fig. 3).
Workshop improved student self-reported confidence in identifying normal vs abnormal eye and ear structures
Using two questions on the survey, we assessed students’ self-reported confidence in identifying, upon physical exam, normal (Question 4) and abnormal (Question 5) eye and ear findings. We found a statistically significant difference between the post-workshop and pre-workshop scores on both questions (p < 0.0001). For Question 4, students reported an average pre-test score of 2.74 ± 1.03 (SD, n = 55) and an average post-test score of 3.6 ± 0.82 (SD, n = 50), an improvement of over 30%. For Question 5, students reported a pre-test score of 2.02 ± 0.86 (SD, n = 55) and a post-test score of 3.39 ± 0.77 (SD, n = 51), an improvement of 68% (Fig. 4).
Workshop improved student self-reported aggregate confidence and skills in conducting HEENT examination components
We compared the pre-test and post-test score averages of each student’s responses on Questions 1–5, which asked about confidence in conducting different aspects of the HEENT examination. We found a 49% increase (p < 0.0001) in post-test scores for aggregate self-reported confidence levels, from an average score of 2.41 ± 0.72 (SD, n = 55) in the pre-test to an average score of 3.6 ± 0.63 (SD, n = 51) in the post-test (Fig. 5). To compare changes in aggregate skills in conducting the HEENT exam, we compared pre-test and post-test score averages for Questions 6–9. Student self-reported skills increased by almost 35%, from 2.73 ± 0.72 (SD, n = 55) on the pre-test to 3.67 ± 0.68 (SD, n = 52) on the post-test (p < 0.0001) (Fig. 5).
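The percentage improvements reported throughout this section follow the standard relative-change convention (change in mean divided by the pre-test mean). As a minimal sketch, using the aggregate means reported above (the helper name is our own, not part of the study):

```python
def percent_change(pre_mean: float, post_mean: float) -> float:
    """Relative improvement of the post-test mean over the pre-test mean."""
    return (post_mean - pre_mean) / pre_mean * 100

# Aggregate confidence (Questions 1-5): 2.41 pre -> 3.6 post
print(round(percent_change(2.41, 3.6)))   # 49

# Aggregate skills (Questions 6-9): 2.73 pre -> 3.67 post
print(round(percent_change(2.73, 3.67)))  # 34, i.e. "almost 35%"
```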
Improvement in skills and confidence correlates with grades achieved after workshop
Students were graded on their head and neck exam skills by Program faculty one month after the workshop, using rubrics previously established as part of the patient evaluation course. Student scores ranged from a minimum of 82.1% to a maximum of 100%, with a mean score of 95.8 ± 4.2% (SD, n = 55).
Workshop was favorably received by the students
To further gauge student feedback on the quality of the workshop, we devised a mixed-methods feedback survey (Table 2) that was administered with the post-test survey. Both quantitative and qualitative data were collected. The questions on the feedback survey were scored on a 5-point Likert scale, with a score of 1 indicating “highly disagree” and a score of 5 indicating “highly agree”. The average score for each question and the percentage of students who selected each choice were calculated for the quantitative data. Eighty percent of the students who participated in the workshop completed the feedback survey.
Average scores on the feedback questions ranged from 3.73 to 4.47. Ninety-two percent of the students agreed or highly agreed that the workshop was useful. Eighty-one percent of students felt that the workshop was realistic, and 75% agreed that the workshop was comparable to in-person learning, but only 68% felt it was superior. Ninety-four percent of students agreed that the workshop provided a positive learning environment (Table 3).
Table 3
Results of qualitative survey questions

|                              | Highly disagree | Disagree | Neutral | Agree | Highly agree |
|------------------------------|-----------------|----------|---------|-------|--------------|
| Workshop was useful          | 0%              | 2%       | 6%      | 46%   | 46%          |
| Workshop was realistic       | 0%              | 8%       | 11%     | 42%   | 40%          |
| Workshop was comparable      | 2%              | 6%       | 17%     | 45%   | 30%          |
| Workshop was superior        | 8%              | 9%       | 15%     | 38%   | 30%          |
| Positive learning experience | 0%              | 2%       | 4%      | 40%   | 55%          |
To collect qualitative data, students were asked to leave general comments about the workshop. A total of 13 comments were left on the feedback survey. Seven of the comments expressed appreciation for the workshop. Representative comments included:
- “I really enjoyed using the models because the professors could guide us better with identifying structures and exam findings. I feel like if it was just standardized patients, I would feel like a fish out of water. I really appreciated the professors being there to help with the examples and giving us their real-world experience. I really appreciated this opportunity” and
- “I really appreciated this workshop! Having mannequins allowed us to spend a lot longer looking for structures and getting our technique down without having to take a break from shining light with real people. Thank you!”
Three of the 13 commenters felt that more time was needed for the workshop and that they felt rushed: “Great workshop but I feel that it was too short for the size of the groups brought in at one time. I had to rush through some of the models”. Of note, four comments alluding to a need for more time in the workshop or more access to the models came from respondents who answered “disagree” or “highly disagree” to the workshop being comparable or superior to in-person learning.