DOI: https://doi.org/10.21203/rs.2.9687/v1
Student selection is an important step in educating tomorrow’s doctors, and during this process it is critical that candidates’ cognitive and non-cognitive attributes be assessed in order to select candidates with the qualities required of good doctors [1]. Traditional interviews are limited in terms of their validity and reliability, presumably because the required qualities of candidates are not clearly defined and are assessed using a small number of cases in a relatively short period of time [2, 3]. To overcome the shortcomings of traditional interviews, many medical schools have adopted the Multiple Mini Interview (MMI) format. This interviewing technique requires that candidates rotate through an average of six interview stations, at which different cases or scenarios with related interview questions are presented. In the MMI, candidates are given a few minutes to read the instructions or the scenario and questions before they enter the station and discuss the topic with the assessor [2].
Several studies have shown MMIs to be more valid and reliable than traditional interviews [2, 4-17]. When assessing non-cognitive attributes, a candidate’s qualities in one domain cannot be generalized to another because of context specificity, and thus it is not feasible to predict a candidate’s performance in one domain based on his or her performance in other domains [18]. For this reason, MMIs involving multiple stations and assessors are more effective for candidate assessment than traditional interviews. Despite research findings regarding the validity and reliability of the MMI, it has also been reported that MMI reliabilities can vary widely depending on how they are administered and structured [4, 8], which suggests research is needed to determine effective MMI formats.
Candidates’ various non-cognitive attributes have been assessed in MMIs. In particular, empathy, which is “a personality trait that enables one to identify with another's situation, thoughts, or condition by placing oneself in their situation” [19], is an important attribute in doctors, and therefore many medical schools assess candidates’ basic understanding of empathy at admission interviews [20]. However, research on how to assess empathy for student selection purposes is scant [20]. Thus, in the present study, we examined the feasibility of using a video-based case in the MMI. In general, MMI scenarios are presented in a paper format, but continued technological developments offer opportunities to adopt other formats for presenting scenarios during MMIs. In particular, it has been advocated that candidates’ non-cognitive attributes be assessed in authentic contexts [21]. Accordingly, we considered that presenting candidates with a scenario involving human interactions in a video vignette would likely provide richer information about interaction contexts than scenarios presented as text.
The use of videos to present cases or problems has been studied in case-based learning (CBL) and problem-based learning (PBL) settings. These studies have shown that video-based CBL and PBL are more effective than paper-based alternatives at fostering critical thinking and interest, and it has been speculated that video-based case presentation provides students with more authentic contexts and enhances interest and engagement [22-26]. Yet, little research has been undertaken on the use of video to assess candidates’ non-academic attributes in MMIs. Thus, this study aimed to analyze the feasibility of using video-based cases in MMIs to assess candidates’ empathic abilities by investigating their acceptability, fairness, reliability, and validity.
Study participants and setting
This study was conducted on candidates who participated in admission interviews at Dongguk University School of Medicine (DUMS), a private medical school in South Korea, for matriculation in 2019. DUMS has a four-year basic medical education program for graduate-entry students and an annual intake of 50 students. DUMS conducts admission interviews for those who pass an initial screening stage based on prior academic achievements, including undergraduate Grade Point Average (GPA) and performance on the Korean medical school entrance exam (the Medical Education Eligibility Test). As a result, 94 candidates were selected for admission interviews, which were conducted in December 2018.
DUMS has used MMIs for admission interviews since 2014. The interview schedule comprises six mini interviews conducted at separate stations, with 10 minutes allocated per station. Each station has one assessor, who evaluates candidates’ performance on two to three scoresheet items using a 5-point scale ranging from 1 (“unsuitable”) to 5 (“outstanding”). Candidates’ performances are assessed by station score, and their overall performance is determined by summing the scores across all stations. Previous experience with the MMI at DUMS has shown that it is a feasible tool for student selection [27, 28].
Study design and procedures
A video-based case was developed for one MMI station to assess candidates’ empathic abilities. MMIs at the other five stations were implemented in the traditional paper-based format. Three medical faculty members participated in developing the video-based case. Two were experts in MMIs and the third was a psychiatrist, who wrote the script for the scenario. Two investigators with MMI experience reviewed and revised the draft scenario. The video was produced in-house and pilot tested on a volunteer medical education graduate student, who was asked to think aloud as she watched the video; she reported that the situation was presented clearly and that the dialogue was unambiguous.
The video vignette presented a fictive clinical situation in which a doctor interviewed a patient who seemed to be in a depressive mood. The vignette lasted around two minutes, given the time constraints for candidates to view it and prepare to discuss it with the assessor. Candidates used a tablet and a headset to watch the video and were asked to assess the extent to which the doctor showed empathy and elicited the feelings and views of the patient, as these are considered key elements of empathy in doctor-patient interactions [20]. During the interview, the candidate discussed with the assessor the extent to which the doctor showed empathy in communicating with the patient and the importance of empathic communication in the patient-doctor relationship.
Data on candidate perceptions and performance in the MMI stations were obtained and analyzed to investigate the test’s acceptability, fairness, validity, and reliability as evidence of its feasibility. Acceptability was examined by investigating candidates’ perceptions of the MMI and the video-based case using a post-MMI questionnaire. The questionnaire consisted of 41 statements with Likert-type responses ranging from Strongly Disagree (1) to Strongly Agree (5). The items were classified as follows. The first section included 7 items on candidate demographics and backgrounds. The second section consisted of 17 items that elicited candidates’ overall perceptions of the MMI. These items were adapted from the instrument developed by Eva [6], translated into Korean by Kim et al. [28], and have been used in other studies [27, 28]. The third section included 12 statements on respondents’ perceptions of the video-based case used in the empathy station, comprising the following four sub-scales: (a) difficulty in understanding the situation presented in the video (3 items), (b) authenticity of the situation portrayed in the video (3 items), (c) interest (3 items), and (d) overall satisfaction (3 items). This section also included five items on candidate perceptions of the patient-doctor relationship presented in the video clip. The 17 items in this section were developed by the authors and pilot tested the previous year with a sample of medical school applicants. The last item was a single open-ended question that elicited candidates’ overall opinions of the MMI.
The questionnaire was administered during a wrap-up session conducted immediately after all interviews had ended in the morning and afternoon sessions. Participation in the study was voluntary and consent was implied by return of the questionnaire as responses were collected anonymously. An ethical review was conducted and the study was exempted from the requirement for informed consent by the institutional review board of Dongguk University, Gyeongju.
Fairness of the test was assessed by comparing candidate perceptions of the video-based case and of the patient-doctor relationship presented in the video clip across different demographics and backgrounds. In addition to candidate perceptions, test scores in the empathy station were compared across different demographics and backgrounds. Construct validity was assessed by examining the relationship between candidate scores in the empathy station and those in the other stations. Moreover, we analyzed the generalizability of the test to investigate its reliability.
Data analysis
Descriptive statistics were used to analyze candidate responses to the post-MMI questionnaire and their test scores in the MMI stations. Reliability of the research instrument was assessed using Cronbach’s alpha. Independent t-tests were used to compare candidates’ responses and performance with respect to gender, age (candidates were dichotomized at the median age of 25 years), and geographic location (urban vs. rural areas). ANOVA (analysis of variance) was used to compare candidates’ perceptions with respect to undergraduate backgrounds, which were categorized into seven groups. The G-coefficient, which indicates the proportion of variance in MMI scores attributable to differences in candidates’ non-cognitive abilities [13], was computed to investigate the reliability of the test. The data were analyzed using SPSS version 23 for Windows (IBM Corp., Armonk, USA), and statistical significance was accepted at p < 0.05.
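For illustration, the internal-consistency measure used here can be computed directly from an item-response matrix. Below is a minimal sketch in Python with NumPy (rather than SPSS); the Likert responses shown are hypothetical, not the study’s data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from five respondents to a 3-item sub-scale:
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # → 0.92
```

Alpha approaches 1 as the items co-vary more strongly; the sub-scale values of .73 to .85 reported in Table 1 fall in the range conventionally treated as acceptable internal consistency.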
Candidate demographics and backgrounds
Eighty-two questionnaires were returned, yielding a 98.8% response rate. Twenty-six of the respondents (31.7%) were female and 56 (68.3%) were male; their ages ranged from 22 to 36 years (M = 26.6, SD = 2.83). Candidates’ undergraduate backgrounds were as follows: life sciences (n = 35), engineering (n = 25), sciences (n = 15), health-related professions (n = 10), social sciences and humanities (n = 3), and others (n = 4). Thirty-eight (46.3%) were from urban areas, whereas 44 (53.7%) were from rural areas.
Candidate perceptions of the video-based case in the empathy station
Candidates were neutral with regard to whether the empathy station required specialized knowledge (M = 2.83, SD = 1.02) and with respect to station difficulty (M = 3.17, SD = .75). Nine candidates (10.7%) answered that the time allocated to prepare responses for the assessor in the empathy station was too short, whereas the remainder thought it adequate.
Table 1 shows descriptive statistics regarding candidate perceptions of the video-based case used in the empathy station and the results of the reliability analysis. Cronbach’s alpha values of the four sub-scales of candidate perceptions demonstrated acceptable internal consistency of the items. Candidates disagreed slightly with the statement that it was difficult to understand the situation presented in the video, and they agreed with the statements that the video was authentic and interesting and that they were generally satisfied with it.
Candidate perceptions of the patient-doctor relationship portrayed in the video
Table 2 describes candidates’ perceptions of the patient-doctor relationship portrayed in the video. The candidates generally evaluated the patient-doctor relationship presented in the video as ineffective in terms of empathic communication.
Comparisons of candidate perceptions and performances
Table 3 shows candidates’ perceptions of the video-based case for the empathy station across different demographics and backgrounds. Candidates did not differ in their overall perceptions of the video-based case by gender, age, geographic location, or undergraduate major. However, male candidates were more satisfied with the video-based case than female candidates, and younger candidates showed more interest in it than their older counterparts. Moreover, there were no differences in their perceptions of the patient-doctor relationship portrayed in the video clip by gender, age, location, or undergraduate major.
Candidates’ performances in the MMI are also presented in Table 3. Candidates’ performance in the empathy station did not differ across demographics or backgrounds, nor did their overall test scores.
Table 4 shows the relationships between candidate performance in the empathy station and performance in the other MMI stations. Candidate performance in the empathy station was not associated with that in any other station.
Reliability analysis
Table 5 shows the results of the reliability analysis of the test using the variance components method. The G-coefficient of the MMI scores was 0.83, which is an acceptable level.
Our study showed positive acceptability of our new MMI station by the candidates. The candidates reported overall satisfaction with the use of video in the MMI and agreed it was authentic and interesting, which concurs with observations from other studies that used videos in PBL or CBL settings [22-26]. The candidates generally evaluated the patient-doctor relationship presented in the video as ineffective in terms of empathic communication. This finding indicates that candidates correctly identified the unempathetic communication shown in the video, as intended in this case.
Furthermore, the candidates did not differ in their perceptions of the video-based case or of the patient-doctor relationship depicted in the video with respect to age, gender, geographic location, or undergraduate background. In addition, candidates’ performance at the video-based MMI station did not differ with respect to their demographics or backgrounds. These findings indicate that this video-based MMI station was fair, as it did not discriminate against specific demographics or backgrounds. Furthermore, the test was found reliable in terms of generalizability theory. Candidate performance in the empathy station was not associated with that in any other station, which indicates this station assessed candidate attributes different from those assessed at the other stations. This finding offers evidence for the construct validity of the test.
This study found that male candidates were more satisfied with the video-based case than female candidates, and that younger candidates showed more interest in it than their older counterparts. This finding may reflect a preference for digital technology among younger male candidates. Yet, there were no differences in candidates’ overall perceptions or in their station scores across demographics or backgrounds; thus, such differences did not appear to affect the fairness of the test.
This study demonstrates the feasibility of using a video-based case in MMIs to assess candidates’ non-cognitive attributes such as empathy. We believe that video-based cases assess communication and interpersonal skills more effectively than paper-format tests because they include verbal and non-verbal cues regarding the nature of interactions. Some medical schools have reported using actors in MMI stations, especially to assess candidates’ communication skills [6, 8]. However, budgetary constraints often prevent the use of such resources, and thus we would argue that video-based cases provide a cost-effective means of assessing candidates’ non-cognitive attributes.
Study limitations should be acknowledged. First, the video vignette used in this study was designed to assess candidates’ empathic abilities. It is known that individuals’ non-cognitive abilities are context-specific [18], which means the results of this study cannot be generalized to assessments of candidates’ attributes in other domains. Thus, we recommend additional studies be undertaken to develop video-based MMI stations in various domains and establish their effectiveness. Second, although many psychometric measures are available to assess empathic abilities, we could not compare the results of such measures with the test scores from our empathy station due to the anonymity of our study participants. Such a comparison would offer further evidence for the validity of this MMI station. Third, our study does not provide evidence on the predictive validity of the test. Future studies are warranted to investigate the relationships between candidate performance on the MMI using a video-based case and empathic communication performance in clinical settings.
Our findings indicate that the use of a video-based case in the MMI was perceived positively by the candidates and that it assessed their empathic abilities fairly, as it did not discriminate against specific demographics or backgrounds. Furthermore, the test was found reliable in terms of generalizability theory, and it assessed candidate attributes different from those assessed in the other stations. The present study illustrates the feasibility of using a video-based case in MMIs to assess candidates’ empathic abilities.
MMI: Multiple Mini Interview
Ethics approval and consent to participate: This study was reviewed by the institutional review board of Dongguk University. Consent was waived by the IRB because our data do not contain any individual person’s private or confidential information.
Consent to publish: Not applicable
Availability of data and materials: Not applicable
Competing interests: The authors have no competing interests.
Funding: This work was supported by the Dongguk University Research Fund (2014).
Authors’ Contributions: BK, KK conceived the study, KK, NL contributed to the design of the study. BK, NL, BK contributed to data analysis and interpretation of the study. KK drafted the manuscript, and all authors read and approved the final manuscript.
Table 1. Descriptive statistics of candidate perceptions of the video-based station (n = 82)*
| Category | Minimum | Maximum | Mean (SD) | Cronbach’s alpha |
|---|---|---|---|---|
| Station difficulty | 1.00 | 4.00 | 2.55 (.73) | .75 |
| Authenticity | 1.67 | 5.00 | 3.96 (.56) | .73 |
| Interest | 1.67 | 5.00 | 3.89 (.72) | .81 |
| Overall satisfaction | 2.33 | 5.00 | 4.21 (.58) | .85 |
* Note: 1 = “strongly disagree” 5 = “strongly agree”
Table 2. Descriptive statistics of candidate perceptions of the patient-doctor relationship presented in the video clip (n = 82)
| I felt … | Mean (SD)* |
|---|---|
| The doctor showed empathy in communicating with the patient. | 1.98 (.93) |
| The doctor responded appropriately to the patient’s thoughts or feelings. | 1.95 (.91) |
| The doctor communicated in ways to make the patient talk openly about her conditions. | 2.33 (1.06) |
| The patient felt comfortable talking to the doctor about her thoughts and feelings. | 2.00 (.86) |
| Overall, this patient-doctor communication went well. | 2.18 (.90) |
Table 3. Comparisons of the perceptions and performances of candidates with different backgrounds in the empathy station (n = 82)
| Item | Gender | Age | Undergraduate major | Geographic location |
|---|---|---|---|---|
| Perceptions of the empathy station | | | | |
| Station difficulty | .12 | .08 | .25 | .16 |
| Authenticity | .05 | .27 | .84 | .32 |
| Interest | .11 | .04 | .70 | .39 |
| Overall satisfaction | .04 | .46 | .87 | .34 |
| Total | .13 | .39 | .82 | .77 |
| Perceptions of the patient-doctor relationship portrayed in the video | | | | |
| The extent to which the doctor showed empathy | .24 | .24 | .30 | .15 |
| Doctor responding to the patient’s feelings adequately | .17 | .16 | .15 | .45 |
| Doctor having the patient talk openly about her conditions | .92 | .31 | .42 | .46 |
| Patient expressing thoughts and feelings comfortably | .58 | .44 | .16 | .44 |
| Overall assessment of patient-doctor relationship | .40 | .20 | .53 | .11 |
| Total | .11 | .37 | .82 | .77 |
| Station scores | | | | |
| Empathy station | .52 | .78 | .55 | - |
| Total scores (six stations) | .53 | .18 | .05 | - |

Note: Values are p values.
Table 4. Pearson’s r coefficients of the candidate performance in the empathy station with that of other stations (p values)
| Station topic | Self-regulation | Problem-solving ability I | Ethics | Problem-solving ability II | Logical reasoning |
|---|---|---|---|---|---|
| Empathy | .167 | .130 | .050 | .100 | .055 |
Table 5. Summary of effects, estimated variance components, and the G-coefficient analysis (n = 83)
| Effect | Degrees of freedom | Mean square | Estimated variance |
|---|---|---|---|
| Candidate | 82 | 77.36 | .93 |
| Station | 5 | 1,321.93 | 3.60 |
| Assessor | 3 | 42.89 | 1.50 |
| Candidate × station | 480 | 49.57 | 4.03 |
| Candidate × assessor | 246 | 49.21 | 2.70 |
G-coefficient = σ²(candidate) / [σ²(candidate) + σ²(candidate × station)/6 + σ²(candidate × assessor)/12] = 0.83
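The G-coefficient formula above can be computed directly from estimated variance components. The following is a minimal Python sketch; the variance components passed in are hypothetical values chosen only to illustrate the calculation with six stations and the twelve candidate-assessor pairings used in the denominator above:

```python
def g_coefficient(var_candidate: float, var_cand_station: float,
                  var_cand_assessor: float,
                  n_stations: int, n_assessors: int) -> float:
    """G-coefficient: candidate (true-score) variance divided by candidate
    variance plus interaction variances averaged over stations/assessors."""
    error = var_cand_station / n_stations + var_cand_assessor / n_assessors
    return var_candidate / (var_candidate + error)

# Hypothetical variance components for demonstration only (not Table 5's values):
g = g_coefficient(1.0, 0.6, 0.24, n_stations=6, n_assessors=12)
print(round(g, 2))  # → 0.89
```

Increasing the number of stations or assessors shrinks the error terms in the denominator, which is why multi-station formats such as the MMI can reach acceptable generalizability even with a single assessor per station.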