In total, 894 students took the proficiency-test over the course of three years (2016-2018). Of these 894 students, 45 were excluded because they either dropped out without continuing into the Advanced Master’s or did not complete the test. In 2016, 323 medical students took the proficiency-test, while in 2017 and 2018 there were 305 and 266 candidates respectively. The scores were normally distributed: in 2016 the mean score was 66.96% and the standard deviation 7.49%; in 2017 the mean score was 69.23% and the standard deviation 4.92%; in 2018 the mean score was 66.85% and the standard deviation 4.87% (see Table 1).
Table 1 Descriptive statistics of total scores of the proficiency-test for the 2016-2018 period.

| | N | Minimum | Maximum | Mean | Standard Deviation |
|---|---|---|---|---|---|
| Total scores 2016 | 323 | 36.70% | 80.47% | 66.96% | 7.49% |
| Total scores 2017 | 305 | 39.37% | 79.74% | 69.23% | 4.92% |
| Total scores 2018 | 266 | 47.50% | 78.42% | 66.85% | 4.87% |
Using thematic analysis, five recurrent labels were discerned in the qualitative data, covering both cognitive and non-cognitive skills. The first label was ‘conflict with trainer’, referring to conflicts arising between trainee and trainer (cultural differences, different expectations, attitude, etc.). The second label was ‘problems with learning trajectory’: mentors described some students as inconsistent in their self-study and in attending seminars. The third recurring label was ‘personal problems’, referring to a trainee’s psychological issues, learning difficulties (ADHD, autism, etc.), and problems in the trainee’s private life that might influence performance. The fourth label was ‘NS in other tests’, referring to students who passed the proficiency-test but failed other curriculum assessments. The last label was ‘more than one’, signalling students with multiple problems. In total, 237 students were labelled. Figure 1 illustrates the number of students with and without a label per score quartile, while Figure 2 provides an overview of the distribution of students per label within each score quartile.
In 2016, quartile 1 included 80 of the 323 participants, 28 of whom were labelled. More specifically, three students were labelled ‘conflict with trainer’ and three ‘personal problems’; twelve students had failed another test, while four were reported to have ‘more than one’ problem. Quartile 4 included 79 students, twelve of whom were labelled: two students had a ‘conflict with their trainer’; four experienced ‘problems with their learning trajectory’; four had failed other tests, and two had multiple problems.
In 2017, 76 of the 305 students scored in quartile 1, and 76 also scored in quartile 4. In quartile 1, 35 students received a label: two students were labelled with ‘personal problems’, five had a ‘conflict with their trainer’, and eight did not keep to deadlines (‘problems with learning trajectory’); twelve had not succeeded in other assessments, and eight fell under the ‘more than one’ category. In quartile 4, eight students were labelled: three had ‘problems with their learning trajectory’, one had a ‘conflict with their trainer’, two had failed other curricular exams, and two were experiencing multiple problems.
In 2018, quartile 1 and quartile 4 each included 66 of the 266 students. In quartile 1, 43 students were labelled: one student had a ‘conflict with their trainer’ and two had ‘personal problems’; seven had failed other curriculum tests, while twenty-four had ‘problems with their learning trajectory’; nine students faced multiple problems. In quartile 4, twelve students were labelled: two had a ‘conflict with their trainer’, four had ‘problems with their learning trajectory’, four others had failed other assessments, and two students were experiencing multiple problems at the same time.
For every year the proficiency-test took place, a separate chi-square test was performed. Significant results were found for every test year (see Table 2). More specifically, in 2016 there was a significant association between total score quartiles and whether students were labelled, χ2 (1, N=159) = 8.28, p = 0.004 (see Table 2). The odds ratio showed that the odds of students being labelled were almost three times higher if they had obtained a low total score (see Table 3). The percentage of students that were labelled also differed significantly by score quartile in 2017, χ2 (1, N=152) = 23.64, p < 0.001 (see Table 2); the odds of students being labelled were 7.26 times higher if they were ranked in quartile 1 (see Table 3). The relation between score quartiles and whether students were labelled was significant in 2018 as well, χ2 (1, N=132) = 29.95, p < 0.001 (see Table 2); the odds ratio showed that the odds of students being labelled were 8.41 times higher if they belonged to score quartile 1 (see Table 3).
Table 2 Chi-square tests per test year for score quartiles and labels

| Chi-square tests 2016-2018 | Value | df | Asymptotic Significance (2-sided) |
|---|---|---|---|
| Pearson Chi-Square 2016 | 8.285 | 1 | 0.004 |
| Pearson Chi-Square 2017 | 23.642 | 1 | < 0.001 |
| Pearson Chi-Square 2018 | 29.953 | 1 | < 0.001 |
Table 3 Effect estimate of students ranking in quartile 1 and receiving a label per test year

| Risk Estimate | Value | 95% Confidence Interval (Lower) | 95% Confidence Interval (Upper) |
|---|---|---|---|
| Odds Ratio for Score Quartiles Test Year 2016 (Quartile 1/Quartile 4) | 3.006 | 1.396 | 6.475 |
| Odds Ratio for Score Quartiles Test Year 2017 (Quartile 1/Quartile 4) | 7.256 | 3.070 | 17.153 |
| Odds Ratio for Score Quartiles Test Year 2018 (Quartile 1/Quartile 4) | 8.413 | 3.762 | 18.813 |
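As an illustration, the 2016 entries in Tables 2 and 3 can be reproduced from the labelled/unlabelled counts reported in the text (28 labelled of 80 students in quartile 1; 12 labelled of 79 in quartile 4). The sketch below assumes a Pearson chi-square without continuity correction and a Wald-type 95% confidence interval on the log odds ratio; the function names are ours, introduced for illustration, not part of the original analysis.

```python
import math

def pearson_chi2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for row 1 vs row 2, with a Wald 95% CI on the log scale."""
    oratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(oratio) - z * se)
    hi = math.exp(math.log(oratio) + z * se)
    return oratio, lo, hi

# 2016 counts: quartile 1 (labelled, not labelled) and quartile 4 likewise.
a, b = 28, 80 - 28   # quartile 1: 28 labelled, 52 not labelled
c, d = 12, 79 - 12   # quartile 4: 12 labelled, 67 not labelled

chi2 = pearson_chi2(a, b, c, d)
oratio, lo, hi = odds_ratio_ci(a, b, c, d)
print(round(chi2, 3), round(oratio, 3), round(lo, 3), round(hi, 3))
# → 8.285 3.006 1.396 6.475
```

The recomputed values match Table 2 (χ2 = 8.285) and Table 3 (OR = 3.006, 95% CI 1.396 to 6.475), suggesting the reported risk estimates are standard Wald-interval odds ratios.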