Neither the students nor the junior doctors appeared to know the answers to the “must know” questions. Is this poor preparation or unrealistic expectations?
Frontline non-academic paediatric clinicians were asked to provide clinical questions based on essential knowledge for practice. Despite the instruction to the question providers that MAC questions should reflect ‘must know’, ‘basic knowledge’ and a ‘minimum accepted competency’, this exercise yielded relatively low scores, reflecting the difficulty of the questions set. Each individual submitting questions would have described the standard for their own questions as 100% (i.e. “must know”), but this may have been an unrealistic expectation for undergraduate students. Contributors were not asked to assess the standard of other contributors’ questions; this would have been a useful exercise, but recruiting and training non-academic clinicians in standard setting was beyond the scope of this study. When considered in the context of “must know” information, the average score of 45–46% among undergraduate students was considerably lower than clinicians working at the ‘frontline’ of general paediatrics would expect. This may reflect unrealistic expectations, or a curricular emphasis on alternative content.
Reassuringly, the ‘junior’ doctors about to embark on their paediatric careers performed significantly better than the medical students. This is an important finding, as the MAC examination was designed as a test of knowledge required for ‘on the ground’ clinical practice. These participants appeared to have benefitted from a year of clinical practice, albeit not in paediatrics. However, their results still did not match the “must know” standard initially expected by the clinical paediatricians setting the questions.
Why was the standard for the MAC examination set so low if the questions were meant to be ‘must know’, ‘basic’ knowledge?
This reflects a difference between the standard expected of undergraduate students by faculty and that expected of junior doctors by non-academic clinicians in paediatrics, with the latter seemingly expecting a higher level of knowledge. However, rather than a ‘higher level’ of expected knowledge, non-academic clinicians may have expected a different type of knowledge. It is possible that an undergraduate focus on traditional ‘textbook’ facts did not align with the clinicians’ focus on the practical aspects of the job, which are particularly relevant to everyday clinical practice. This potential difference in knowledge or focus warrants further investigation at undergraduate level, and possibly intervention at early postgraduate level for those planning to practise paediatrics. Some third-level institutions are already revisiting the structure of their undergraduate teaching to increase the focus on clinical practice and the broader non-clinical skills required of physicians (6).
All of the universities on the island of Ireland have recently collaborated to develop a national undergraduate paediatric curriculum. This will go some way towards standardising the knowledge acquired by graduates working in Ireland and is a great opportunity to revisit how undergraduate programmes are taught. The process should incorporate the views of a wide range of ‘non-academic’ paediatric clinicians to ensure that it bridges the gap between what is taught and assessed at undergraduate level and what is practically important in the workplace.

This study highlights the difficulty of delivering an undergraduate course that both establishes a core of basic paediatric knowledge and prepares a student for the postgraduate clinical environment. However, undergraduate medical education is not merely about transferring knowledge to future medical practitioners; it is also about developing the transferable general clinical and non-clinical skills required for good medical practice, including human factors, and engendering the skills for lifelong self-directed learning. Bridging this ‘gap’ may therefore not be the responsibility of the university, which is preparing graduates to work as general physicians rather than subspecialists; rather, postgraduate training bodies should identify ways in which this type of knowledge is provided and assessed before entry to the training scheme. This could be delivered as a short induction course, and the transitional period of assistantship that many universities now have in place would seem a suitable time to do so. It is anticipated that the results of this study can inform the content of such transition interventions to better prepare graduates for practice.
Did the students perform differently from year to year?
RCSI students take two paediatric examinations that contribute to their final marks. The first is a clinical examination taken immediately after the six-week paediatric rotation, when students are fresh from their paediatric clinical experience. The second is a written MCQ examination given to all students at the end of the academic year, when students have been focusing on knowledge acquisition. There was no significant difference in MAC examination results between the two years of RCSI students, despite the fact that one year sat the assessment at the end of their paediatric rotation and the other at the end of the academic year. In addition, the fact that two large groups of students obtained such similar results suggests that this examination is reproducible from year to year.
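As an illustration only (the study’s actual test statistic is not specified here, and the data below are hypothetical placeholders), a between-cohort comparison of this kind could be run with a non-parametric test for two independent groups, such as a Mann–Whitney U test:

```python
# Minimal sketch of comparing two year-cohorts' MAC scores.
# Hypothetical data; not the study's actual scores or analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
year_1 = rng.normal(45, 10, size=60)  # cohort assessed after the rotation (%)
year_2 = rng.normal(46, 10, size=60)  # cohort assessed at end of year (%)

# Mann-Whitney U: a non-parametric test for a difference between two
# independent groups, suitable when scores may not be normally distributed
u_stat, p_value = stats.mannwhitneyu(year_1, year_2, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.3f}")
```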
The junior doctors, with their increased clinical experience, performed significantly better than the students. This may reflect the clinical emphasis of the questions, or the fact that junior doctors specialising in paediatrics were likely to be more interested in the subject and so would be expected to do better, irrespective of when they were assessed.
Did students perform differently in their official RCSI end-of-year examinations compared with the MAC examination?
Individual students’ performance in the MAC examination was compared with their performance in the official RCSI university paediatric examinations. Each student’s rank within the class was calculated for each examination, allowing determination of whether an individual’s performance relative to their peers was consistent across the two types of examination (MAC or official RCSI examinations), or whether they performed differently on each. A statistically significant positive correlation was found between an individual’s MAC score and their score in the official RCSI paediatric final assessments.
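For readers interested in the mechanics of this rank-based comparison, the sketch below (hypothetical per-student scores, not study data) shows one way it could be computed; Spearman’s correlation is equivalent to correlating each student’s within-class rank on the two examinations:

```python
# Minimal sketch of the rank-based comparison described above.
# Scores are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-student scores on the two examinations (%)
mac_scores = rng.normal(loc=46, scale=10, size=54)
official_scores = 0.6 * mac_scores + rng.normal(loc=70, scale=8, size=54)

# spearmanr ranks each array internally, so it compares an individual's
# standing relative to peers across the two examinations
rho, p_value = stats.spearmanr(mac_scores, official_scores)
print(f"Spearman's r = {rho:.2f}, p = {p_value:.3f}")
```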
Did students from a different academic institution perform in a similar way relative to their final results?
In total, 54 QUB students sat the MAC examination. There was a statistically significant positive correlation (Spearman’s r = 0.30 [p = 0.029]) between QUB students’ rankings on the MAC examination and their ranked performance in the paediatric component of their official summative university written examination. This was similar to the correlation between the RCSI students’ MAC examination results and their paediatric examination results (r = 0.44 [p < 0.01]).
Overall, while the gross scores may have differed between the MAC and official university examinations, both assessments ranked individuals in a similar way. This is reassuring, as examination results are often used as criteria for shortlisting and appointing junior doctors to training schemes and stand-alone posts.
Study limitations
Fifteen consultant clinicians provided the 71 questions for the MAC examination. There might have been even greater breadth and diversity in the questions had a larger number of paediatricians, drawn from all regions of the country, contributed.
The results of this study may also have been influenced by the reliance on volunteers to provide questions. These consultants self-selected to a certain degree, and our sample may not accurately reflect the opinion of the ‘average’ paediatric clinician. However, their contribution is extremely valuable, as these individuals were sufficiently motivated to contribute to this work.
Both the undergraduate students and the junior doctors who sat the examination did so voluntarily, so the results may reflect a more motivated population than the cohort overall. In the junior doctor cohort, the 93% response rate makes it unlikely that this had an important effect. In the undergraduate cohort, the proportion of eligible candidates volunteering was lower, so the chance of selection bias is greater. However, there was a significant positive correlation between their MAC results and their official university results, and as these rankings did not merely cluster at the top of the class, it is clear that it was not only the highest-achieving students who volunteered to sit the examination.