Preparedness for practice: Is there a disparity between what undergraduate medical students are taught and what is felt to be essential knowledge in the postgraduate paediatric domain?



Abstract

Background
This study aimed to evaluate how undergraduate students and junior paediatric doctors performed against an examination of knowledge set by non-academic consultants at a "must-know" level for starting in paediatrics. We named this the Minimum Accepted Competency (MAC) examination.

Methods
The examination (comprising 30 MCQ items) was delivered to undergraduates and to new junior paediatric doctors. Results were used to ascertain whether participants had reached a clinician-determined MAC, were compared with official university examination results, and were used to compare student with junior doctor performance.

Results
A total of 478 participants took part. The mean MAC score was 45.9% for students and 64.2% for doctors, significantly higher for doctors (p < 0.01). A significantly smaller proportion of students passed the MAC than passed their official university examinations (68% vs 97%). A Spearman's rank correlation coefficient showed a moderate but statistically significant positive correlation between students' results in their official university examinations and their scores in the MAC examination.

Conclusion
This work demonstrates a disparity between student and junior doctor levels of knowledge and consultant expectations, based on an exam of what front-line paediatricians determined to be "must-know" standards. This study demonstrates the importance of involving end-users and future supervisors in undergraduate teaching.

Background
Every year a fresh group of medical graduates starts work for the very first time and becomes responsible for clinical decision-making and the treatment of patients. Senior medical staff provide guidance and supervision, but newly qualified doctors are expected to carry out aspects of the job independently by virtue of their undergraduate medical training. Despite this, it is recognised that early-career junior doctors identify a number of gaps between what they were taught during their undergraduate years and their clinical work as a doctor (1). A General Medical Council (GMC) report exploring the extent to which United Kingdom (UK) medical graduates are 'prepared for practice' recognised that newly qualified doctors feel unprepared in many areas of their daily practice and recommended transition interventions, such as assistantships or work-shadowing, to address this (2).
Many undergraduate curricula and assessment strategies are designed by academic doctors employed by universities, and there is little evidence of input from 'non-faculty' clinicians (3). However, after qualifying, it is often the 'non-faculty' clinicians supervising graduates who set the standard of what is expected in their clinical practice. While clinical knowledge and skills are not the only desired outcomes of an undergraduate program, they remain core to most courses.
At undergraduate level, contributions from non-academic clinicians are often informal. This contrasts with the postgraduate exam approach, which actively encourages and seeks out non-academic clinician input (4). There is a paucity of published literature on what level of knowledge is expected of new trainees by clinical consultants working in frontline paediatrics. While undergraduate and postgraduate training curricula are explicit, there is no clear roadmap or specific clinical guidance documenting what is expected of the trainee as they start in paediatrics, other than an extrapolation from an undergraduate university assessment, which may reflect a more general graduate requirement. This study aims to evaluate how undergraduate students perform against an examination of knowledge set solely by non-academic clinicians at a level that they deemed was "must-know", i.e. the basic level of knowledge they would expect from a junior doctor starting in their service and in paediatrics for the first time.
We created a multiple choice question (MCQ) examination to sample areas of paediatric knowledge, designed to determine whether undergraduate learning prepared the future graduand to meet the expectations of their potential future supervisors. This was named the 'Minimum Accepted Competency (MAC)' examination. Both students' results and class rank in this exam were compared with their undergraduate performances in the official university final paediatric assessment. The same exam was also given to junior doctors who had just started basic specialist training in paediatrics.

Methods
This was a prospective cohort study of undergraduate students from two large medical schools in Ireland who had completed their paediatric rotations, and of paediatric junior doctors on the basic specialist training (BST) scheme (overseen by the Royal College of Physicians of Ireland (RCPI)). Undergraduates were assessed before graduation and BST trainees within two months of starting their post in paediatrics. Ethical approval for the project was obtained from the Royal College of Surgeons in Ireland (RCSI). Consultant Paediatricians working in clinical practice were contacted via the RCPI mailing list and asked to provide exam questions for use in this examination. They were asked to generate questions based on "must know" information that, in their opinion, was necessary for any junior doctor starting their first post in paediatrics. Each clinician was asked to submit examination questions in 'multiple choice question' (MCQ) or 'true/false' format, at a difficulty level they considered to be a minimally acceptable level of paediatric knowledge for a junior doctor starting work within the specialty. An academic trained in assessment and MCQ writing, who was not directly involved in the study, reviewed the questions for clarity and language; neither content nor difficulty level was changed. A bank of questions was created and a random number generator was then used to choose 30 questions to form the research examination (MAC) paper. The questions were then further assessed by the undergraduate academic paediatric faculty of the RCSI at the annual departmental examination meeting, where the final paediatric undergraduate examination paper is standard set using a modified Angoff technique (4).
Angoff is used to set a standard for the paper and generate a 'passing score'. Academic staff participating in the standard setting were blinded to whether questions formed part of the official university written examination or comprised part of this research study. The same 30-item MAC examination was delivered to both qualifying undergraduates (to assess performance from undergraduate learning) and new junior doctor recruits (to assess preparedness for the clinical role).
Undergraduate results of the MAC examination were compared with student university final results in paediatrics to determine how research examination scores correlated with standard undergraduate assessments in current use.
Both undergraduates and postgraduates were approached for consent to participate in the study and sit the MAC exam. Undergraduates from RCSI and QUB were recruited. Undergraduates were approached following completion of their paediatric rotation, which occurred in the penultimate undergraduate year. RCSI students were recruited over two years. In the first year, consenting students took the MAC exam at the end of the academic year, at the same time as their final paediatric paper. In the second year, students took the MAC exam immediately following their six-week clinical paediatric rotation. The QUB cohort also took the MAC examination at the end of their six-week clinical paediatric rotation.
For the BST cohort, paediatric junior doctors were approached for consent to join the study during the first paediatric BST training day of the new academic year in October 2016 and undertook the examination that day.
Each of the examinations took place under standard examination conditions and was invigilated by the study investigator. Consent was obtained and paper examination sheets were distributed to each participant. These were collected and marked at the end of the examination. MAC examination papers were destroyed once the mark had been transferred to the research database.
For RCSI undergraduates, MAC examination results were compared with numerical student scores from their official university end-of-year paediatric written examination. For all undergraduate participants (RCSI and QUB), class rank in the MAC was compared with class rank in the final paediatric exams.
Results were analysed using SPSS version 24.0 (5). Overall examination results were reported as the mean and standard deviation, with the proportion of students achieving the standard-set passing score described. Non-parametric data were described using median (range) or median (interquartile range). Normally distributed group data were analysed using Student's t-tests. A p-value < 0.05 represented statistical significance.
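The analyses described above can be sketched in Python with SciPy; the study itself used SPSS, and all data below are simulated for illustration only (group sizes and means are taken from the reported results, the standard deviations and seed are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated MAC percentage scores; means mirror the reported 45.9% / 64.2%,
# but the spreads (12 and 10) and cohort split are illustrative assumptions.
student_scores = rng.normal(45.9, 12.0, 424)  # undergraduate cohort
doctor_scores = rng.normal(64.2, 10.0, 54)    # junior doctor cohort

# Independent-samples t-test for normally distributed group data
t_stat, p_value = stats.ttest_ind(doctor_scores, student_scores)

# Spearman's rank correlation between MAC and official exam scores;
# the 'official' scores are simulated to correlate with MAC by construction.
mac = rng.uniform(20, 80, 100)
official = mac + rng.normal(0, 15, 100)
rho, rho_p = stats.spearmanr(mac, official)

print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
print(f"Spearman rho = {rho:.2f}, p = {rho_p:.3g}")
```

Spearman's method is used here, as in the study, because it compares ranks rather than raw scores, so it tolerates the two examinations being marked on different scales.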
The results of the MAC examination were analysed to determine if participants had reached a clinician-determined minimum accepted competency. Spearman's rank correlation was performed to determine the relationship between students' performance in the MAC examination and their performance in their official RCSI paediatric examinations.
For the postgraduate arm of the study, results were analysed to determine if there was a difference in the performance between the paediatric junior doctors and the undergraduate students.
Institutional ethical approval did not allow for a direct comparison of the individual results of QUB and RCSI students. However, for the purpose of investigating consistency in the performance of students across two different institutions, we calculated a correlation between QUB students' rank in the MAC examination and their rank in their official QUB paediatric examination, and compared this with the corresponding correlation between RCSI students' rank in the MAC examination and their rank in their official RCSI paediatric examination.

Results
Non-academic clinicians registered with the RCPI (Paediatric division) were contacted by e-mail. The email request was delivered to 238 out of 247 (96%) members of RCPI. A total of 76 questions (5 duplicates) were contributed by 15 consultants. The questions were formatted in a 'best of 5' MCQ structure to match the question format in use at the time for paediatric undergraduates at RCSI. A random number generator was used (randomly selecting numbers between 1 and 71) and the first 30 selected were used as the MAC examination.
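The paper-selection step amounts to sampling 30 distinct items from the 71 de-duplicated questions. A minimal sketch (question numbering and the seed are placeholders, not the study's actual generator):

```python
import random

# Hypothetical question bank: the 71 de-duplicated items, numbered 1-71.
question_ids = list(range(1, 72))

rng = random.Random(2016)  # arbitrary seed, for reproducibility of this sketch
mac_paper = rng.sample(question_ids, 30)  # 30 distinct items, no repeats

print(sorted(mac_paper))
```

`random.sample` draws without replacement, which matches taking the first 30 distinct randomly generated numbers between 1 and 71.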
The questions on the MAC examination came from a diverse selection of sub-specialty and general paediatricians. The questions therefore tested a wide range of common and clinically important areas within paediatrics, including seizures, lower respiratory tract infections, growth and emergencies.
Using a modified Angoff technique in a blinded setting, 9 members of the RCSI faculty calculated a passing score of 41.2%, equating to a passing score of 13/30 on the MAC examination.
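The conversion from the Angoff percentage to a whole-item pass mark is simple arithmetic: 41.2% of 30 items is 12.36 marks, which rounds up to 13/30. As a sketch (rounding up is an assumption consistent with the reported figures; the panel's exact rounding rule is not stated):

```python
import math

# Angoff standard setting: judges estimate, per item, the chance that a
# minimally competent candidate answers correctly; the mean sets the cut score.
angoff_percent = 41.2   # blinded panel's aggregate estimate for the MAC paper
n_items = 30

raw_cut = angoff_percent / 100 * n_items  # 12.36 marks
passing_score = math.ceil(raw_cut)        # round up to a whole item

print(passing_score)  # 13
```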

Discussion
Neither the students nor the junior doctors appeared to meet the "must know" standard. Does this reflect poor preparation or unrealistic expectations?
Frontline non-academic paediatric clinicians were asked to provide clinical questions based around essential knowledge for practice. Despite the instruction to the question providers that MAC questions should reflect 'must know', 'basic knowledge' and a 'minimum accepted competency', this exercise yielded a relatively low passing score, reflecting the difficulty of the questions being asked. Each individual submitting questions would have described the set standard for their own questions as 100% (i.e. "must know"), but it is possible that this was an unrealistic expectation for undergraduate students. Contributors were not asked to assess the standard of other submitted questions. This would have been a useful exercise, but it was beyond the scope of this study to recruit and train non-academic clinicians in standard setting. When considered in the context of "must know" information, the average score of 45-46% for undergraduate students was considerably less than would be expected by clinicians working at the 'frontline' of general paediatrics. This may reflect unrealistic expectations, or a curricular emphasis on alternative content.
Reassuringly, the paediatric 'junior' doctors about to embark on their paediatric career performed significantly better than the medical students. This is an important finding, as the MAC examination was designed as a test of knowledge required for 'on the ground' clinical practice. These participants appeared to have benefited from a year of clinical practice, albeit not in paediatrics. However, their results still did not match the "must know" standard initially expected by the clinical paediatricians setting the questions.
Why was the MAC examination standard set so low if the questions were meant to test 'must know', 'basic' knowledge?
This reflects a difference in opinion between the expected standards of faculty for undergraduate students and those of non-academic clinicians for junior doctors in paediatrics, with the latter seemingly expecting a higher level of knowledge. However, perhaps rather than a 'higher level' of expected knowledge, non-academic clinicians expected a different type of knowledge. It is possible that an undergraduate focus on traditional 'textbook' facts did not align with the clinicians' focus on practical aspects of the job, which are particularly relevant to everyday clinical practice. This potential difference in knowledge or focus warrants further investigation at undergraduate level and possibly intervention at early postgraduate level for those planning to practice in paediatrics. There is a move in some third-level institutions to revisit the structure of their undergraduate teaching to increase focus on clinical practice and the broader non-clinical skills required by physicians (6).
All of the universities on the island of Ireland have recently collaborated to develop a national undergraduate paediatric curriculum. This will go some way towards standardising the knowledge acquired by graduates working in Ireland and is a great opportunity to revisit how undergraduate programs are taught. This process should incorporate the views of a wide range of 'non-academic' paediatric clinicians to ensure that it can bridge the gap between what is taught and assessed at undergraduate level and what is practically important in the workplace. This study highlights the difficulty of attempting to deliver an undergraduate course that both establishes a core of basic paediatric knowledge and prepares a student for the postgraduate clinical environment. However, undergraduate medical education is not merely about transferring knowledge to future medical practitioners. It is also about developing transferrable general clinical and non-clinical skills required for good medical practice, including Human Factors, and engendering the skills for lifelong self-directed learning. Bridging this 'gap' may not necessarily be the responsibility of the university, which is preparing graduates to work as general physicians rather than subspecialists; rather, the postgraduate training bodies should perhaps identify ways in which this type of knowledge is provided and assessed prior to entry to the training scheme. This could be delivered in a short induction course, and the transitional period of assistantship that many universities now have in place would seem a suitable time to do this. It is anticipated that the results of this study can inform the content of transition interventions to better prepare graduates for practice.
Did the students perform differently from year to year?
RCSI students have two paediatric examinations that contribute to their final marks. The first is a clinical examination taken immediately after the six-week paediatric rotation, when students are fresh from their paediatric clinical experience. The second is a written MCQ paper given to all students at the end of the academic year, when students have been focusing on knowledge acquisition. There was no significant difference in MAC examination results between the two years of RCSI students, despite the fact that one year sat the assessment at the end of their paediatric rotation and the other at the end of the academic year. In addition, the fact that two large groups of students obtained such similar results suggests that this examination is reproducible from year to year.
The junior doctors, with their increased clinical experience, performed significantly better than the students. This may reflect the clinical emphasis of the questions, or possibly that junior doctors specialising in paediatrics were likely to be more interested in the subject and so would be expected to do better, irrespective of when they were assessed.
Did students perform differently in their official RCSI end-of-year examinations compared with how they performed in the MAC examination?
Individual students' performance in the MAC examination was compared with their performance in the official RCSI university paediatric examinations. A student's rank within the class was calculated for each examination and compared to their rank in the other examination. This allowed determination of whether an individual's performance on one type of examination (MAC or official RCSI examinations) was consistent, or whether they performed differently, relative to their peers, on different examinations. A statistically significant positive correlation between an individual's MAC score and their score from official RCSI paediatric final assessments was found.
Did students from a different academic institution perform in a similar way relative to their final results?
In total, 54 QUB students sat the MAC examination. There was a statistically significant positive correlation (Spearman's r = 0.30 [p = 0.029]) between QUB students' ranking on the MAC examination and their ranked performance on the paediatric aspect of their official summative university paediatric written examination. This was similar to the correlation between the RCSI students' MAC examination results and their paediatric examination results (r = 0.44 [p < 0.01]).
Overall, while the raw scores may have differed between the MAC and the official university exams, both assessments ranked individuals in a similar way. This is reassuring, as exam results are often used as criteria for shortlisting and appointing junior doctors to training schemes and stand-alone posts.

Study limitations
There were 15 consultant clinicians providing 71 questions for the MAC examination. There might have been even greater breadth and diversity to the questions if a greater number of paediatricians from all regions of the country had contributed.
The results of this study may have been influenced by the fact that it relied on volunteers to provide questions. These consultants therefore self-selected to a certain degree, and our sample may not accurately reflect the opinion of the 'average' paediatric clinician. However, their contribution is extremely valuable, as these individuals were sufficiently motivated to contribute to this work.
Both the undergraduate students and junior doctors who sat the exam did so voluntarily, and so the results may reflect a more motivated population than the cohort overall. In the junior doctor cohort, the 93% response rate makes it unlikely that this would have an important effect. In the undergraduate cohort, the proportion of possible candidates volunteering for the exam was lower, so the chances of selection bias are greater. However, there was a significant positive correlation between their MAC results and their official university results. As these rankings did not merely cluster at the top of the class, it is clear that it was not just the highest-achieving students who had volunteered to do the exam.

Conclusion
This study suggests there is a knowledge disparity between what is taught and assessed in the undergraduate domain and what is expected as essential knowledge in the postgraduate domain. Increasing co-operation between academic and experienced non-academic clinicians should help to bridge this gap. Transition interventions such as assistantships and work-shadowing would seem to provide a platform for this. It is anticipated that studies such as this will help inform the content of such interventions to ensure that future junior paediatric doctors are optimally prepared for practice.