Background: Multiple-choice questions (MCQs) are widely used to measure students’ progress, and post-examination analysis is usually performed to ensure an item’s suitability for question banking. Item analysis is a simple and effective method of determining the quality of MCQs using three parameters: the difficulty or passing index (PI), the discrimination index (DI) and the distractor efficiency (DE).
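The three indices named above have standard textbook definitions, which can be sketched as follows. This is a minimal illustration, not the authors' own analysis code; the specific conventions assumed here (PI as the percentage answering correctly, DI computed from upper and lower scoring groups, and a 5% response threshold below which a distractor counts as non-functioning) are common in the item-analysis literature but are not stated in this abstract.

```python
# Hypothetical sketch of conventional item-analysis formulas
# (illustrative only; not the paper's own code or exact conventions).

def difficulty_index(num_correct, num_examinees):
    """PI: percentage of examinees who answered the item correctly."""
    return 100.0 * num_correct / num_examinees

def discrimination_index(upper_correct, lower_correct, group_size):
    """DI: how well the item separates high scorers from low scorers,
    using equal-sized upper and lower groups."""
    return (upper_correct - lower_correct) / group_size

def distractor_efficiency(distractor_counts, num_examinees, threshold=0.05):
    """DE: percentage of distractors that are 'functioning', i.e. chosen
    by at least `threshold` (conventionally 5%) of examinees."""
    functional = sum(1 for c in distractor_counts
                     if c / num_examinees >= threshold)
    return 100.0 * functional / len(distractor_counts)

# Worked example: 100 examinees, 60 correct; upper/lower groups of 27 each,
# with 20 vs. 10 correct; three distractors chosen by 20, 15 and 2 examinees.
print(difficulty_index(60, 100))               # 60.0 (moderate difficulty)
print(discrimination_index(20, 10, 27))        # ~0.37 (a 'good' item)
print(distractor_efficiency([20, 15, 2], 100)) # ~66.7 (one non-functioning distractor)
```

In this example the distractor chosen by only 2% of examinees falls below the 5% threshold, so only two of the three distractors count as functioning.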
Methods: This study analysed the MCQs from the preclinical and clinical examinations of the Doctor of Medicine programme at Universiti Putra Malaysia. Forty MCQs with four options each from the preclinical examination and 80 MCQs with five options each from the clinical examination papers in 2017 and 2018 were analysed and compared.
Results: The mean DI was similar across all examinations, except for a significant reduction in the 2018 clinical examination. From 2017 to 2018, the preclinical MCQs showed an increase in the number of ‘excellent’ and ‘good’ items, whereas the clinical papers showed a reduction in DI owing to a high number of ‘poor’ questions. Comparing the two years, the number of items with no non-functioning distractors increased in both examinations. Of all the papers, the 2018 preclinical MCQs showed the highest mean DE.
Conclusion: Our findings suggest that item authors in the preclinical phase improved in constructing good-quality MCQs, while clinical-phase authors may need more training and continuous feedback. A higher number of options did not affect the difficulty of a question; however, the discrimination power and the effectiveness of the distractors might differ.