Medical education has evolved from a focus on the process of education to a focus on outcomes and the demonstration of competence. This shift is founded, in part, on the work of Stuart E. Dreyfus and Hubert L. Dreyfus, who developed a model of skill acquisition through formal instruction and practice1. The Dreyfus model proposes that a student passes through five distinct stages: novice, competence, proficiency, expertise, and mastery. Modeling growth as a progression through developmental stages has proven useful in many fields; examples include Piaget’s stages of cognitive development2, Kohlberg’s stages of moral development3, and stage-sequential models of reading development4 and paired associate learning5.
Progress testing assesses learner growth over time through the repeated administration of examinations of similar content and difficulty across the curriculum. The tests are anchored to end-of-curriculum expectations. Progress testing can be used for formative feedback or for summative decision making, and progress tests can complement other summative assessments given at endpoints of training, such as the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge examinations administered nationally to medical students.
In 2016, the Michigan State University College of Human Medicine adopted an innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) and Customized Assessment Services (CAS) tests for progress testing twice per semester across the five semesters of the pre-clerkship curriculum. Minimum expectations for examination performance are established for each semester. The examinations contribute to students’ grades and inform decisions about progression within the curriculum. Students must meet all curricular expectations, including performance on these progress tests, before sitting for USMLE Step 1 and entering clerkships.
Methodological issues may limit the generalizability of progress tests to larger-scale contexts and their ability to predict future performance. For example, previous studies correlated scores on each iteration of a progress test with USMLE Step 1 results independently6,7 and found that scores on later progress tests were highly correlated with Step 1 performance; however, these studies ignored the growth paths of performance across the progress tests. Another branch of studies modeled the growth of medical knowledge using progress tests8, but the modeled growth was not used to predict USMLE Step 1 results. Thus, it remained unclear to medical educators how best to use these tests to confirm the effectiveness of the curriculum and to predict student performance on USMLE Step 1.
In this study we employed Markov chain methodology9,10 to evaluate medical students’ dynamic trajectories on the NBME CBSE and CAS examinations given as progress tests and to predict their USMLE Step 1 performance. We selected this method because, in contrast to traditional ANOVA models, a Markov chain model explicitly conditions each state on the previous one, naturally generating each student’s growth pattern based on estimated steady states. This also contrasts with Growth Mixture Modeling (GMM)11, another approach to modeling growth over time, which estimates subgroup rather than individual growth patterns. These individual growth patterns, in turn, can be used to predict Step 1 performance parametrically.
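To make the Markov property concrete, the mechanics can be sketched with a small, purely illustrative three-state example; the state labels follow the framework described below, but the transition probabilities here are hypothetical, not estimates from our data. Each row of the transition matrix conditions the next state only on the current one, and the chain’s steady-state distribution follows from its dominant left eigenvector.

```python
import numpy as np

# Hypothetical 3-state chain: Novice, Advanced Beginner, Competent.
# Row i gives the probability of moving to each state at the next
# progress test, conditional only on the current state i (the Markov property).
states = ["Novice", "Advanced Beginner", "Competent"]
P = np.array([
    [0.6, 0.3, 0.1],   # from Novice
    [0.0, 0.7, 0.3],   # from Advanced Beginner
    [0.0, 0.1, 0.9],   # from Competent
])

# Distribution over states after 5 tests, starting in the Novice state.
pi0 = np.array([1.0, 0.0, 0.0])
pi5 = pi0 @ np.linalg.matrix_power(P, 5)

# Steady-state distribution: left eigenvector of P with eigenvalue 1,
# normalized to sum to one.
vals, vecs = np.linalg.eig(P.T)
steady = np.real(vecs[:, np.argmax(np.real(vals))])
steady = steady / steady.sum()
```

In this toy chain the Novice state is transient, so the steady state concentrates all probability mass in the Advanced Beginner and Competent states; a student’s estimated matrix determines how quickly their distribution approaches that limit.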
The Markov chain approach to assessing growth in medical knowledge can be described as movement through several distinct states of knowledge, as proposed by the Dreyfus model. At the beginning, students have limited knowledge of medicine despite completing prerequisite science courses, and hence their performance is expected to be well below the expectations for passing USMLE Step 1. This can be modeled by a Novice state, in which the probability of providing a correct answer is low. At the end of a course of study, students have attained a depth of medical knowledge and hence have a very high probability of passing USMLE Step 1; we call this the Competent state. Depending on their learning strategies, students may pass through several intermediate states, dubbed Advanced Beginner states, in which they have a growing but incomplete medical knowledge base. The number of latent states and the thresholds of each latent state can be estimated with the latent Markov model12. Each student’s transition dynamics from the Novice state through the Advanced Beginner states to the Competent state can then be used to predict USMLE Step 1 performance. We hypothesized that students with higher transition probabilities to the Competent state would perform better on Step 1. This study has three aims: (1) to identify the latent stages medical students pass through in the first two years of medical school using progress test results, (2) to identify students’ transition probabilities among these stages, and (3) to predict USMLE Step 1 results from those transition probabilities. Findings from our study could be valuable for the diagnosis and remediation of individual medical students.
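As a simplified illustration of aim (2): in the actual analysis the states are latent and estimated with the latent Markov model12, but if each student’s state at every progress test were already classified, the maximum-likelihood transition matrix would follow from counting observed state-to-state moves and normalizing each row. The student sequences below are hypothetical and serve only to show the computation.

```python
import numpy as np

N_STATES = 3  # 0 = Novice, 1 = Advanced Beginner, 2 = Competent

def estimate_transitions(sequences):
    """Maximum-likelihood transition matrix from observed state sequences."""
    counts = np.zeros((N_STATES, N_STATES))
    for seq in sequences:
        # Count each consecutive pair (state at test t, state at test t+1).
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize rows; rows with no observed transitions stay all-zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Hypothetical state sequences for three students across five progress tests.
students = {
    "A": [0, 0, 1, 2, 2],
    "B": [0, 1, 1, 1, 2],
    "C": [0, 0, 0, 1, 1],
}

P_hat = estimate_transitions(list(students.values()))
```

Fitting the same computation per student (rather than pooled, as here) yields individual transition probabilities into the Competent state, which is the kind of student-level quantity the hypothesis above relates to Step 1 performance.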