CME was first introduced in the US in the 1920s–1930s (2) and was adopted by the American Academy of General Practice in 1947 (1). The American Medical Association currently defines CME as “educational activities that serve to maintain, develop, or increase the knowledge, skills, performance, and relationships a physician uses to provide services for patients, the public, or the profession” (3).
Although it is generally accepted, albeit on evidence of low quality, that CME is effective for the acquisition and retention of knowledge, attitudes, and skills (4–7), the fundamental question about CME and CME-like activities remains whether they improve HCP performance in real-life practice and patient health outcomes. Answering this question awaits the introduction of objective measures for assessing the closure of knowledge gaps and the improvement in learners’ competency.
Direct and objective measurement of learning remains an imprecise science, with the best estimates derived from pre- and post-activity testing. Thus, the effectiveness of a CME activity has traditionally been evaluated by comparing a learner’s performance on pre- and post-activity exams. This approach, however, does not reliably translate correct exam answers into better practical performance (5–7). In fact, it has been frustratingly difficult to measure objectively the relationship between teaching, learning, clinical competence, and patient outcomes (8, 19–21). In the absence of such direct and objective data, analyses have frequently relied upon self-reported gains in knowledge and behavior change attributed to participation in the CME activity (9, 10, 22). However, subjective evaluation based on self-reported knowledge gain is even less reliable (23, 24).
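To illustrate the conventional pre-/post-testing approach discussed above, the short Python sketch below computes a raw and a normalized knowledge gain from hypothetical exam scores; the function names and score values are assumptions for illustration only and are not part of any scoring system described in this article.

```python
# Illustrative sketch (hypothetical data): raw and normalized gain from
# pre-/post-activity exam scores, the metric traditionally used to judge
# CME effectiveness.

def raw_gain(pre: float, post: float) -> float:
    """Absolute change in score (both expressed as percentages)."""
    return post - pre

def normalized_gain(pre: float, post: float) -> float:
    """Gain as a fraction of the maximum possible improvement."""
    if pre >= 100.0:
        return 0.0
    return (post - pre) / (100.0 - pre)

if __name__ == "__main__":
    # Hypothetical learner: 55% before the activity, 80% after.
    pre_score, post_score = 55.0, 80.0
    print(f"raw gain: {raw_gain(pre_score, post_score):.1f} points")
    print(f"normalized gain: {normalized_gain(pre_score, post_score):.2f}")
```

Even when such gains are computed objectively, they remain a measure of exam performance rather than of practice behavior, which is the limitation discussed above.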
Even the frequently used and cited Moore’s Seven Levels of CME outcomes and Miller’s pyramid of assessment rely largely on subjective, self-reported observations (5–7, 24). Objective evaluation of improved performance, such as chart reviews and direct observation of a learner’s skills in the practice setting, is far more cumbersome and rarely, if ever, employed outside the research setting.
In the past several years, simulation and other interactive techniques have been introduced into the CME world (11–14). These techniques not only offer an effective approach to learning but also improve the assessment of the effectiveness of digital learning.
VPS is defined as an interactive computer- or mobile-application-based simulation of real-life clinical scenarios used for the training of health professionals. Simulation allows learners to practice skills and improve their critical thinking and competency without any risk to a patient (25). VPS has been successfully employed in teaching medical and nursing students, residents, and practicing physicians (26–28).
In 2019, Kononowicz and colleagues (13) performed a systematic review of the effectiveness of VPS in professional education following the Cochrane methodology, searching databases from 1990 to September 2018. They analyzed a total of 51 trials with 4696 participants comparing the effectiveness of VPS with that of traditional education. The authors found comparable results for knowledge and results favoring VPS for skills. They concluded that VPS can be more effective in improving skills and is at least as effective as traditional education in improving knowledge.
Herein we report the constructive and effective use of a new-generation VPS platform to objectively measure and compare: a) a multitude of learning and performance parameters; b) the performance of an individual learner relative to their peers; c) the performance of an individual learner relative to a given group of learners; d) the relative performance of one group of learners to another; e) the improvement in knowledge/competency of individual learners or groups of learners; and f) the durability of learning.
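As a minimal illustration of the kind of comparative analytics listed above, the Python sketch below computes an individual learner’s standing relative to a peer group and a simple group-level improvement summary from hypothetical VPS performance scores; the data, field names, and functions are assumptions for illustration and do not reflect the platform’s actual schema or algorithms.

```python
# Minimal sketch (hypothetical data): comparing a learner's VPS score to a
# peer group and summarizing group-level improvement between two sessions.
from statistics import mean

def percentile_rank(score: float, peer_scores: list[float]) -> float:
    """Percentage of peers whose score is at or below the learner's (0-100)."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

def group_improvement(pre: list[float], post: list[float]) -> float:
    """Mean change in score for a group between two VPS sessions."""
    return mean(post) - mean(pre)

if __name__ == "__main__":
    peers = [62.0, 70.0, 75.0, 81.0, 90.0]   # hypothetical peer scores
    learner = 78.0
    print(f"learner percentile: {percentile_rank(learner, peers):.0f}")

    group_a_pre = [55.0, 60.0, 58.0]
    group_a_post = [72.0, 70.0, 75.0]
    print(f"group A mean improvement: "
          f"{group_improvement(group_a_pre, group_a_post):.1f} points")
```

The same kind of comparison, applied to repeated sessions over time, underlies the durability-of-learning measure mentioned in item f).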
In addition, analysis of VPS performance reveals the prescribing patterns of individual learners and of groups of learners, allowing an assessment of their familiarity with current guidelines and the available array of medications.
The analysis brings to light not only the selection of medications but also the timing of drug initiation and termination, as well as patterns of combination therapy. Analysis of learners’ performance in selecting medications also reveals their knowledge of, and adherence to, national guidelines and local formularies as appropriate. Furthermore, the VPS methodology makes it possible to demonstrate how familiar learners are with the side effects of various medications and how well they know how to prevent or manage them. Finally, VPS analysis allows the activity provider to objectively characterize learners’ prescribing behavior, including potential prescription errors (29).
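To make the idea of prescribing-pattern analysis concrete, the sketch below checks a hypothetical log of in-simulation prescriptions against an assumed guideline-recommended list and flags deviations; the drug names, the guideline set, and the record fields are invented for illustration and do not reflect any actual formulary or the platform’s internal logic.

```python
# Illustrative sketch (hypothetical data): flagging prescriptions in a VPS
# session log that fall outside an assumed guideline-recommended set.
from dataclasses import dataclass

@dataclass
class Prescription:
    drug: str
    start_hour: int            # simulated time the drug was started
    stop_hour: int | None = None   # None if never discontinued

# Assumed first-line agents for the simulated scenario (illustrative only).
GUIDELINE_FIRST_LINE = {"drug_a", "drug_b"}

def flag_off_guideline(log: list[Prescription]) -> list[str]:
    """Return the names of prescribed drugs not in the assumed first-line set."""
    return [p.drug for p in log if p.drug not in GUIDELINE_FIRST_LINE]

if __name__ == "__main__":
    session = [
        Prescription("drug_a", start_hour=1),
        Prescription("drug_x", start_hour=2, stop_hour=6),  # off-guideline choice
    ]
    print("off-guideline prescriptions:", flag_off_guideline(session))
```

In practice, a comparable analysis would also examine the timing fields (initiation and termination) and combinations of drugs active at the same simulated time, as described above.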
Traditional pre- and post-activity testing may still be employed alongside a VPS-based educational activity, but in such cases the VPS data drive an objective assessment of what learners actually did while addressing a variety of clinical scenarios.
We believe the success of the described VPS system owes much to its game-like design. Medical education is changing at a fast pace. Both graduate and post-graduate medical education incorporate technology-enhanced active learning and multimedia education tools into their curricula. Gamified training platforms include educational games, mobile medical apps, and virtual patient scenarios. VPS has consistently been shown to enhance learning outcomes in general, and gamification has the potential to further improve learning, engagement, and cooperation (30). In particular, combining VPS with gamification may promote risk-free healthcare decision-making, remote learning, learning analytics, and rapid feedback.
The limitations of the VPS-based approach relate primarily to the use of technology. First, developing VPS cases requires specific technical expertise to translate a “written” case into a virtual simulation with branching pathways driven by the simulated progression of disease and the effects of treatment.
Second, learners who are already familiar with the software (from prior experience with VPS or games) have an advantage, as this familiarity helps them avoid computer-related errors.
Third, future quality-assurance studies are still needed to evaluate whether the closure of “gaps in knowledge”, an “increase in competence”, and “performance improvement” translate into learners’ real-life practice. As VPS, particularly in combination with gamification, competes with and eventually replaces the traditional CME approach, such studies will inevitably be performed.