Our study explored the original and previously unanswered question of students' perceptions of validation and assessment with simulation. It underlined several issues regarding the conditions and development of simulation-based assessment, with a focus on its benefit for students' hospital clerkships and on the advantage of a high-quality assessment in highlighting practical skills throughout the curriculum. However, under some conditions, students also found it a biased and flawed assessment system that did not prepare them for the high-stakes assessments of their curriculum. They described a change in the way they prepared for and approached the simulation-based courses. Following the grounded theory methodology, we can formulate several hypotheses from the analysis of students' perceptions of simulation-based assessment.
Students stressed the teaching and assessment requirements within their clinical rotations, although this topic was not part of our interview guide. The subject was always discussed in comparison and opposition to traditional teaching and assessment in medical school. First, students found a real benefit in being certified through simulation-based assessment, not only because it covered competencies that are rarely assessed, but also because it gave them self-confidence when facing hospital teams and reassured them about managing new clinical situations. This highlighted an important fact: medical students evolve in a dual system, between university and hospital. They consider the university a uniform, unbiased system for teaching and assessment, in opposition to hospital clerkships, where large heterogeneity in teaching and assessment methods is reported. This configuration can be understood through Engeström's activity theory (24): the two systems share the same object (medical education) but not the same final objective, rules, or division of labor. The main objective of hospitals is patient outcomes, whereas the medical school's objectives are students' learning and certification. A contradiction therefore exists between the two systems because of these different objectives. Simulation-based training (SBT) and assessment could create a link between them, as suggested by Berragan et al. (25).
Another change induced by simulation-based assessment was the different way students prepared for the simulation-based courses as a whole. The willingness to engage with the simulation-based course because of the final assessment was new and could help students change their learning practices. With this assessment, students knew the aim of the courses, could prepare for it, and had a certain autonomy to do so. Autonomy, motivation and control in learning are factors that foster self-regulated learning and encourage students to be actors of their own learning (26). Students then have an intrinsic motivation to succeed, which is associated with deeper learning and increased control over their own outcomes, and which could decrease feelings of distress (27).
When students identified simulation-based assessment as an unfair assessment, they mentioned the subjectivity and lack of authenticity of the exercise. However, subjectivity is one of the inherent pitfalls of competency-based assessment (28). A competency is by nature a multicomponent object, with an externalized, measurable component (the performance) but also hidden components such as the mobilization of internal resources or clinical reasoning. Our hypothesis is that medical students rejected this subjectivity because it is not aligned with their "students' culture" (29). Indeed, for the past decades, a quality assessment has been defined as quantitative and objective. The challenge for faculties is to understand this subjectivity and to deal with it, in order to build new frameworks of assessment different from MCQs (3). Accepting that subjectivity is part of competencies will help educators and students to adopt quality assessments, such as multimodal ones using several tools and situations within a whole programmatic assessment (30). Here, simulation has a great role to play, because it offers controlled, reproducible and reliable environments in which the subjective component can itself be contained.
Discussions on the practical aspects of simulation-based assessment highlighted two major characteristics of such an assessment: feedback and rating. Normative assessment usually does not provide feedback, only grades and rankings. Simulation-based assessment could provide both, and give students assurance in their skills. Grades were not fully endorsed by our students: they preferred knowing they had the skills to being ranked (31). Other authors have reported similar results, with a specific element: the further students progressed in the curriculum, the less motivated they were by grades (32). Grades engage extrinsic motivation, which is linked to short-term memory and surface learning, unlike intrinsic motivation, which fosters self-development, satisfaction with the accomplished task and increased efficacy (27). Our results also suggest an ethical issue with this assessment. As previously shown, debriefing is an essential part of simulation-based training (33,34). During the assessment sessions, debriefing was shorter than during the training sessions. It was nevertheless appreciated by the students, because it was the first time they were provided individual feedback immediately after an assessment. However, by simulation practice standards, it could be viewed as a short, weak debriefing and could contribute to the perception of simulation-based assessment as unfair. A possible improvement would be to give each student personal feedback, with individual areas for improvement (35).
Another ethical concern is the stress linked to the assessment process. Simulation-based training is supposed to be a safe environment in which to learn, with the possibility of making errors and learning from them. Yet it has been shown to be a stressful environment (36). If we add the stress of assessment, it could divert SBT from one of its important aims: safe learning.
For these reasons, caution should be applied to simulation-based assessment for medical students, and we should improve our simulation tools and environments.
Limitations
This study has some limitations. First, it is a single-center study, although it included two different simulation-based assessments and many hospital clinical rotations, across which students had different experiences. Second, we could not interview teachers, although this was initially part of our project: they declined to participate in focus groups or individual interviews, except informally, citing a lack of time. Finally, to analyze the two activity systems, we would have needed to conduct observations at the hospital and interviews with other actors in medical students' training, such as senior lecturers, residents and medical school teachers.