The most significant results of our study are the acquisition of nursing competencies by nursing students through clinical simulation and the differing levels of student satisfaction with this methodology depending on the evaluation strategy employed.
Firstly, professors in this study verified that most students acquired the nursing competencies required to resolve each clinical situation. This result confirms the findings of other studies that have demonstrated nursing competency acquisition by nursing students through clinical simulation [28, 29], and specifically of competencies related to critical patient management [8, 30].
Secondly, students’ satisfaction under both evaluation strategies can be considered high for most items of the questionnaire, according to their mean scores. The high level of satisfaction expressed by nursing students with clinical simulation in this study is also congruent with the empirical evidence, which confirms that this methodology is a useful tool for their learning process [5, 25, 31–34].
However, satisfaction with clinical simulation was higher when students were assessed using formative evaluation. Students’ main complaints about summative evaluation concerned the reduced time for performing the simulated scenarios and the increased anxiety during their clinical performance. Reduced time is a frequent complaint of students in OSCE [21, 35] and in clinical simulation generally [4, 5, 9]. In this study, professors, registered nurses, and clinical placement mentors tested all simulated scenarios and their checklists, verifying that the allotted time was sufficient for their resolution. Another criticism of summative evaluation is increased anxiety. Several studies have demonstrated that students’ anxiety increases during clinical simulation [36, 37], and anxiety is considered the main disadvantage of this methodology [1–9]. In this sense, anxiety may negatively influence students’ learning process [36, 37]. Although current simulation methodology can mimic the real clinical environment to a great degree, it remains questionable whether students’ performance in the testing environment truly represents their ability. Test anxiety may increase in an unfamiliar testing environment; difficulty handling unfamiliar technology (i.e., a monitor, defibrillator, or other devices that differ from those used in the examinee’s own clinical environment) or even the need to ‘act as if’ in an artificial scenario (i.e., talking to a simulator, or examining a ‘patient’ known to be an actor or a mannequin) may all compromise examinees’ performance. The best way to address these complaints is to orient students to the simulated environment beforehand [9, 19–21].
Both SBA strategies allow educators to evaluate students’ knowledge and its application in a clinical setting. However, formative evaluation is identified as ‘assessment for learning’ and summative evaluation as ‘assessment of learning’ [38]. With formative evaluation, educators are responsible for monitoring not only what students are learning in the classroom, but also the outcomes of their learning process [39]. In this sense, formative assessment by itself is not enough to determine educational outcomes [40]. Consequently, a checklist for evaluating students’ clinical performance was included in the MAES© sessions. In contrast, summative evaluation does not allow educators to correct students’ performance [39]. Gavriel [38] suggests providing students with feedback under this SBA strategy; therefore, a debriefing phase was included after each OSCE session in our study. The significance of debriefing recognised by the nursing students in our study is also congruent with most of the available evidence [12, 14, 15, 41]. Nursing students appreciate feedback about their performance during the simulation experience and consequently consider debriefing the most rewarding phase of clinical simulation [4, 5, 42]. In addition, nursing students in our study expressed that they could learn from their mistakes during debriefing. Learning from error is one of the main advantages of clinical simulation shown in several studies [4, 5, 43], and mistakes should be considered learning opportunities rather than sources of embarrassment or punitive consequences [44].
Furthermore, the nursing students who participated in our study considered the practical utility of clinical simulation another advantage of this teaching methodology. This result is congruent with previous studies [4, 5]. Specifically, our students indicated that this methodology is useful for bridging the gap between theory and practice [45, 46]. In this sense, clinical simulation has been shown to reduce this gap, shortening the distance between the classroom and clinical practice [4, 5, 45, 46]. Because this teaching methodology links theory and practice, it helps prepare nursing students for their clinical placements and future careers. According to Benner’s model of skill acquisition in nursing [47], nursing students become competent nurses through this learning process, acquiring a degree of safety and clinical experience before beginning their professional careers [48]. Although our research indicates that clinical simulation is a useful methodology for acquiring and learning competencies mainly related to the adequate management and nursing care of critically ill patients, this acquisition and learning process could be extended to most nursing care settings and their required competencies.
Limitations and future research
Although the checklists employed in OSCE have been criticized for their subjective construction [9, 19–21], the checklists in this study were constructed through the expert consensus of nursing professors, registered nurses and clinical placement mentors. In addition, the self-reported questionnaire used to evaluate satisfaction with clinical simulation has strong validity. All simulated scenarios were similar in the OSCE and MAES© sessions (same clinical situations, patients, actors and number of participating students), although the debriefing method employed after them differed. This difference was due to the shorter time available in the OSCE sessions. Future research should combine formative and summative evaluation for assessing the clinical performance of undergraduate nursing students in simulated scenarios.