This study showed a positive perception of NTS learning after crisis management training by high-fidelity simulation, with no difference between residents who observed scenarios with an NTS-based OT and those who observed without one. Nor was any significant difference found in the assessment of knowledge. Likewise, a positive and similar perception was observed in terms of learning stress management, improving crisis-management skills, satisfaction, and the professional impact of the training.
Some studies have tried to improve the learning of observers during simulation sessions using tools [8, 24, 25]. Using an observer tool is believed to allow more active learning [15]. The literature shows that active learning facilitates attention during training and improves students' performance [26]. The improved learning during active observation can be explained by the attention boost effect [27], a theory suggesting that when two actions are performed simultaneously, attention is increased, all the more so when the elements to be observed are frequent [28]. Implementation of almost all NTS was indeed necessary in each of our crisis scenarios, and using the OT could have reinforced the learning of these items. In our study, although observers had a positive feeling in terms of NTS learning, we could not demonstrate any beneficial effect of using an NTS-based OT on learning these skills. To our knowledge, this is the only randomized study of the specific impact of an observer tool on the learning of NTS. In 2012, Kaplan et al. [8] provided observers with a "critical action checklist" observation grid covering a set of technical and non-technical actions intended to improve patient care; however, all observers used the observer tool, and the post-test evaluation, a questionnaire based on non-technical skills, did not show any difference in the average score.
Only the study by Stegmann et al. [24] examined the impact of an observer tool in a comparative study among observers. In that study, 200 medical students were trained with a simulated patient presenting rectal bleeding and abdominal pain. Observers either used or did not use a checklist targeting technical skills (performing a rectal examination) and non-technical skills (patient information, doctor-patient relationship). For each skill, thirteen items were defined, and the observer had to judge whether each had been performed correctly. A significant improvement in knowledge relating to doctor-patient communication was recorded among observers equipped with the observer tool, but, unfortunately, the study was not randomized.
A previous work [19] randomized anesthesia residents to use an observer tool when not role-playing. The observer tools were based on crisis cognitive aids (i.e. emphasizing technical skills and medical knowledge). That study showed increased acquisition of medical knowledge when an observer tool was used. The acquisition of non-technical skills was also assessed (as a secondary endpoint) using the same self-assessment as in the present study. As in the current study, no significant difference was shown in the acquisition of non-technical skills, and absolute values were in the same range. Thus, compared with a technical-medical knowledge OT, the use of a non-technical OT under similar conditions had no effect on the learning of NTS. This could be explained by the fact that a non-technical OT seems more abstract than a technical OT. Indeed, this is an area that is little taught in initial training, as the importance of NTS in professional practice has been recognized only recently. In addition, a non-technical OT as such could be less useful for learning non-technical skills because the debriefing, in which all residents participate, frequently emphasizes non-technical skills.
A greater satisfaction score is often obtained with simulation training, but the value of this outcome is debated. In our study, satisfaction ratings were very high (> 9/10) but not significantly different between groups. The use of an OT therefore had no impact on satisfaction. This result agrees with our previous study, in which satisfaction was similarly high in the two groups [19]. In the study by Hober [25], observing learners reported great satisfaction, but satisfaction was not measured objectively.
Regarding the change in professional practices, observers had a similar and very favorable perception (> 8/10) whether or not they had used the observation grid. This result is also in agreement with our previous work [19]. The lack of a significant difference could be explained by a measurement performed too early. Indeed, as the immediate self-assessment was carried out at the end of the training day, projection into the future is difficult, and awareness of a change in professional practice may only occur after renewed exposure to a situation requiring the use of NTS.
Regarding the assessment of knowledge (level 2 according to Kirkpatrick), our study found no significant difference whether or not learners had used the observation grid. Likewise, a positive and similar perception was observed in terms of learning to manage stress and skills to manage a crisis. These results contradict those of our previous study, but this can easily be explained by the fact that technical skills were not addressed in the present study [19].
The literature on the use of tools to increase observer learning is limited [8, 15, 19, 24, 25], and research must continue to define their pedagogical value. As shown above, the design of these studies often remains of limited quality, making interpretation uncertain.
The strengths of our study include its prospective, randomized design and the use of the ANTS grid, one of the best-validated scoring systems [29]. However, it also has several limitations. The first is the use of self-assessment; however, it would have been difficult to set up a study design in which external evaluation could have been used. Although the ANTS scoring system is well validated and widely used [29], this scale remains complex to apply even after specific training [26]. Another limitation is the lack of assessment of NTS before training. To study the impact of an intervention on learning, a pre-test evaluation is the reference method: the greater the variation between the pre- and post-test assessments, the more effective the intervention. However, we were unable to perform a pre-test assessment, and we assumed that the residents had the same level of NTS at the start because they had the same previous clinical experience. Moreover, because this was a monocentric study, the results may not be generalizable. Finally, our study assessed the perception of learning NTS with or without an OT immediately after training. A delayed assessment could also have been of interest to assess knowledge retention.