Our results reveal that the OSCE is a good method to evaluate PBLS skills in medical students and to detect the CPR manoeuvres they find most difficult. The OSCE is an objective, fast, reproducible and simple method of evaluation, provided it is prepared in advance. It has been suggested that the stress of the OSCE examination may lower students’ performance so that it does not properly reflect their skills.[24] However, the stress experienced in an actual CP situation is greater, so the stress of the test may even increase its validity for evaluation at the CPR station.[24, 25] Some authors have shown that prior preparation for the OSCE [26] and training with simulated clinical situations [27] reduce stress and improve performance.
Moreover, the OSCE makes it possible to evaluate the efficacy of improvement measures in CPR teaching, as occurred in our study.
Our study reveals that, although all students attained sufficient competence in the evaluation performed immediately after CPR training, at three months more than 10% of medical students could not perform proper PBLS. These data coincide with those of other authors, who found that practical CPR skill declines rapidly if it is not kept up to date, making frequent refresher activities mandatory.[28–30] An OSCE performed at various times during the undergraduate medicine degree may serve to verify the results of such refresher activities.
However, our results show that CPR training by means of a structured PILS course attains more robust PBLS results than training in PBLS alone. There is no clear consensus on the level of PLS training that medical students should receive, and in most universities paediatric CPR training is only a complementary part of general CPR training.[12, 14–17] Training in PILS requires more time, more resources and more teaching staff. However, it is rated very highly by students and attains a higher level of training. In our experience, PILS training in medical students is feasible and achieves better skills.[17] Nevertheless, regardless of the level of training taught, it is essential to undertake refresher and maintenance activities.[28–30]
Evaluation of improvement measures
Our results reveal that improvement activities in CPR training, namely increasing practical exercise time and reinforcing the skills that students find hardest to learn or most often forget, achieve a significant improvement in skills. Therefore, the OSCE may be used not only to evaluate student skills but also to evaluate the training model.
However, the OSCE should not be merely an evaluation instrument; it should also have a training function.[31] For this reason, we included a succinct review with each student at the end of the training to correct errors and reinforce knowledge.
Limitations
First, one possible limitation of the OSCE is inter-rater variability in the evaluators’ criteria, which is not easy to homogenize. Four evaluators participated in both years while the remainder differed between years, so the existence of bias cannot be ruled out. However, all evaluators were accredited paediatric CPR instructors and received specific training in OSCE evaluation. To limit individual variability, some authors have proposed assigning two evaluators to each station [32], although this requires a large number of evaluators, especially when the OSCE is run simultaneously for many students.
Yeates has devised a method, “Video-based Examiner Score Comparison and Adjustment” (VESCA), that includes a video recording of the training and its evaluation by several evaluators.[33] This may reduce inter-rater variability, although it also means more work for the evaluators.
Second, the scenarios used in the stations differed between the two years, which could partly account for the differences in results. Some manoeuvres, such as opening the airway, may be more complicated in children with trauma, but the remainder are similar. Moreover, CPR manoeuvres in children are no more complicated than in the infant.
Third, the checklist system used in the OSCE evaluation has the disadvantage that it only classifies the action in each item as suitable or unsuitable and does not allow greater discrimination between different degrees of compliance. This was the general scoring system for the entire OSCE and could not be changed for the PBLS station alone. In our opinion, a six-level scoring system (e.g., very good: 5 points, good: 4 points, sufficient: 3 points, poor: 2 points, very poor: 1 point, not performed: 0 points), which is the one we use for PBLS evaluation, helps to discriminate students’ skills better, although it requires more time and may create greater discrepancies among examiners. Other authors propose a blend of checklists and rating scales, mainly to evaluate complex skills.[32]
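As an illustrative sketch only (the item names and ratings below are hypothetical, not taken from the study), the difference between the binary checklist and the six-level scale can be made concrete by normalizing both to a percentage of the maximum attainable score:

```python
# Hypothetical sketch of the two scoring schemes discussed above.
# Item ratings are illustrative examples, not data from the study.

CHECKLIST = {"suitable": 1, "unsuitable": 0}  # binary checklist scoring

SCALE = {  # six-level scale (the one used for the PBLS evaluation)
    "very good": 5, "good": 4, "sufficient": 3,
    "poor": 2, "very poor": 1, "not performed": 0,
}

def percentage_score(ratings, scheme):
    """Return per-student score as a percentage of the maximum attainable."""
    max_per_item = max(scheme.values())
    return 100 * sum(scheme[r] for r in ratings) / (max_per_item * len(ratings))

# The same three-item performance rated under each scheme: the scale can
# distinguish a barely sufficient manoeuvre from an excellent one, while
# the checklist collapses both into "suitable".
binary = percentage_score(["suitable", "suitable", "unsuitable"], CHECKLIST)
graded = percentage_score(["very good", "sufficient", "very poor"], SCALE)
```

The finer gradations come at the cost noted above: each extra level is another boundary on which examiners can disagree.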
Finally, our study did not include a longer-term evaluation of skills to assess their retention, although, as discussed, various studies have shown that without refresher courses these skills gradually decay over time.[28–30]