Numerous resuscitation teaching programs exist, ranging from community and workplace training to advanced certifications. Standardised programs such as basic life support (BLS) and advanced life support (ALS) aim to equip learners to respond to patients in their immediate period of crisis1,2. Although graduates achieve certification, their competency diminishes post-training, with many failing to retain the required standards on re-evaluation3–9. Clinicians are also often unprepared for the complexities of out-of-hospital end-of-life events10,11, a concern amplified by low cardiac arrest survival rates in pre-hospital settings11. To combat these issues, the European Resuscitation Council (ERC) urges learning designers to consider contemporary pedagogical theories and effective assessment practices in program development12, prompting a discourse about common training methodologies.
Rote learning drills versus real-world events
Resuscitation training is typically approached as a discrete area of curriculum or a standalone short intensive course. Commercial ALS and BLS programs often deliver this over two sequential days2, whereas pre-employment education for doctors, nurses and paramedics typically dedicates part or whole modules of curricula to the theme. Although expectations vary between civilian and health professional practices, courses share several traits. As current evidence emphasises the influence of high-quality cardiopulmonary resuscitation (CPR) on survival outcomes, student memorisation of these steps dominates teaching12. Courses are constructed around a CPR practice algorithm, with decisions about student competency based on the learner's ability to demonstrate exacting renditions of rehearsed drills. The benchmarks for performance predominantly reflect technical skills, with mastery described in terms of an ability to deliver a shock within 180 seconds with greater than 80% accuracy in depth and rate of chest compressions13. Grading decisions reside solely with assessors, who apply standardised check-box rubrics to quantify performances against a single correct outcome. Reflection on practice is a relatively recent addition; however, it frequently occurs only after grading has already been completed14. Programs culminate in high-stakes summative testing in which students attempt near-identical scenarios that require them to demonstrate a deconstructed passage of practice. Concerns about the disconnect between CPR training conventions and education best practice are not new15. This paper argues that, in addition to the pedagogical deficiencies that may be contributing to poorly sustained skills, the validity and reliability of training methods also warrant scrutiny15.
Real-world resuscitation cases differ significantly from the classroom assessments that arbitrarily assign scores to performances. Resuscitation is a profoundly human event which typically involves intricate social, legal, professional, and operational challenges beyond mere CPR execution. These events demand sophisticated patient-centred care, empathetic management, and often engage various parties in unforeseen situations16,17. Unlike homogeneous classroom tests, real-life incidents are unpredictable, and no two cases are ever the same, with clinicians receiving no prior alert about the timing or the required expectations14.
This paper frames the resuscitation education dilemma through contemporary evidence relating to learning for the longer term, authentic education, and assessment quality. The following section of this paper briefly introduces the principles of authentic and sustainable assessment, entrustable professional activities and the Ottawa consensus on good assessment as a backdrop to the research.
Authentic and sustainable assessment
A single test is universally considered insufficient to determine something as complex as clinical competence18, a notion at odds with single high-stakes barrier testing. Where testing focus is solely on 'signing off' achievements, these represent an assessment 'of' learning—essentially, a measure of rote recollection. Such methods typically mark a terminal point of a program, neglecting the potential opportunities to extend the utility of assessment 'for' and 'as' a learning driver, which both prioritise learner interests beyond the test19.
In contrast to the high-stakes summative sign-off philosophy are the authentic assessment and sustainable assessment paradigms. Authentic assessment (AA) reflects an understanding that the real world is complex and will demand that students are capable of responding to tasks in a range of justifiable ways20. In order to prepare students for their future roles, the principles of AA require assessment to simulate the real world and test a student's ability amid the ambiguities and complexities of real life21. Complementary to this, sustainable assessment (SA) is concerned with testing practices that foster learning for the longer term22. Sustainable assessment literature is critical of the temporary learning gains often associated with summative practices, which encourage students to binge learn simply to pass a test23,24. Where the focus of learning is limited to the recall of facts, higher orders of student learning remain untouched25. Sustainable assessment recognises the importance of students exercising their own decision making about their work24, an action which employs the higher orders of student capabilities.
Entrustable professional activities
Supervisor trust is pivotal to clinician success. Prior learning milestones, or skill sign-offs, however, are no guarantee that students can be trusted to apply these skills in real life26–28. Whereas assigning competency usually reflects the breaking down of practice into incremental features which can be measured objectively, assigning trust is often tacit and influenced by gut feelings about a holistic picture of evidence26. Entrustable professional activities (EPAs) offer a viable alternative to reductionist competency-based assessments26,29. EPAs emphasise that effective practice requires the integration of multiple hard-to-separate competencies26–30. EPAs replace itemised competency criteria with holistic frameworks for evaluating practice, with success determinations hinging on the level of trust a student commands to work unsupervised31,32. These frameworks encapsulate the broad scope of knowledge, skills, and attitudes essential for professional practice26. By encouraging learner self-awareness, EPAs are also associated with long-term learning31. Recent findings also report a strong correlation between EPAs and the Ottawa criteria for good assessment29.
Ottawa good assessment criteria
The theoretical assessment positions described above represent only a portion of the key contributions to the broad literature on assessment quality. The Ottawa criteria for good assessment practice, developed during a biennial medical education congress, offer comprehensive guidance on optimal assessment design33. These consider assessment quality in terms of the following criteria.
Validity
There is a body of evidence that is coherent (‘‘hangs together’’) and that supports the use of the results of an assessment for a particular purpose.
Reproducibility or consistency
The results of the assessment would be the same if repeated under similar circumstances.
Equivalence
The same assessment yields equivalent scores or decisions when administered across different institutions or cycles of testing.
Feasibility
The assessment is practical, realistic, and sensible, given the circumstances and context.
Educational effect
The assessment motivates those who take it to prepare in a fashion that has educational benefit.
Catalytic effect
The assessment provides results and feedback in a fashion that creates, enhances, and supports education; it drives future learning forward.
Acceptability
Stakeholders find the assessment process and results to be credible.
(Norcini et al., 2011, pp. 210–211)
Given this evidence, it is imperative to reassess student competencies in responding to and managing real-life resuscitation events. Testing should encompass the true range of challenges that typically accompany, and can complicate, the delivery of CPR skills and strict algorithm adherence. Moreover, these competencies should align with professional expectations.
Study aims and research questions
This study sought to evaluate paramedic student competencies in managing real-life resuscitation events, featuring the range of professional and operational variables typically accompanying CPR algorithms. We aimed to assess practice in line with more realistic expectations of the profession. The study addressed the following research questions:
- Does traditional ALS training adequately prepare students to respond to simulated scenarios based on actual case events?
- Can the testing approaches be reconfigured to better align with evidence on assessment quality and sustainable learning?
- Can employing a work-based entrustable professional activity approach provide a valid and reliable alternative to existing grading conventions?