Design
This systematic review protocol was developed in accordance with the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) statement [10] (Additional file 1). The planned systematic review will be reported in accordance with the PRISMA statement [11] and is registered on the International Prospective Register of Systematic Reviews (PROSPERO; CRD42021244530).
Eligibility criteria
Study selection will be guided by eligibility criteria and the population, intervention, comparison, outcome (PICO) mnemonic [12].
Types of Studies
Primary quantitative research studies published in English from 1 January 1990 will be considered. There has been a sustained increase in the volume of research focused on SBE since the 1990s related to the affordability of high-fidelity simulators [5]. Relevant quantitative designs including randomised controlled trials and quasi-experimental studies with a control group for comparison will be included.
Population
The population of interest is undergraduate nursing students, aged 18 years or over, engaged in SBE in an academic setting, such as a university.
Intervention
SBE is the intervention of interest. In general terms, SBE is a teaching technique that enhances a learning setting so that it resembles the real world. In healthcare, a variety of design elements can affect the effectiveness of SBE, which in undergraduate nursing education is usually conducted in a physical environment with face-to-face teaching. These elements include devices, such as manikins or task trainers, that allow students to interact in a manner representing real clinical practice.
Fidelity refers to the realism of the simulation environment [13], and low-, medium-, or high-fidelity environments are elements of simulation that can influence learning outcomes. Simulated patients, including trained actors, and/or integrated simulators that incorporate technology influence fidelity and, as such, the realism of the interaction between the learner and the manikin. Common elements also include adjusting the manikin's haemodynamic observations in response to learner decision making, or conversing with the learner through built-in speakers to simulate a patient conversation [14]. Task trainers are another element of simulation, designed to mimic a part of the patient's body, such as a manikin arm for intravenous insertion, or something as simple as an orange used to practise injections.
Comparator
Conventional education modalities such as didactic lectures, passive (one-way) classroom teaching, or small group seminars will provide a comparative control cohort.
Outcome
The primary outcome will be SBE effect. This effect will be measured by assessing at least one measure of knowledge, skills or attitude as an endpoint. For the purposes of this review, knowledge is defined as learnt information (eg, theoretical knowledge relating to the intended learning outcome of the simulation activity) acquired within the simulation activity. The measurement of knowledge will be determined by academic outcome assessment. Skills are defined as the ability to develop psychomotor function aligned with the successful performance of a particular procedure (eg, change a wound dressing), and attitude is defined as how worthwhile the learner believes the activity is in relation to their learning [15].
Exclusion Criteria
Because the primary aim of the review, assessing the effect of SBE, necessitates a comparator group, case-control, cohort, cross-sectional, and single-group observational studies will be excluded. Literature, narrative, integrative, mixed methods and systematic reviews, abstracts, letters, commentaries, editorials, opinion pieces and grey literature will also be excluded. SBE can use modified realities such as augmented reality and virtual reality; these emerging digital modalities have a limited body of evidence and are a developing area of practice [16], so they are beyond the scope of this review. Studies using pre-simulation interventions, such as online learning modules or smart device technologies, will be excluded because these may influence the relationship between SBE and traditional learning. Similarly, studies that combine low-fidelity SBE elements with traditional education approaches and compare these to medium- or high-fidelity SBE will be excluded. Postgraduate nursing students, midwifery students, and students receiving SBE in a clinical setting (eg, a hospital) are excluded. Manuscripts published in languages other than English will be excluded; this is an unfunded study, so translation costs are beyond the investigators' capacity.
Information Sources & Search Strategy
Databases to be searched from 1 January 1990 include the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Medical Literature Analysis and Retrieval System Online (MEDLINE), American Psychological Association (APA) PsycInfo and the Education Resources Information Centre (ERIC) via the EBSCOhost platform. The Excerpta Medica database (EMBASE) will be searched via the OVID platform. The MEDLINE search strategy is included in Table 1.
Table 1
EBSCO MEDLINE search strategy
# | MeSH Headings & Search Terms
1 | MH “students, nursing”
2 | MH “Education, Nursing, baccalaureate”
3 | Undergrad*
4 | College*
5 | Student*
6 | University*
7 | 1 OR 2 OR 3 OR 4 OR 5 OR 6
8 | MH “Simulation training+”
9 | Simulation N3 training
10 | Simulation N3 education
11 | Simulation N3 patient
12 | 8 OR 9 OR 10 OR 11
13 | Nurs*
14 | 7 AND 12 AND 13
Primary quantitative studies included in systematic reviews captured by the search strategy, and studies identified through secondary searching of relevant study reference lists, will also be eligible for inclusion. Specific search terms will be developed in MEDLINE with assistance from a senior librarian experienced in the conduct of systematic reviews, using text words and subject headings. Primary search terms include ‘nursing students’, ‘simulation training’, ‘patient simulation’, and ‘immersive simulation’ with common Boolean operators, as illustrated in Table 2. Each database will be searched using these broad terms with either MeSH or Emtree terms with appropriate permutations.
Table 2
Concept 1 | Concept 2 | Concept 3
MH “students, nursing” | MH “Simulation training+” | Nurs*
OR MH “Education, Nursing, baccalaureate” | OR Simulation N3 training |
OR Undergrad* | OR Simulation N3 education |
OR College* | OR Simulation N3 patient |
OR Student* | |
OR University* | |
NB: Each concept will be combined with “AND”
Data Management, Selection & Screening
Database search results will be imported into the reference management software EndNote© Version X9 and uploaded into Covidence. Covidence is a tool for effective collaborative title and abstract screening, full text screening, data abstraction and quality assessment [17]. Covidence automatically identifies, sorts, and removes duplicate studies. Two reviewers (MJ & RW) will independently screen titles and abstracts of citations, with a third reviewer (LM) available to moderate disagreements with a view to reaching consensus. Articles that meet eligibility criteria will be sourced for full text review. Two reviewers (MJ & RW) will complete full text screening, with a third (LM) available to moderate disagreements to achieve consensus. Screening outcomes will be documented and reported in a PRISMA flow diagram [11].
Data Extraction
Full text data extraction will be undertaken in duplicate by two reviewers (MJ & LB) and conflicts resolved through discussion. Data extraction will take place using a modified version of the existing Covidence data extraction template. Study characteristics to be recorded include publication details (authors, publication year), demographic characteristics, methodology, intervention and comparator group details, and reported outcomes. To ensure consistency between reviewers, periodic meetings will be held during the screening process to resolve disagreements by discussion. A third reviewer (LM) will act as adjudicator in the event of an unresolved disagreement.
Data items
The following data items will be extracted from selected studies: study setting; sample; inclusion and exclusion criteria; aim; design element/s employed; unit of allocation; start and end dates of the study; duration of participation; baseline group differences; frequency and duration of the intervention; outcome measured (eg, knowledge acquisition, skill improvement, attitude and satisfaction); tool used to measure outcome; validity of the tool/s; comparator group education method; sustainability of outcome/s.
Outcomes & Prioritization
The primary outcome is SBE effect, measured by assessing at least one measure of knowledge, skills or attitude as an endpoint. Outcomes will be compared between groups who do and do not participate in an SBE intervention. Secondary outcomes include describing variability in SBE design elements and sub-group analyses to explore the interplay between SBE elements and learning outcomes. In the case of discrepancies in outcomes reported, corresponding authors will be contacted by email to obtain relevant data.
Risk of bias in individual studies
Each randomized trial will be assessed for possible risk of bias using the revised Cochrane Collaboration tool for assessing risk of bias (RoB 2). This tool focuses on trial design, conduct, and reporting to obtain the information relevant to risk of bias [18]. Based on the answers provided within the tool, trials will be categorised as ‘low’ or ‘high’ risk of bias. If there is disagreement, a third author will be consulted to act as an arbitrator. If a study has a high risk of bias, it will be excluded from the review analyses.
The Risk Of Bias In Non-Randomized Studies – of Interventions (ROBINS-I) tool will be used for bias assessment in non-randomised studies. The ROBINS-I tool shares many features with the RoB 2 tool, as it focuses on a specific result, is structured into a fixed set of domains of bias, includes signalling questions that inform risk of bias judgements, and leads to an overall risk-of-bias judgement [19].
Quality Appraisal
Two authors (MJ & LB) will independently assess each article for quality using the Mixed Methods Appraisal Tool (MMAT) [20]. This critical appraisal tool allows the appraisal of five categories of studies: qualitative research, randomised controlled trials, non-randomised studies, quantitative descriptive studies, and mixed methods studies. As recommended, each study will be reviewed by two authors by completing the appropriate study categories identified within the MMAT. An overall score will not be used to rate the quality of each study; instead, a sensitivity analysis will be completed to consider the quality of studies by contrasting their results, as recommended. Low quality papers will be excluded from review analyses.
Data Synthesis
The primary unit of analysis will reflect each endpoint measure: knowledge acquisition, skill improvement or attitude. Dichotomous outcomes will be extracted and analysed using odds ratios (OR) with 95% confidence intervals (CI), and continuous outcomes will be represented using mean difference (MD), or standardized mean difference (SMD) when outcomes are measured using different scales, with 95% CI. In the event of missing data, we will attempt to contact the primary authors to obtain relevant information. Meta-analysis will be undertaken if two or more studies with comparable design and outcome measures meet eligibility criteria and have low risk of bias as defined by the Cochrane Effective Practice and Organisation of Care (EPOC) criteria [21]. Pooled data will be analysed using the DerSimonian and Laird method for random-effects models in RevMan [22]. This model is used when reported outcome effects differ amongst studies but follow some similar distribution [23]. Findings from meta-analysis will be illustrated using forest plots.
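To illustrate the dichotomous effect measure described above, the following is a minimal sketch of an odds ratio with 95% CI computed on the log scale from a 2×2 table. The function name and all counts are hypothetical examples, not data from the protocol; the review itself will perform these calculations in RevMan.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with 95% CI from a 2x2 table.
    a/b: intervention events/non-events; c/d: control events/non-events."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical example: 20/30 students pass an assessment with SBE,
# 10/30 pass with conventional teaching
print(odds_ratio_ci(20, 10, 10, 20))  # OR = 4.0, CI ~ (1.37, 11.70)
```

The CI is computed on the log-odds scale and back-transformed, the standard Woolf method for a single study's odds ratio.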
The I2 statistic will be used to determine the level of variation attributable to heterogeneity rather than chance; rather than simply stating whether heterogeneity is present or not, it will be quantified [24]. Low heterogeneity will be reflected by an I2 result between 0–40%; moderate heterogeneity between 30–60%; substantial heterogeneity between 50–90%; and considerable heterogeneity between 75–100% [25]. If a high level of heterogeneity (I2 > 50%) among trials exists, study design and characteristics will be reported, and sensitivity analyses conducted to reduce variability with a view to undertaking meta-analysis. It is anticipated that specific design elements will underpin the need for subgroup analyses. If data are not suitable for meta-analysis, findings will be presented descriptively in the form of a narrative synthesis according to categories outlined in the SWiM guideline [26]. Because a large amount of data could be conveyed textually, content will be sequenced to follow the same structure to ensure consistency across results [27].
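The DerSimonian-Laird pooling and I2 computation described in this section can be sketched as follows. This is an illustration with hypothetical effect sizes and variances, not the review's analysis (which will use RevMan); the function name is invented for the example.

```python
import math

def dersimonian_laird(effects, variances, z=1.96):
    """Pool per-study effect sizes (e.g. SMDs) using the DerSimonian-Laird
    random-effects model. Returns (pooled, ci_low, ci_high, I2_percent)."""
    k = len(effects)
    w = [1 / v for v in variances]                      # fixed-effect weights
    y_f = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_f) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_r = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_r, effects)) / sum(w_r)
    se = math.sqrt(1 / sum(w_r))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # I2 in percent
    return pooled, pooled - z * se, pooled + z * se, i2

# Hypothetical SMDs and variances from two studies
print(dersimonian_laird([0.2, 0.8], [0.04, 0.04]))
```

Note how I2 falls out of the same Cochran's Q used to estimate the between-study variance tau2: it expresses the share of total variation attributable to heterogeneity rather than chance.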
Meta-bias
Study selection for this review will be guided by the PICO mnemonic and the framework outlined in this protocol to reduce risk of bias. Hand searching relevant systematic reviews will contribute to a reduction in publication bias [28]. To reduce bias associated with selecting studies, two independent reviewers will be used throughout the screening and data collection process [29]. To address bias when synthesising studies, this protocol has been registered prospectively to promote transparency and replicability [10]. Two reviewers will appraise the quality of studies, and low-quality studies will be excluded to avoid inappropriate influence on results [10].
Confidence in cumulative evidence
The overall effectiveness of simulation and the impact of design elements on undergraduate nurses will be assessed with the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) working group approach. The GRADE approach allows an assessment of the quality of the body of evidence for each individual outcome. Quality of evidence is dependent on within-study risk of bias, directness of evidence, heterogeneity, precision of effect estimates and risk of publication bias [23]. Two authors will independently assess the quality of evidence regarding the effectiveness of simulation on each outcome. All disagreements will be resolved through discussion and consensus. Quality of evidence will be categorised as high, moderate, low or very low using EPOC [21] definitions.