The Impact of Design Elements on Undergraduate Nursing Students’ Educational Outcomes in Simulation Education: Protocol for a Systematic Review



Background: Although simulation-based education (SBE) has become an increasingly popular mode of teaching in undergraduate nursing courses, its effect on student learning outcomes remains ambiguous. Educational outcomes are influenced by SBE quality, which is governed by technology, training, resources and SBE design elements. This paper reports the protocol for a systematic review to identify, appraise and synthesise the best available evidence regarding the impact of SBE on undergraduate nurses’ learning outcomes.

Methods: Databases to be searched from 1 January 1990 include the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Medical Literature Analysis and Retrieval System Online (MEDLINE), American Psychological Association (APA) PsycInfo and the Education Resources Information Centre (ERIC) via the EBSCOhost platform. The Excerpta Medica database (EMBASE) will be searched via the OVID platform. We will review the reference lists of relevant articles for additional citations. A combination of search terms including ‘nursing students’, ‘simulation training’, ‘patient simulation’, and ‘immersive simulation’ with common Boolean operators will be used. Specific search terms will be combined with either MeSH or Emtree terms and appropriate permutations for each database. Search findings will be imported into reference management software (EndNote© Version X9) then uploaded into Covidence, where two reviewers will independently screen titles, abstracts and full texts. A third reviewer will be available to resolve conflicts and moderate consensus discussions. Quantitative primary research studies evaluating the effect of SBE on undergraduate nursing students’ educational outcomes will be included. The Mixed Methods Appraisal Tool (MMAT) will be used for quality assessment of core criteria, in addition to the Cochrane RoB 2 and ROBINS-I tools to assess risk of bias for randomised and non-randomised studies, respectively. Primary outcomes are any measure of knowledge, skills, or attitude.

Discussion: SBE has been widely adopted by healthcare disciplines in tertiary teaching settings. This systematic review will reveal (i) the effect of SBE on learning outcomes, (ii) SBE element variability, and (iii) the interplay between SBE elements and learning outcomes. Findings will specify SBE design elements to inform the design and implementation of future strategies for simulation-based undergraduate nursing education.

Systematic Review Registration: PROSPERO CRD42021244530


Simulation-based education (SBE) is a popular mode of teaching in undergraduate nursing education [1]. This pedagogical approach provides students with an opportunity to practice non-technical skills (eg, communication, teamwork, problem solving) and clinical skills (eg, venepuncture) in a controlled environment that poses no risk of harm to patients and allows the possibility of direct feedback and guidance from teaching staff [2]. Educational outcomes are influenced by SBE quality, which is governed by technology, training, resources and SBE design elements [3]. As SBE has evolved, its design elements have been adjusted to overcome learning barriers such as excessive student anxiety [4]. SBE has also been tailored to suit a variety of settings, populations, timeframes and session frequencies [5].

SBE designs include tag team simulation, simulated virtual and augmented environments, and game-based themes within traditional simulation, to accommodate ever-expanding university enrolments whilst maintaining student experience and satisfaction [6]. In the context of evolving SBE, the level of evidence for SBE effectiveness in nursing education is diverse and educational outcomes remain ambiguous [7]. SBE can refer to a low-fidelity task trainer used to complete a skill in five minutes while working one-on-one with a facilitator, or a complex high-fidelity immersive scenario involving a group of students performing complex tasks over a prolonged period of time [8]. The pedagogy of SBE generally fails to acknowledge complex variability in design elements [9], which has implications for learning outcomes. As such, there is a need to evaluate the effect of SBE on learning, and to describe SBE design elements to determine whether specific elements have a superior effect on learning outcomes. The aim of this systematic review is to identify, appraise and synthesise the best available evidence regarding the effectiveness of SBE and the impact of design elements on undergraduate nurses’ learning outcomes. To address this aim there are three review objectives:

  1. to determine the effect of SBE on learning outcomes,

  2. to describe variability in SBE elements, and

  3. to explore the interplay between SBE elements and learning outcomes.



This systematic review protocol was developed in accordance with the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) [10] statement (Additional file 1). The planned systematic review will be reported in accordance with the PRISMA statement [11] and registered on the International Prospective Register of Systematic Reviews (PROSPERO), (CRD42021244530).

Eligibility criteria

Study selection will be guided by eligibility criteria and the population, intervention, comparison, outcome (PICO) mnemonic [12].

Types of Studies

Primary quantitative research studies published in English from 1 January 1990 will be considered. There has been a sustained increase in the volume of research focused on SBE since the 1990s related to the affordability of high-fidelity simulators [5]. Relevant quantitative designs including randomised controlled trials and quasi-experimental studies with a control group for comparison will be included.


The population of interest is undergraduate nursing students, aged 18 years or over, engaged in SBE in an academic setting, such as a university.


SBE is the intervention of interest. In general terms, SBE is a teaching technique that enhances a learning setting so that it resembles the real world. In healthcare, a variety of design elements can affect the effectiveness of SBE, which in undergraduate nursing education is usually conducted in a physical environment with face-to-face teaching. Elements include devices such as manikins or task trainers that allow students to interact in a manner that represents real clinical practice.

Fidelity refers to the realism of the simulation environment [13], and low-, medium-, or high-fidelity environments are elements of simulation that can influence learning outcomes. Simulated patients, including trained actors, and/or integrated simulators that incorporate technology influence fidelity and, as such, the realism of the interaction between the learner and the manikin. In addition, adjusting a manikin’s haemodynamic observations based on learner decision making, or conversing with the learner through built-in speakers to simulate a patient conversation [14], are common elements. Task trainers are another element of simulation, designed to mimic a part of the patient’s body such as a manikin arm for intravenous insertion, or something as simple as an orange to practice injections.


Conventional education modalities such as didactic lectures, passive (one-way) classroom teaching, or small group seminars will provide a comparative control cohort.


The primary outcome will be SBE effect. This effect will be measured by assessing at least one measure of knowledge, skills or attitude as an endpoint. For the purposes of this review, knowledge is defined as learnt information (eg, theoretical knowledge relating to the intended learning outcome of the simulation activity) acquired within the simulation activity. The measurement of knowledge will be determined by academic outcome assessment. Skills are defined as the ability to develop psychomotor function aligned with the successful performance of a particular procedure (eg, change a wound dressing), and attitude is defined as how worthwhile the learner believes the activity is in relation to their learning [15].

Exclusion Criteria

Because the primary aim of the review is to assess the effect of SBE, which necessitates a comparator group, case-control, cohort, cross-sectional, and single-group observational studies will be excluded. Literature, narrative, integrative, mixed methods and systematic reviews, abstracts, letters, commentaries, editorials, opinion pieces and grey literature will also be excluded. SBE can use modified realities such as augmented reality and virtual reality; these emerging digital modalities have a limited body of evidence and are a developing area of practice [16], so they are beyond the scope of this review. Studies using pre-simulation interventions, such as online learning modules or smart device technologies, may influence the relationship between SBE and traditional learning and will be excluded. Similarly, studies that combine low-fidelity SBE elements with traditional education approaches and compare these to medium- or high-fidelity SBE will be excluded. Postgraduate nursing students, midwifery students and students receiving SBE in a clinical setting (eg, a hospital) are excluded. Manuscripts published in languages other than English will be excluded, as this is an unfunded study and translation costs are beyond the investigators’ capacity.

Information Sources & Search Strategy

Databases to be searched from 1 January 1990 include the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Medical Literature Analysis and Retrieval System Online (MEDLINE), American Psychological Association (APA) PsycInfo and the Education Resources Information Centre (ERIC) via the EBSCOhost platform. The Excerpta Medica database (EMBASE) will be searched via the OVID platform. The MEDLINE search strategy is included in Table 1.

Table 1

EBSCO MEDLINE search strategy

MeSH Headings & Search Terms

1. MH “students, nursing”
2. MH “Education, Nursing, baccalaureate”
7. 1 OR 2 OR 3 OR 4 OR 5 OR 6
8. MH “Simulation training+”
9. Simulation N3 training
10. Simulation N3 education
11. Simulation N3 patient
12. 8 OR 9 OR 10 OR 11
14. 7 AND 12 AND 13

Primary quantitative studies included in systematic reviews captured by the search strategy, and studies identified through secondary searching of relevant study reference lists, will also be eligible for inclusion. Specific search terms will be developed in MEDLINE with assistance from a senior librarian experienced in the conduct of systematic reviews, using text words and subject headings. Primary search terms include ‘nursing students’, ‘simulation training’, ‘patient simulation’, and ‘immersive simulation’ with common Boolean operators as illustrated in Table 2. Each database will be searched using these broad terms with either MeSH or Emtree terms with appropriate permutations.

Table 2

Search concepts

Concept 1
MH “students, nursing”
MH “Education, Nursing, baccalaureate”

Concept 2
MH “Simulation training+”
Simulation N3 training
Simulation N3 education
Simulation N3 patient

Concept 3

NB: Each concept will be combined with “AND”

Data Management, Selection & Screening

Database search results will be imported into reference management software (EndNote© Version X9) and uploaded into Covidence. Covidence is a tool for effective collaborative title and abstract screening, full-text screening, data abstraction and quality assessment [17]. Covidence automatically identifies, sorts, and removes duplicate studies. Two reviewers (MJ & RW) will independently screen titles and abstracts of citations, with a third reviewer (LM) available to moderate disagreements with a view to reaching consensus. Articles that meet eligibility criteria will be sourced for full-text review. Two reviewers (MJ & RW) will complete full-text screening, with a third (LM) available to moderate disagreements to achieve consensus. Screening outcomes will be documented and reported in a PRISMA flow diagram [11].

Data Extraction

Full-text data extraction will be undertaken in duplicate by two reviewers (MJ & LB), with conflicts resolved through discussion. Data extraction will take place using a modified version of the existing Covidence data extraction template. Study characteristics to be recorded include publication details (authors, publication year), demographic characteristics, methodology, intervention and comparator group details, and reported outcomes. To ensure consistency between reviewers, periodic meetings will be held during the screening process to resolve disagreements by discussion. A third reviewer (LM) will act as adjudicator in the event of an unresolved disagreement.

Data items

The following data items will be extracted from selected studies: study setting; sample; inclusion and exclusion criteria; aim; design element/s employed; unit of allocation; start and end dates of the study; duration of participation; baseline group differences; frequency and duration of the intervention; outcome measured (eg, knowledge acquisition, skill improvement, attitude and satisfaction); tool used to measure outcome; validity of the tool/s; comparator group education method; sustainability of outcome/s.

Outcomes & Prioritization

The primary outcome is SBE effect measured by assessing at least one measure of knowledge, skills or attitude as an endpoint. Outcomes will be compared between groups who do and do not participate in a SBE intervention. Secondary outcomes include describing variability in SBE design elements and sub-group analyses to explore the interplay between SBE elements and learning outcomes. In the case of discrepancies in outcomes reported, contact with corresponding authors will be attempted by email to obtain relevant data.

Risk of bias in individual studies

Each randomised trial will be assessed for possible risk of bias using the revised Cochrane Collaboration tool for assessing risk of bias (RoB 2). This tool focuses on trial design, conduct, and reporting to obtain the information relevant to risk of bias [18]. Based on the answers provided within the tool, trials will be categorised as ‘low’ or ‘high’ risk of bias. If there is disagreement, a third author will be consulted to act as an arbitrator. If a study has a high risk of bias, it will be excluded from the review analyses.

The Risk Of Bias In Non-Randomized Studies – of Interventions (ROBINS-I) tool will be used for bias assessment in non-randomised studies. The ROBINS-I tool shares many features with the RoB 2 tool: it focuses on a specific result, is structured into a fixed set of domains of bias, includes signalling questions that inform risk-of-bias judgements, and leads to an overall risk-of-bias judgement [19].

Quality Appraisal

Two authors (MJ & LB) will independently assess each article for quality using the Mixed Methods Appraisal Tool (MMAT) [20]. This critical appraisal tool allows the appraisal of five categories of studies: qualitative research, randomised controlled trials, non-randomised studies, quantitative descriptive studies, and mixed methods studies. As recommended, each study will be reviewed by two authors by completing the appropriate study categories identified within the MMAT. An overall score will not be used to rate the quality of each study; instead, as recommended, a sensitivity analysis will be completed to consider study quality by contrasting results. Low-quality papers will be excluded from review analyses.

Data Synthesis

The primary unit of analysis will reflect each endpoint measure: knowledge acquisition, skill improvement or attitude. Dichotomous outcomes will be extracted and analysed using odds ratios (OR) with 95% confidence intervals (CI), and continuous outcomes will be represented using mean difference (MD), or standardised mean difference (SMD) when outcomes are measured using different scales, with 95% CI. In the event of missing data, we will attempt to contact the primary authors to obtain relevant information. Meta-analysis will be undertaken if two or more studies with comparable design and outcome measures meet eligibility criteria and have low risk of bias as defined by the Cochrane Effective Practice and Organisation of Care (EPOC) criteria [21]. Pooled data will be analysed using the DerSimonian and Laird method for random-effects models in RevMan [22]. This model is used when reported outcome effects differ amongst studies but follow some similar distribution [23]. Findings from meta-analysis will be illustrated using forest plots.
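To illustrate the DerSimonian and Laird random-effects calculation performed by RevMan, the steps can be sketched in Python. This is an explanatory outline only, not part of the registered protocol; the function name and example values are hypothetical, and effects for odds ratios would be entered on the log scale.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect estimates (eg, log odds ratios or SMDs)
    using the DerSimonian-Laird random-effects method."""
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    pooled_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - pooled_fe) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with tau^2 added to each within-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

When the between-study variance estimate is zero, the result coincides with the fixed-effect pooled estimate; otherwise the random-effects weights shrink towards equality, giving smaller studies relatively more influence.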

The I² statistic will be used to determine the level of variation related to diversity rather than chance; rather than simply stating whether heterogeneity is present, its magnitude will be quantified [24]. Low heterogeneity will be reflected as an I² result between 0–40%; moderate heterogeneity between 30–60%; substantial heterogeneity between 50–90%; and considerable heterogeneity between 75–100% [25]. If a high level of heterogeneity (I² > 50%) exists among trials, study design and characteristics will be reported, and sensitivity analyses conducted to reduce variability with a view to undertaking meta-analysis. It is anticipated that specific design elements will underpin the need for subgroup analyses. If data are not suitable for meta-analysis, findings will be presented descriptively in the form of a narrative synthesis according to categories outlined in the SWiM guideline [26]. Because a large amount of data could be conveyed textually, content will be sequenced to follow the same structure to ensure consistency across results [27].
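The I² calculation underlying these thresholds can be sketched as follows. Again this is an illustrative outline (the function name and example data are hypothetical): I² expresses the proportion of Cochran's Q exceeding its degrees of freedom, ie, the percentage of total variation across studies attributable to heterogeneity rather than chance.

```python
def i_squared(effects, variances):
    """I^2 statistic: percentage of total variation across studies
    due to heterogeneity rather than chance, via Cochran's Q."""
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
```

Identical study effects give I² = 0%, while widely divergent effects with small within-study variances push I² towards 100%, crossing the review's > 50% sensitivity-analysis trigger.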


Study selection for this review will be guided by the PICO mnemonic and the framework outlined in this protocol to reduce risk of bias. Hand searching relevant systematic reviews will contribute to a reduction in publication bias [28]. To reduce bias associated with selecting studies, two independent reviewers will be used throughout the screening and data collection process [29]. To address bias when synthesising studies, this protocol has been registered prospectively to promote transparency and replicability [10]. Two reviewers will appraise the quality of studies, and low-quality studies will be excluded to avoid inappropriate influence on results [10].

Confidence in cumulative evidence

The overall effectiveness of simulation and the impact of design elements on undergraduate nurses will be assessed with the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) Working Group approach. The GRADE approach allows an assessment of the quality of the body of evidence for each individual outcome. Quality of evidence is dependent on within-study risk of bias, directness of evidence, heterogeneity, precision of effect estimates and risk of publication bias [23]. Two authors will independently assess the quality of evidence regarding the effectiveness of simulation on each outcome. All disagreements will be resolved through discussion and consensus. Quality of evidence will be categorised as high, moderate, low or very low using EPOC [21] definitions.


SBE is seen as a panacea to counteract the potential for inadequate exposure to real-life clinical scenarios. It has become an increasingly popular mode of teaching, yet reports of its effectiveness remain diverse. There is a variety of simulation designs, and embedded elements have implications for learning outcomes that are not well described or scrutinised in the literature. This systematic review will provide a detailed summary of evidence for SBE and SBE design elements in the context of undergraduate nursing education. Findings will contribute to the body of literature examining specific design elements by establishing the effect of SBE on learning outcomes and demonstrating the degree of variability in these elements.


SBE: Simulation-based Education; CINAHL: Cumulative Index to Nursing and Allied Health Literature; MEDLINE: Medical Literature Analysis and Retrieval System Online; APA: American Psychological Association; ERIC: Education Resources Information Centre; EMBASE: Excerpta Medica database; MMAT: Mixed Methods Appraisal Tool; PROSPERO: International Prospective Register of Systematic Reviews; PICO: Population, Intervention, Comparison, Outcome.


Authors’ Contributions

MJ, LM, LB and RW conceived the idea and structured the aim, objectives and PICO elements, and reviewed search string and terms. MJ drafted the manuscript. MJ, LM, LB and RW revised the manuscript, read and approved the final draft.


No funding has been received for the conduct of this review.

Availability of Data and Materials

Datasets generated by searches will be made available upon reasonable request.

Ethics Approval and Consent to Participate

This review does not require ethics approval and, as there are no active participants, consent is not relevant.


Not applicable.


  1. Kim J, Park JH, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Ed. 2016;16:152.
  2. Nielsen K, Norlyk A, Henriksen J. Nursing students’ learning experiences in clinical placements or simulation–A qualitative study. J Nurs Ed Prac. 2019;9(1):32-43.
  3. Al-Ghareeb AZ, Cooper SJ. Barriers and enablers to the use of high-fidelity patient simulation manikins in nurse education: an integrative review. Nurs Ed Today. 2016;36:281-286.
  4. Hayden JK, Smiley RA, Alexander M, Kardong-Edgren S, Jeffries PR. The NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. J Nurs Reg. 2014;5 Suppl 2:S3-S40.
  5. Aebersold M. Simulation-based learning: No longer a novelty in undergraduate education. Online J Issues in Nurs. 2018;23(2):1-13.
  6. Kinio AE, Dufresne L, Brandys T, Jetty P. Break out of the Classroom: The Use of Escape Rooms as an Alternative Teaching Strategy in Surgical Education. J Surg Ed. 2019;76(1):134-139.
  7. Cant RP, Cooper SJ. Use of simulation-based learning in undergraduate nurse education: An umbrella systematic review. Nurs Ed Today. 2017;49:63-71.
  8. Bucknall TK, Forbes H, Phillips NM, Hewitt NA, Cooper S, Bogossian F, et al. An analysis of nursing students' decision-making in teams during simulations of acute patient deterioration. J Adv Nurs. 2016;72(10):2482-2494.
  9. Raurell-Torreda M, Llaurado-Serra M, Lamoglia-Puig M, Rifa-Ros R, Diaz-Agea JL, García-Mayor S, Romero-Collado A. Standardized language systems for the design of high-fidelity simulation scenarios: A Delphi study. Nurs Ed Today. 2020;
  10. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Reviews. 2015;4(1):1-9.
  11. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffman TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2020;372(n71).
  12. Richardson WS. The well built clinical question: a key to evidence-based decisions. ACP J Club. 1995;123(3):A12-13.
  13. Alconero-Camarero AR, Sarabia-Cobo CM, Catalán-Piris M, González-Gómez S, González-López JR. Nursing Students’ Satisfaction: A Comparison between Medium-and High-Fidelity Simulation Training. Int J Environ Res Pub Health. 2021;18(2):804.
  14. Massoth C, Röder H, Ohlenburg H, Hessler M, Zarbock A, Pöpping DM, Wenk M. High-fidelity is not superior to low-fidelity simulation but leads to overconfidence in medical students. BMC Med Ed. 2019;19(1):1-8.
  15. Gentry SV, Gauthier A, Ehrstrom BLE, Wortley D, Lilienthal A, Car LT, et al. Serious gaming and gamification education in health professions: systematic review. J Med Internet Res. 2019;21(3): e12994.
  16. Rourke S. How does virtual reality simulation compare to simulated practice in the acquisition of clinical psychomotor skills for pre-registration student nurses? A systematic review. Int J Nurs Stud. 2020;102:103466.
  17. Babineau J. Product review: covidence (systematic review software). J Canad Health Lib Assoc. 2014;35(2):68-71.
  18. Sterne JA, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366.
  19. Sterne JA, Hernán MA, McAleenan A, Reeves BC, Higgins JP. Assessing risk of bias in a non‐randomized study. Cochrane handbook for systematic reviews of interventions. 2019;621-641.
  20. Hong QN, Fabregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon MP, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Ed Inform. 2019;34(4):285-291.
  21. Cochrane Effective Practice and Organisation of Care (EPOC). EPOC worksheets for preparing a Summary of Findings (SoF) table using GRADE. (2017). Retrieved from
  22. Review Manager (RevMan) [Computer program]. Version 5.3. Copenhagen: The Nordic Cochrane Centre. The Cochrane Collaboration, 2014.
  23. Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA. Cochrane handbook for systematic reviews of interventions. 2019. John Wiley & Sons.
  24. Lin L. Comparison of four heterogeneity measures for meta‐analysis. J Eval Clin Prac. 2020;26(1):376-384.
  25. Chandler J, Cumpston M, Li T, Page M, Welch V. Cochrane handbook for systematic reviews of interventions. 2019. Hoboken: Wiley.
  26. Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020;368.
  27. Munn Z, Tufanaru C, Aromataris E. JBI's systematic reviews: data extraction and synthesis. Am J Nurs. 2014;114(7):49-54.
  28. Paez A. Gray literature: An important resource in systematic reviews. J Evid‐Based Med. 2017;10(3):233-240.
  29. Siddaway AP, Wood AM, Hedges LV. How to do a systematic review: a best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. An Rev Psych. 2019;70:747-770.