Both studies used a pre-post design to explore changes in measures following participation in a sim-IPE session, conducted as part of routine teaching.
Participants, setting and educational context
Simulation sessions in Newcastle and Oxford were broadly similar, except where noted. Sessions took place in purpose-built facilities representing an acute bay in a ward setting, equipped with high-fidelity patient simulators and genuine clinical equipment. Each session was attended by up to nine medical students and up to six nursing students.
Medical students were in clinical placement blocks in their final year, and 3-4 months from starting work as doctors. However, while Newcastle medical students were still two months from their final examinations, Oxford medical students had completed finals and knew their results. Most nursing students were in their second year, although some were recruited from years 1 and 3. In Newcastle, participation in the simulation sessions was compulsory for medical students, and voluntary for nursing students. In Oxford the situation was the reverse. All participants were notified of the voluntary nature of the research in advance.
Each session comprised three acute care scenarios in which students could practise ‘ABCDE’ (airway, breathing, circulation, disability, exposure) assessment – examples included sepsis, anaphylaxis and acute abdominal pain. Students were not informed of the possible scenarios in advance. Scenarios were designed to reflect best practice in clinical simulation [23].
In each scenario, initial assessment was conducted by nursing students. They called medical students in the role of junior doctors, who then carried out their own assessment and began management before the patient deteriorated. This phase of the scenarios involved extensive communication between the medical and nursing students. In Newcastle, scenarios terminated when the medical students called a senior for help, while in Oxford scenarios could continue beyond this point into resuscitation, or even manikin ‘death’. Each scenario took 20-40 minutes to unfold, followed by a 30-40 minute debrief with teaching faculty.
In Newcastle, medical students entered the scenario in pairs, and in Oxford in threes. In Newcastle one student was designated as ‘lead’ in advance (a feature of routine teaching, not a manipulation introduced for the research study), meaning they took responsibility for assessment and management of the patient and for the decision to call for senior help. In Oxford a lead was not nominated by faculty but could be agreed among students, or emerge during the scenario. The remaining students observed the scenario remotely through a video link. A member of simulation faculty was also present in the simulation room, providing details of observations that were not available through the patient simulator (eg capillary refill time). In some sessions, a clinical educator was also present in the observation room, providing commentary and facilitating discussion. Authors AP, MK and ND in Newcastle, and PG, ER and CM in Oxford, were involved in the design and delivery of sessions.
Procedure
Following a standard briefing from teaching faculty, a researcher introduced the study and invited students to complete the pre-session questionnaire. The simulation session then proceeded as normal. Following the final scenario and debrief, the researcher asked all participants to complete the second questionnaire. The post-session questionnaire was administered at this point for logistical reasons, so as not to intrude on the educational delivery of the session; as debriefing is an integral part of simulation-based education, this timing also provides ecological validity.
Questionnaire materials
Questionnaires were anonymous, with pre- and post-session forms linked using unique reference numbers. In addition to scale items described below, the pre-session questionnaire asked for participants’ age, gender and previous experience of simulation. The post-session questionnaire also asked which role students had taken in the session (lead, other participant or remote observer).
Attitudes towards interprofessional learning
Questionnaires in both studies used the 19-item RIPLS measure [8], with a five-point response scale from ‘strongly disagree’ to ‘strongly agree’. Bearing criticisms of RIPLS in mind [24], in analysis we used a uni-dimensional measure based on an item response theory analysis [25] published since our data collection. This measure was derived from the mean of the five items identified as most informative in that analysis. These five ‘RIPLSCore’ items are (with their numbering and associated subscales from the original publication of RIPLS [8]):
Item 3. Shared learning with other health care students will increase my ability to understand clinical problems. (Teamwork and collaboration).
Item 4. Learning with health care students before qualification would improve relationships after qualification. (Teamwork and collaboration).
Item 8. Team-working skills are essential for all health care students to learn. (Teamwork and collaboration).
Item 11. It is not necessary for undergraduate health care students to learn together. (Professional identity, reverse-scored).
Item 15. Shared learning will help to clarify the nature of patient problems. (Professional identity).
All scale items are included in Appendix A.
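For illustration, RIPLSCore scoring of this kind can be sketched in R as follows; the data frame and column names are hypothetical placeholders, not taken from our materials.

```r
# Sketch of RIPLSCore scoring: the mean of the five core items on the
# 1-5 response scale, with item 11 reverse-scored. Column names
# (ripls_3 etc.) are hypothetical placeholders.
score_riplscore <- function(df) {
  df$ripls_11_r <- 6 - df$ripls_11  # reverse-score item 11
  rowMeans(df[, c("ripls_3", "ripls_4", "ripls_8", "ripls_11_r", "ripls_15")])
}
```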
Professional identity
In Study 1 we followed earlier work in medical education [13][14] by using a measure of identity derived from social identity theory, and extensively used in organisational settings [26]. This includes 10 items reflecting different dimensions of identification – awareness, evaluation and affect – but is treated as a single measure. We refer to this simply as Strength of identification. We also used a 4-item scale assessing the Importance of the group to the individual [27].
In Study 2 we sought further refinement of the identity measure by using a scale with three explicit subscales reflecting different dimensions of identity, again derived from social identity theory [28]. Centrality reflects a group’s ‘enduring psychological salience’ [28, p.253] for an individual, linked to their readiness to adopt an identity; it is analogous to the ‘importance’ scale in Study 1. Ingroup Affect reflects positive feelings associated with the group, while Ingroup Ties reflects the interpersonal experience of group membership and a sense of ‘belonging’. Both Ingroup Affect and Ingroup Ties share elements with the Strength scale in Study 1, although Cameron demonstrated that the Strength scale was most strongly associated with Ingroup Affect [28]. Items in Study 2 relating to Professional Group were adapted to the future tense, eg ‘In general, the fact that I am going to be a doctor is an important part of my self-image’.
Study 1 considered participants’ identification only with their eventual professional group (ie doctor or nurse). Study 2 also considered student group (medical or nursing student) and the interprofessional team in the simulation scenario. In analysis we refer to these groups as the ‘Target’ of the identity measures.
Analysis
To evaluate internal consistency, Cronbach’s alpha was calculated for all scales. Sample sizes were too small to consider scale dimensionality, and so scale structures established in the literature were used.
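As an illustration, such a check can be run with the alpha() function from the psych package in R; the data frame name below is a placeholder.

```r
library(psych)

# Internal consistency for one scale: 'ripls_items' is a hypothetical
# data frame holding the five RIPLSCore items (item 11 already
# reverse-scored).
alpha(ripls_items)  # reports raw and standardised Cronbach's alpha
```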
Missing data
Respondent-mean substitution [29] was used to generate scale scores when just one item had been omitted; this generated 10 missing values from eight respondents in Study 1, and 36 values from 25 respondents in Study 2. If more than one item was omitted, no scale score was calculated; this applied to two respondents in Study 1 and 25 in Study 2, many of whom did not complete all of the second questionnaire due to time constraints or a printing error.
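A minimal sketch of this rule in R, assuming a data frame of one scale’s item responses (names hypothetical):

```r
# Respondent-mean substitution via row means: taking the mean of the
# available items is equivalent to substituting the respondent's own
# mean for a single missing item. Scores with more than one missing
# item are set to NA, so no scale score is produced.
scale_score <- function(items) {
  n_missing <- rowSums(is.na(items))
  score <- rowMeans(items, na.rm = TRUE)
  score[n_missing > 1] <- NA
  score
}
```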
Regression modelling
The main analysis used linear mixed effects modelling, a form of linear regression suitable for repeated measures designs, which allows analysis of unbalanced datasets [30]. Analysis used the lme4 package in R [31][32].
RIPLSCore and the identity subscales were used as outcome variables in separate analyses. We used a criterion-based approach to model selection to identify whether hypothesised effects contributed to these scores. Starting with a model including all hypothesised effects, the contribution of each was tested using the drop1() function in lme4 [31]. Final models retained only predictors whose removal would significantly reduce model fit.
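For illustration, this selection step might look as follows in R; the model shown is a simplified Study 1-style specification with placeholder names, not our exact code.

```r
library(lme4)

# Initial model with hypothesised effects and a random intercept per
# respondent; ML fit (REML = FALSE) so likelihood-ratio tests are valid.
# Variable and data frame names (RIPLSCore, PrePost, Group, Respondent,
# dat1) are illustrative.
m_full <- lmer(RIPLSCore ~ PrePost * Group + (1 | Respondent),
               data = dat1, REML = FALSE)

drop1(m_full, test = "Chisq")  # tests each droppable term's contribution
```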
An a priori comparison tested whether those Newcastle medical students designated as ‘lead’, with a nominally more active role in the simulation, would exhibit greater changes in measures than other participants. No such effects were found, and so role was not included in models. Initial model building also found that previous experience of interprofessional simulation did not contribute to any models.
In all regression models, respondent was included as a random intercept to control for individual differences in responses, while other predictors were included as fixed effects. Factors included in Study 1 and retained as significant effects in at least one final model were:
- Pre-Post (to identify changes in measures following the simulation session).
- Participant Group (to identify differences between nursing and medical students).
Analyses for Study 2 included these and additional effects:
- Site (to identify differences between Oxford and Newcastle).
- Target of identity measure (to identify differences between identity measures referring to Professional Group, Student Group and Team).
Two- and three-way interactions were included in initial models to examine whether effects were consistent across levels of the other factors.
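An illustrative initial model for Study 2, with all two- and three-way interactions among the fixed factors (names again hypothetical):

```r
library(lme4)

# (PrePost + Group + Site + Target)^3 expands to all main effects plus
# all two- and three-way interactions, but no four-way interaction.
m_initial <- lmer(Identity ~ (PrePost + Group + Site + Target)^3
                  + (1 | Respondent),
                  data = dat2, REML = FALSE)
```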
Follow-up analyses on final models used the emmeans package [33] to calculate and compare estimated marginal means (the means derived from the model, rather than the sample data). These are reported in place of regression coefficients to aid clarity of interpretation (coefficients are provided in Appendix B along with all estimated marginal means). All p-values for multiple comparisons were adjusted using the Tukey HSD method.
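A sketch of these follow-up comparisons, assuming a final model m_final retained from the selection step above (names illustrative):

```r
library(emmeans)

# Estimated marginal means for Pre-Post within each participant group,
# with Tukey-adjusted pairwise comparisons.
emm <- emmeans(m_final, ~ PrePost | Group)
summary(emm)                  # the marginal means themselves
pairs(emm, adjust = "tukey")  # adjusted pairwise comparisons
```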