This study was performed from May 2021 to September 2021 at the University of Maryland Medical Center. Our pilot sample population was recruited from fourth-year medical students at the University of Maryland School of Medicine and the Johns Hopkins University School of Medicine. Inclusion criteria were age 18 to 88 years and no prior completion of the ASSET course; exclusion criteria were age 89 years or older and prior completion of the ASSET course. Fourth-year medical students were chosen because we wanted to evaluate participants who are familiar with basic operative technique (i.e., basic manual surgical skills) but not yet familiar with the specific steps of a fasciotomy or other advanced trauma training received by residents. Our control group was a historical cohort of residents who completed pre-ASSET and post-ASSET fasciotomy evaluations at the University of Maryland School of Medicine as part of a prior experiment [8].
The first session took place in an office environment (Fig. 1A), where participants completed the VR-SET program using a head-mounted virtual reality display (HMD; Rift S, Oculus, Dallas, TX) (Fig. 1B). The VR training module consists of step-by-step demonstration segments of the lower extremity fasciotomy presented from a 'surgeon perspective' with a 180-degree field of view. The team created a customized Unity VR application to display high-fidelity content acquired with a custom-built GoPro camera rig, and aligned the imagery to correct for stereo separation, rotation, and camera distortion so that the visuals appeared natural to the viewer (Fig. 1C). This was achieved by instantiating two half-sphere projection screens, one for each eye, matched to the field of view, lens distortion, position, and rotation of the GoPro cameras used for data acquisition. Placing these screens at a horizontal separation matching the stereo separation of the human eyes ensured that visual content was displayed as each eye would see it naturally. The team also integrated advanced video decoding software to handle the dual 4K video streams required to play back the medical procedure. To make the training module more interactive and intuitive, we added hand tracking using a Leap Motion hand tracker (Fig. 1D). Instead of relying on standard VR controllers, users select from menus and navigate through module sections using buttons projected onto their wrist (Fig. 1E) or hand gestures. We note that no commercially available hardware or software provides similar levels of fidelity and detail.
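The per-eye screen placement described above can be illustrated with a minimal sketch: each half-sphere screen is offset horizontally by half the interpupillary distance, so each eye views the footage from its matching GoPro camera. This is not the study's Unity code; the function name, coordinate convention, and interpupillary-distance value are assumptions for illustration only.

```python
# Illustrative sketch of per-eye stereo screen placement (assumed values).
IPD_M = 0.063  # typical adult interpupillary distance in meters (assumed)

def eye_screen_position(eye: str, ipd: float = IPD_M) -> tuple:
    """Return the (x, y, z) center of the half-sphere screen for one eye.

    The left and right screens sit at -ipd/2 and +ipd/2 on the horizontal
    axis, mirroring the stereo separation of the camera rig, so each eye's
    screen reproduces the viewpoint of the corresponding camera.
    """
    offset = -ipd / 2 if eye == "left" else ipd / 2
    return (offset, 0.0, 0.0)

print(eye_screen_position("left"))   # (-0.0315, 0.0, 0.0)
print(eye_screen_position("right"))  # (0.0315, 0.0, 0.0)
```

In practice each screen would also inherit the rotation and lens-distortion parameters of its camera, as the text describes; the sketch shows only the horizontal stereo offset.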
In the second session, 10–14 days after training, participants performed a lower extremity fasciotomy at the State Anatomy Board clinical training laboratories. During the procedure, the research team asked participants questions from a standardized script to assess their knowledge of the anatomy and pathophysiology of compartment syndrome [9]. After the standardized questions were completed, the participant was asked to perform the lower extremity fasciotomy. As the participant performed the procedure, two trained evaluators used a validated question-and-observation script, the Individual Procedure Score (IPS), to record the participant's performance (Appendix 1). This scoring system was validated against successful procedural outcomes in a study of approximately 200 surgeons performing the procedure [9]; higher IPS scores correlate with better performance. At the conclusion of the assessment, the participant was debriefed on the procedure but did not receive the numerical results of their performance.
A two-sample t-test was performed to test whether VR-SET training differs from cadaver-based training, comparing successful compartment decompressions and errors committed, with significance set at α = 0.05. The five IPS component scores were compared using one-way ANOVA with significance set at α = 0.05. All data are reported as mean ± standard deviation.
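The comparisons above can be sketched with the standard formulas: a pooled two-sample t statistic for the between-group comparison and a one-way ANOVA F statistic across component scores. The numeric arrays below are illustrative placeholders, not study data.

```python
import statistics

def two_sample_t(a, b):
    """Pooled (equal-variance) two-sample t statistic."""
    na, nb = len(a), len(b)
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    # Pooled variance weights each sample variance by its degrees of freedom.
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (mean_a - mean_b) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def one_way_f(groups):
    """One-way ANOVA F statistic across k groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = statistics.fmean([x for g in groups for x in g])
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.fmean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative error counts for two hypothetical training groups.
vr_errors = [1, 2, 3]
cadaver_errors = [2, 3, 4]
print(round(two_sample_t(vr_errors, cadaver_errors), 4))  # -1.2247
print(round(one_way_f([vr_errors, cadaver_errors]), 4))   # 1.5
```

The resulting statistics would be compared against the critical value for α = 0.05 at the appropriate degrees of freedom (e.g., via a statistics package) to decide significance.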
Figure 1: A) Participants undertook VR-SET program training in an office environment. B) The Oculus Rift S head-mounted display with the Leap Motion hand tracker. C) High-fidelity content acquired through a custom-built GoPro camera rig ensures that the visuals appear natural to the viewer. D) Leap Motion hand control places users' hands in the virtual environment. E) The Leap Motion tracker allows study subjects to easily select menu items and navigate through multiple segments of the training module.