AnReal: an Android-based Extended Reality System
Main Features and Interaction Design
The proposed system, which we call AnReal, is an Android mobile application placed in a simple VR headset (approximately 32 USD). The application allows users to carry out one classic rehabilitation exercise, but the same concept can be applied to any other exercise that requires users to move their body in an expected way. The exercise is based on bending forward, a movement that is commonly impaired in patients with chronic low back pain (Osumi et al. 2019).
The system has two modes: camera preview and trajectory simulation. In the camera preview mode, the user observes the real environment, in its current state, through the headset. In the trajectory simulation mode, a pre-recorded video reproduces the expected trajectory of the user while carrying out the rehabilitation exercise (Fig. 1). The video is recorded on the same phone; a WiFi connection to a computer allows the video to be processed there and accessed instantly from the phone. The system automatically switches between the camera preview mode and the trajectory simulation mode at a predetermined movement angle, so that the user stops moving but continues to see as if he/she were still moving, creating an illusion of movement.
In the present study, healthy participants are asked to stop at a certain angle to test the functionality of the trajectory continuation. In a later study, this transition will occur automatically once the person with low back pain can no longer bend further.
The participant therefore interacts with the system by carrying out the rehabilitation exercise while wearing the headset, which displays either the real environment, captured through the camera, or a video of a simulated trajectory that continues the user's movement. The idea behind this interaction is that, in a future study, a patient with pain and kinesiophobia would start to lose their fear of movement by watching themselves carry out movement trajectories beyond the current capabilities imposed by their condition. In other words, by exploiting a visual-vestibular mismatch, the aim is to make the person believe that they are completing the expected movement without pain, thereby decreasing the fear of movement.
System Development
The Android mobile application was developed using a Samsung S21 FE phone. Figure 2 illustrates the software architecture, which can be segmented into three layers: (1) the application framework layer, (2) the runtime and libraries layer, and (3) the kernel layer (Yan et al. 2016). The application was developed using the React Native (RN) framework and a native module in Android Studio. Each component requires different libraries and abstraction levels to interact with the hardware abstraction layer and thus communicate with various sensors.
The native Java module accesses the device's rear camera stream through the Camera2 API. The CaptureRequestBuilder class in this API allows the stream to be duplicated across segments of the screen, producing two identical views (one per eye) at a 30 Hz refresh rate. The accelerometer, gyroscope, and magnetometer were used to obtain the dynamic inclination of the headset through the SensorManager API on the native side. The RN framework displays the pre-recorded video using the react-native-video library and reads the dynamic inclination with the expo-sensors library. The JavaScript code communicates with the native module through the React Native Bridge, which provides access to features that are not available in JavaScript (Native Modules, 2023).
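The dynamic inclination mentioned above can be derived from a gravity or accelerometer sample with basic trigonometry. The following is a minimal sketch of one way to do so; the class name, axis convention (standard Android device frame: x right, y up, z out of the screen), and the mapping of the phone's axes to the headset are illustrative assumptions, not details taken from the system.

```java
// Sketch: estimating forward inclination (pitch) from a gravity/accelerometer
// sample. The axis convention (x right, y up, z out of the screen) and its
// mapping to the headset are assumptions for illustration.
public class TiltEstimator {
    /** Returns the forward inclination in degrees for a gravity vector. */
    public static double inclinationDegrees(double gx, double gy, double gz) {
        // 0 degrees when the user stands upright (gravity along the y axis);
        // approaches 90 degrees as the user bends fully forward.
        double pitchRad = Math.atan2(gz, Math.sqrt(gx * gx + gy * gy));
        return Math.toDegrees(pitchRad);
    }

    public static void main(String[] args) {
        // Upright: gravity entirely along y.
        System.out.println(inclinationDegrees(0, 9.81, 0));
        // Bent 30 degrees forward: part of gravity shifts onto z.
        System.out.println(inclinationDegrees(0,
                9.81 * Math.cos(Math.toRadians(30)),
                9.81 * Math.sin(Math.toRadians(30))));
    }
}
```

In practice, Android's SensorManager can also provide a fused orientation directly; the explicit computation is shown only to make the geometry of the trigger angle concrete.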
The intention is to make the transition between the modules smooth and imperceptible once the predetermined angle is reached. The cycle of each repetition of the exercise begins with the camera preview from the Android native module. Each time the person bends forward, there is a probability that the trajectory is simulated (i.e., the view switches to the RN module showing the video), ending the activity in the native module. If so, the video plays until it reaches the point from which it started, and control returns to the native module, completing the cycle. The functions in charge of transitioning between modules sample the person's tilt at a frequency at least twice the refresh rate of the headset.
The goal is that once the person reaches this mark, the application simulates the trajectory artificially, creating the illusion that they continue to lean forward until they see their own feet, even though they remain static.
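The repetition cycle described above can be sketched as a small state machine driven by the sampled tilt. The class and method names, the fixed per-repetition switch probability, and the callback structure are illustrative assumptions; they are not the system's actual implementation.

```java
import java.util.Random;

// Sketch of the repetition cycle: the view starts in the native camera
// preview and, once the trigger angle is reached, may switch to the
// pre-recorded video with some probability. Names and the fixed switch
// probability are illustrative assumptions.
public class TransitionController {
    public enum Mode { CAMERA_PREVIEW, VIDEO_SIMULATION }

    private Mode mode = Mode.CAMERA_PREVIEW;
    private final double triggerAngleDeg;   // predetermined angle (e.g. 30)
    private final double switchProbability; // chance a repetition is simulated
    private final Random rng;

    public TransitionController(double triggerAngleDeg,
                                double switchProbability, long seed) {
        this.triggerAngleDeg = triggerAngleDeg;
        this.switchProbability = switchProbability;
        this.rng = new Random(seed);
    }

    /** Called for every tilt sample (at >= 2x the headset refresh rate). */
    public Mode onTiltSample(double angleDeg) {
        if (mode == Mode.CAMERA_PREVIEW && angleDeg >= triggerAngleDeg
                && rng.nextDouble() < switchProbability) {
            mode = Mode.VIDEO_SIMULATION; // hand over to the RN video module
        }
        return mode;
    }

    /** Called when the video has played back to its starting point. */
    public void onVideoFinished() {
        mode = Mode.CAMERA_PREVIEW;       // return to the native module
    }

    public Mode mode() { return mode; }
}
```

With a switch probability of 1.0 this reproduces the third set of the evaluation (every repetition simulated); with 0.0 it reproduces the first (camera preview only).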
Study Design
We evaluated the AnReal prototype in a laboratory setting in a classroom at the kinesiology department of Pontificia Universidad Católica de Chile, with artificial light and no moving elements in the field of view (e.g. people walking by). We defined a spot where the subjects would perform the exercise and marked a line on the wall indicating how far to bend down. This section describes the evaluation of the system.
Recruitment
This study used a convenience sample of healthy kinesiology students at Pontificia Universidad Católica de Chile in Santiago, Chile, recruited through an open invitation sent by email. Participants were excluded if they had any pathology that could be affected by the use of a headset, or if they had low back pain. This study was approved by the University Ethics Committee.
Measurements
We used four questionnaires to measure user experience, cybersickness, sense of presence, and immersive tendencies. All of the questionnaires were translated into Spanish by the researchers, except for the User Experience Questionnaire (UEQ), for which we used the official Spanish version.
To capture the user experience, we used the Spanish version of the UEQ (Somrak et al. 2019). This consists of 26 items on a 7-point scale, which are summarized into six categories: attractiveness, perspicuity, efficiency, dependability, stimulation and novelty.
To measure cybersickness, we used the most widely used questionnaire in this area (Weech et al. 2019), the Simulator Sickness Questionnaire (SSQ) (Kennedy 1993). This questionnaire has 16 items on a 4-point scale, which can be summarized into three subscales: oculomotor discomfort, disorientation, and nausea. The total score represents the overall severity of the cybersickness symptoms.
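For reference, the SSQ subscale and total scores are conventionally obtained by weighting the raw subscale sums. The sketch below assumes the standard weights from Kennedy et al. (1993); the scoring procedure used in this study is not stated in the text, so this is background rather than a description of the system.

```java
// Sketch of conventional SSQ scoring from the raw subscale sums, assuming
// the standard weights of Kennedy et al. (1993). The study's own scoring
// procedure is not specified in the text.
public class SsqScoring {
    public static double nausea(double rawN)         { return rawN * 9.54; }
    public static double oculomotor(double rawO)     { return rawO * 7.58; }
    public static double disorientation(double rawD) { return rawD * 13.92; }

    /** Total severity: the sum of the three raw subscale scores times 3.74. */
    public static double total(double rawN, double rawO, double rawD) {
        return (rawN + rawO + rawD) * 3.74;
    }
}
```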
To measure sense of presence, the Presence Questionnaire (Witmer et al. 1998) was used. This is the most widely used questionnaire in this category (Rhiu et al. 2020). We removed the components measuring sound and haptics, resulting in a reduced version of 18 items on a 7-point scale. From these 18 items, several subscales (e.g. realism, possibility to act, interface quality, possibility to examine, and self-evaluation of performance) can be obtained.
We also used the Immersive Tendencies Questionnaire (Witmer et al. 1998) to index an individual's likelihood of feeling immersed in virtual settings. It consists of 18 items on a 7-point scale. The subscales show the level of involvement, attentional focus and the tendency to play video games.
Finally, we asked two open questions: (1) “How did you feel wearing the headset?”, (2) “Did you feel you could move more than you actually did?”. After these questions, we explained the goal of the study to the participants, and then asked two final open questions: (3) “Do you think the goal was achieved?”, (4) “Do you have recommendations for improvements in the system?”.
Procedure
First, we recorded a video that simulated the trajectory and processed it so it would be ready for the participant. Then, the participant was asked to fill out the informed consent form, basic demographic and activity-level information, and the Immersive Tendencies Questionnaire. We taught the participant the exercise they would perform throughout the evaluation: they had to lean forward using their spine and stop when they reached the mark on the wall (positioned so that the person would reach roughly 30 degrees of inclination), keeping their legs straight and the head aligned with the trunk (Fig. 1).
We then asked the participant to wear the headset so they could get used to seeing their surroundings through it. The participant then performed three sets of 10 repetitions of the aforementioned exercise. During the first set, the participant saw through the headset in camera preview mode (real environment). In the second set, the participant's trajectory was continued by the video simulation on some of the repetitions according to a probabilistic function; as the participant progressed through the repetitions, there was a higher probability that the video would appear, thus achieving a surprise factor. The surprise arises because people expect their vision to match their body position, but their visual trajectory unexpectedly continues on its own. This is a key attribute of the system because it helps to test the cybersickness parameters and the illusion of movement in healthy individuals. Additionally, in the future, the aim is to make the person believe that they are completing the expected movement without pain, thereby decreasing the fear of movement.
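The exact probabilistic function used in the second set is not specified; as a minimal sketch, one plausible schedule makes the switch probability grow linearly with the repetition index, so later repetitions are increasingly likely to be simulated. The linear form is an illustrative assumption.

```java
// Sketch of a switch-probability schedule that grows with the repetition
// index, as described for the second set of the evaluation. The linear
// form is an illustrative assumption; the actual function is not given.
public class SwitchSchedule {
    /** Probability that repetition rep (1-based) of totalReps is simulated. */
    public static double switchProbability(int rep, int totalReps) {
        return (double) rep / totalReps; // e.g. 0.1 for rep 1 of 10, 1.0 for rep 10
    }
}
```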
Finally, in the third set, the trajectory was simulated in all repetitions. After the participants finished all of the exercises, they were asked to fill out the remaining questionnaires, namely the UEQ, SSQ, and Presence Questionnaire. Each participant was then asked the four open questions described above. This brief interview was audio recorded.