Participants
The sample was recruited through advertisements, social media, and email. The final sample included 83 community-dwelling adult volunteers (65.5% female) with no self-reported history of neurological or psychiatric conditions, current medication use, or substance abuse. Participants were native speakers of Portuguese, and their ages ranged from 18 to 54 years (M = 25.28; SD = 7.97). Regarding education, 3.6% of the participants had completed the 9th grade, 16.9% the 12th grade, 45.8% a bachelor's degree, 30.1% a master's degree, and 3.6% a PhD. All participants reported normal or corrected-to-normal vision and no auditory deficits.
Materials
Self-report measures
Participants filled out self-report measures on the day of data collection using the Qualtrics online platform (Qualtrics, Provo, UT). Two measures stemming from different perspectives on psychopathy were used. The TriPM is a 58-item scale with items rated from 0 (false) to 3 (true) (Patrick, 2010; adapted version by Paiva et al., 2020). The TriPM measures the phenotypic expressions described by the Triarchic Model of Psychopathy, namely boldness (α = .85), meanness (α = .89), and disinhibition (α = .81). The SRP Short Form (SRP-SF) includes 29 items rated from 1 (strongly disagree) to 5 (strongly agree) (Paulhus et al., 2016; adapted version by Seara-Cardoso et al., 2020) and splits the two main PCL-based factors into facet 1 (interpersonal traits; α = .83), facet 2 (affective traits; α = .76), facet 3 (impulsive behavior; α = .69), and facet 4 (antisocial behavior; α = .53).
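For illustration, subscale totals and internal consistency coefficients of the kind reported above can be computed from item-level data as in the minimal Python sketch below (the file name, column names, and item-to-subscale mapping are hypothetical; the actual scoring keys are those of the published instruments):

```python
import pandas as pd
import pingouin as pg  # provides cronbach_alpha()

# Hypothetical item-level data: one row per participant,
# columns tripm_1 ... tripm_58 scored 0-3.
items = pd.read_csv("tripm_items.csv")

# Hypothetical item-to-subscale mapping (the real TriPM key differs).
boldness_items = [f"tripm_{i}" for i in (1, 4, 7, 10)]

# Subscale total score per participant.
items["boldness"] = items[boldness_items].sum(axis=1)

# Internal consistency (Cronbach's alpha) for the subscale items.
alpha, ci = pg.cronbach_alpha(data=items[boldness_items])
print(f"Boldness alpha = {alpha:.2f}")
```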
VR task
The experimental manipulations (emotion and eye-gaze direction) were presented through a VR system with an embedded eye-tracking sensor. The VR task consisted of a brief passive interaction between the participant and a non-player character (i.e., an avatar) of the same sex. The scenario resembled a typical coffee shop, with elements such as chairs, tables, windows, shelves (with books and other room decoration), lamps, plants, customers, a waitress, and a bar stand. Participants sat on a bench in front of a table with some dishware; on the other side of the table was a chair for the avatar. The participants' seat was adjusted to the height and distance of the avatar and of the table in front of them to make the experience as realistic as possible. The sounds presented through the headphones were consistent with the environment (e.g., people chatting and street traffic).
The task included eight trials (passive interactions), presented in random order, each lasting 90 seconds. Two variables were manipulated between trials: the avatar's eye-gaze direction (the avatar directed its gaze at the participant 20% vs. 80% of the time) and the avatar's facial expression (happiness, fear, sadness, neutral). The direction of eye gaze, comprising both direct and averted gaze, allows task difficulty to be modulated. The inclusion of sadness, an emotion of negative valence, also allowed us to better assess whether the effect is fear-specific. Happiness, having positive valence, was included to discriminate valence effects. Neutral facial expressions served as the control condition. The FEE designed for this task were adapted from Oliver and Alcover (2020) and are represented in Fig. 1. In each trial, the avatar expressed one of the four emotions mentioned above, and the intensity of the emotion was randomly switched between low and high during the 90 seconds to prevent the stimuli from being perceived as static and to increase the plausibility of the scenario. The avatar's eye contact varied in the following order: in the 20% condition, no eye contact for a random 20–50 seconds, eye contact for 20 seconds, and no eye contact for the remaining time; in the 80% condition, eye contact for a random 20–50 seconds, no eye contact for 20 seconds, and eye contact for the remaining time (see the sketch below).
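The gaze schedule can be expressed algorithmically. The sketch below is not the actual task code (the task was implemented in Unity); it is an illustrative reading of the timing description, assuming the initial segment has a random duration between 20 and 50 seconds, with hypothetical function and variable names:

```python
import random

TRIAL_DURATION = 90  # seconds

def gaze_schedule(condition, seed=None):
    """Return (gaze_state, duration_s) segments for one 90-second trial.

    condition: "20" -> avatar looks at the participant ~20% of the time,
               "80" -> avatar looks at the participant ~80% of the time.
    """
    rng = random.Random(seed)
    first = rng.uniform(20, 50)            # random initial segment of 20-50 s
    remaining = TRIAL_DURATION - first - 20
    if condition == "20":
        # no eye contact (random 20-50 s), eye contact (20 s), no eye contact (rest)
        return [("averted", first), ("direct", 20), ("averted", remaining)]
    # 80% condition: eye contact (random 20-50 s), no eye contact (20 s), eye contact (rest)
    return [("direct", first), ("averted", 20), ("direct", remaining)]
```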
At the end of each trial (Fig. 2), participants answered the following questions: (1) emotion recognition - "What emotion did the person express?" (multiple-choice question with the following options: "fear", "happiness", "sadness", "anger", "neutral", "disgust", "surprise", "euphoria", "pride", "pain", and "embarrassment"); (2) emotion arousal - "What was the person's level of arousal?" (ratings from "1 - very low" to "7 - very high"); (3) valence - "How positive or negative was the person's emotion?" (ratings from "1 - very negative" to "7 - very positive"); (4) direct gaze time estimation - "What percentage of time was the person looking at you?" (multiple-choice question with the following options: "10%", "20%", "30%", "40%", "50%", "60%", "70%", "80%", "90%", "100%"); (5) comfort - "How comfortable were you in the previous scenario?" (ratings from "1 - very uncomfortable" to "7 - very comfortable"); (6) emotion direction - "Did you feel the emotion was directed at you?" (ratings from "1 - not at all" to "7 - a lot"); and (7) scenario time estimation - "How long do you estimate the previous scenario lasted? (in seconds)" (multiple-choice question with the following options: "71–80", "81–90", "91–100", "101–110", "111–120", "121–130"). Questions were displayed and answered inside the VR scenario; participants selected their responses by moving their eye gaze.
Procedures and Data Collection
Upon arrival at the laboratory, participants were told that the main goal of the study was to examine VR scenarios for research. After signing the written informed consent, participants filled out the online questionnaire with the sociodemographic data and self-report measures. Participants then performed the VR task while their oculomotor activity was tracked and recorded through the eye-tracking sensor embedded in the VR headset.
The VR system comprised software for VR delivery (Unity) and a headset (HTC Vive system with dual AMOLED 3.6" diagonal screens, 1080 × 1200 pixels, 90 Hz refresh rate, 110° field of view). The data collection and the VR scenario ran on a Dell computer with an Intel Core i7-7700 processor (3.60 GHz) and an NVIDIA GeForce RTX 3060 graphics card. The task was initiated with the Viveport application. Eye-tracking data were collected using the VIVE Pro Eye system incorporated into the VR headset. Data collection took place in a room with VR sensors placed 1.80 meters from the ground, creating a VR area of 11.5 m².
Before initiating the task, the headset was placed and adjusted for each participant so that their eyes were aligned with the center of the screen, to avoid blurring of the presented stimuli. A table was placed in the middle of the VR area to simulate the coffee table presented in the VR task. After placing the headset, the experimenter ensured that participants were comfortable and perceiving a well-focused display. The eye tracker was then calibrated for each participant using a 5-point calibration screen, and the headset's audio output was adjusted individually. Eye-tracking data were sampled at 45 Hz.
Participants were told that during the experiment they would find themselves in a virtual environment resembling a coffee shop, with a person in front of them who could be expressing different emotions. They were instructed to observe the presented scenario and to answer a few questions at the end of each trial.
After the instructions, participants were given a 1-minute habituation period in the scenario without the avatar and another 1-minute period with the avatar present, which allowed them to become familiar with the environment and surroundings. Participants were free to visually explore the scenario in a 360° range. After checking that the eye tracking was functioning correctly and ensuring the comfort and well-being of the participants, the experiment began.
At the end of the task, participants received a 10€ gift card. The total data collection session lasted approximately 1 hour. All procedures were approved by the local ethics committee (Refª 2019/01–2).
Eye-tracking data analyses
At each time point, participants' eye gaze was classified according to the avatar element being looked at (eyes, mouth, face, nose, or body). The total time spent looking at each part of the body and face (dwell time) was then computed for each experimental condition. All computations were performed using MATLAB.
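The original computations were performed in MATLAB; the following Python sketch (with hypothetical file and column names) illustrates an equivalent dwell-time aggregation from sample-level gaze classifications:

```python
import pandas as pd

# Hypothetical long-format eye-tracking samples: one row per 45 Hz sample,
# with the avatar region hit by the gaze at that time point.
samples = pd.read_csv("gaze_samples.csv")
# columns: participant, trial, emotion, eye_contact, region
# region in {"eyes", "mouth", "face", "nose", "body"}

SAMPLE_DURATION = 1 / 45  # seconds per sample at a 45 Hz sampling rate

# Dwell time = number of samples on each region x sample duration,
# aggregated per participant and experimental condition.
dwell = (
    samples
    .groupby(["participant", "eye_contact", "emotion", "region"])
    .size()
    .mul(SAMPLE_DURATION)
    .rename("dwell_time_s")
    .reset_index()
)
print(dwell.head())
```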
Analytical strategy
Independent repeated-measures ANOVAs with eye contact (20%, 80%) and emotion (happiness, fear, sadness, neutral) as within-subjects factors, and the behavioral responses (emotion recognition accuracy, arousal, valence, comfort, emotion direction, direct gaze time estimation, and scenario time estimation) as dependent variables, were performed to check whether the task produced the expected effects, that is, whether these measures were successfully manipulated.
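As an illustration of this manipulation check (the statistical software actually used is not specified here), a 2 × 4 repeated-measures ANOVA for one dependent variable could be run with the pingouin package, assuming a long-format data frame with hypothetical column names:

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x condition,
# with columns participant, eye_contact ("20"/"80"), emotion, accuracy.
df = pd.read_csv("behavioral_long.csv")

# Two-way repeated-measures ANOVA: eye contact x emotion, one DV per model.
aov = pg.rm_anova(
    data=df,
    dv="accuracy",
    within=["eye_contact", "emotion"],
    subject="participant",
)
print(aov)
```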
To test our hypotheses, independent multivariate regression models were estimated with psychopathic traits as predictors of behavioral and eye-tracking responses. Multivariate methods are a reliable statistical tool to estimate effects once the intercorrelations between outcome variables are controlled; in this case, the covariation between the eye-contact conditions of the same indicator. The first set of models included psychopathy traits (TriPM or SRP-SF) as predictors of behavioral data, in independent models (e.g., emotion recognition accuracy in the 20% and 80% eye-contact conditions; emotion arousal in the 20% and 80% eye-contact conditions; emotion valence in the 20% and 80% eye-contact conditions; etc.). The second set of models included psychopathy traits (TriPM or SRP-SF) as predictors of the eye-tracking data, in separate models (dwell time on the body, mouth, nose, face, and eyes, in both the 20% and 80% conditions). The alpha threshold for statistical significance was set at .01 to correct for multiple comparisons. Based on Cohen's (1988) recommendations, we interpreted a coefficient of determination of .01 as a small effect, .09 as a medium effect, and .25 as a large effect.
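A minimal sketch of one such multivariate model is shown below, using the MANOVA interface of statsmodels (an assumption about tooling; file and column names are hypothetical). The two eye-contact conditions of a single behavioral indicator are modeled jointly as outcomes, with the TriPM dimensions as predictors, so that trait effects are evaluated while the covariation between conditions is taken into account:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical wide-format data: one row per participant, with the 20% and
# 80% eye-contact versions of one behavioral indicator plus TriPM scores.
df = pd.read_csv("regression_wide.csv")
# columns: accuracy_20, accuracy_80, boldness, meanness, disinhibition

# Multivariate regression: both eye-contact conditions modeled jointly,
# with psychopathy dimensions as continuous predictors.
model = MANOVA.from_formula(
    "accuracy_20 + accuracy_80 ~ boldness + meanness + disinhibition",
    data=df,
)
print(model.mv_test())  # multivariate tests per predictor, evaluated at alpha = .01
```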