With the development of three-dimensional (3D) technologies, the demand for high-quality images keeps growing, resulting in various 3D visualization approaches in both head-mounted and front-view displays (Geng, 2013; Naderi et al., 2020). In the field of human-computer interaction, the crucial question is whether the depth effect produced by a new method helps users perceive spatial relationships among displayed objects. Answering this question depends on the physical properties of the generated visual stimuli, which are related to the display technology, and on the specifics of human visual perception (Jameson, 2012). Therefore, assessing the ergonomics of three-dimensional visualization systems has become essential in terms of depth perception and the deployment of visual attention (Huynh-Thu et al., 2011; Poulakos et al., 2014). Because visual search in 3D visualization depends on depth perception (O'Toole & Walker, 1997; Finlayson et al., 2013), depth cues play an essential role in depth judgment, and applying more depth cues results in more direct depth perception (Hoffman et al., 2010; Reichelt et al., 2010).
Depth cues are sources of depth information whose weights change depending on the viewing condition (Howard & Rogers, 2012). At close viewing distances, relative binocular disparity is considered a prerequisite for accurate depth perception (Howard & Rogers, 2012; Rogers, 2019). It may contribute to image saliency and to the deployment of attentional resources across depth planes (O'Toole & Walker, 1997; Finlayson et al., 2013; Plewan & Rinkenauer, 2019). According to classical visual search models (Treisman & Gelade, 1980; Wolfe, 2007), there are early, preattentive processes and later processes with the active involvement of attention. Both neurophysiological and behavioral studies have shown that information about binocular disparity is available early in visual processing and manifests in later, higher-order representations. Namely, considerable differences were revealed in the amplitudes of event-related potentials (ERPs) over parietal and occipital regions at 90–130 ms after the onset of a visual stimulus when comparing brain activity during the viewing of stereoscopic and two-dimensional images (Backus et al., 2001; Skrandies, 2001; Rutschmann & Greenlee, 2004; Fischmeister & Bauer, 2006; Avarvand et al., 2017; Marini et al., 2019), indicating the critical role of these brain areas in the processing of depth information. Some studies reported slightly earlier (Oliver et al., 2018) or later (Akay & Celebi, 2009; Pegna et al., 2018) manifestations of brain reactions to binocular disparity.
In addition to neural indicators of early sensitivity to depth, studies have highlighted differences in later cognitive processes (Finlayson et al., 2013; Liu et al., 2013; Roberts et al., 2015; Pegna et al., 2018), showing that ERP amplitudes at 150–200 ms correlated with the absolute values of binocular disparities (Liu et al., 2013) and with the orienting of attention (Van den Berg et al., 2016). At later latencies, stereoscopic input modulated high-level perceptual processes involved in the integration of information (Kasai & Morotomi, 2001; Liu et al., 2013; Roberts et al., 2015; Van den Berg et al., 2016), figure-ground segmentation (Finlayson et al., 2013; Roberts et al., 2015; Pegna et al., 2018), view generalization (Oliver et al., 2018), and recognition (Avarvand et al., 2017). These processes were primarily associated with neural activity in the parietal rather than the occipital region.
Most of the research aimed at clarifying the neural correlates of binocular disparity processing was performed using stereoscopic images, ranging from anaglyph-based (Kasai & Morotomi, 2001; Oliver et al., 2018; Pegna et al., 2018) to polarization-based (Frey et al., 2016; Avarvand et al., 2017). However, to the best of our knowledge, there are no studies of the perception of actual depth or of the direct brain-behavior relationship during actual depth perception. Stereoscopy creates the illusion of image depth by separating the visual inputs to the two eyes, thus possibly causing binocular vision stress due to the accommodation-vergence conflict, which can induce discomfort and visual fatigue (Hoffman et al., 2010; Kim & Lee, 2011; Chen, 2012; Chen et al., 2013; Malik et al., 2015; Frey et al., 2016). Actual depth, on the other hand, is free of these issues, since no conflict between accommodation and vergence is present. Owing to its high temporal resolution, electroencephalography (EEG) is a reliable method for measuring the interaction of human ergonomic properties with physical factors, especially in the field of visual system evaluation (Murata et al., 2005; Kim & Lee, 2011; Chen, 2012; Chen et al., 2013; Malik et al., 2015; Frey et al., 2016). Compared to behavioral measures, neurophysiological measures are less biased (Frey et al., 2016).
New approaches are being developed to avoid the forced separation of views between the two eyes and thereby improve user comfort and performance. These new displays aim to provide three-dimensional images on multiple focal planes, requiring no eyewear (Hoffman et al., 2010; Geng, 2013; Bader et al., 2016; Osmanis et al., 2018; Zhan et al., 2020). Specifically, image points can be shown in the physical space of optical elements in a time-multiplexed manner (Geng, 2013; Smalley et al., 2018; Osmanis et al., 2018).
Depth perception plays an essential role in visual search when locating objects in space or information in spatial images. It is crucial both in daily life and in professional tasks that involve qualitative or quantitative judgments about the third dimension of images. As new three-dimensional visualization systems are intended for everyday use, it is crucial to understand how their implementation affects user performance. Previous behavioral studies provided some experimental support for using volumetric visualization instead of stereoscopic images, reporting more accurate judgments of spatial relationships (Grossman & Balakrishnan, 2006) and faster information recognition (Bader et al., 2016). However, to our knowledge, no studies have yet assessed how neural activity changes in response to these new three-dimensional visualization methods.
The present exploratory study aimed to assess the EEG features of visual search in actual three-dimensional space. We hypothesized that finding the closest visual element would elicit less brain activity when the elements were presented as 3D volumetric images than when no depth difference existed between the elements. So far, EEG has been broadly employed to study visual search (Roberts et al., 2015; Van den Berg et al., 2016) and depth perception (Backus et al., 2001; Fischmeister & Bauer, 2006; Li et al., 2017; Pegna et al., 2018) with stereoscopic visualization; however, these findings are of limited applicability to understanding information processing during the viewing of 3D volumetric visualizations. In the current EEG study, an experiment was performed to investigate the electrophysiological indicators of differences between the perception of 3D and 2D images on a volumetric multiplanar display.