Mugs and Plants: Object Semantic Knowledge Alters Perceptual Processing With Behavioral Ramifications

Neural processing of objects with action associations recruits dorsal visual regions more than the neural processing of objects without such associations. We hypothesized that because the dorsal and ventral visual pathways have differing proportions of magno- and parvocellular input, there should be behavioral differences in perceptual tasks between manipulable and nonmanipulable objects. This hypothesis was tested in college-age adults across five experiments (Ns = 26, 26, 30, 25, and 25) using a gap-detection task, suited to the spatial resolution of parvocellular processing, and an object-flicker-discrimination task, suited to the temporal resolution of magnocellular processing. As predicted directly from the cellular composition of each pathway, a strong nonmanipulable-object advantage was observed in gap detection, and a small manipulable-object advantage was observed in flicker discrimination. Additionally, these effects were modulated by reducing object recognition through inversion and by suppressing magnocellular processing using red light. These results establish perceptual differences between objects dependent on semantic knowledge.

manipulation. Although both pathways contribute to visual perception, the demands of perceiving a specific object can differentially engage the two pathways such that the dorsal pathway provides greater contributions for visual processing of objects that are directly relevant for manipulation (e.g., hammer, plunger, or mug). Evidence for dynamic and biased recruitment of the visual pathways has been garnered from a wide range of techniques and paradigms. For example, when participants are presented with objects of various categories, including tools, places, animals, and faces, an increase in dorsal-pathway processing, specifically in the left ventral premotor and posterior parietal cortices, has been observed exclusively in response to tool presentation (Chao & Martin, 2000). Similar results evidencing a dorsal bias for manipulable objects are observed in functional MRI studies (Almeida et al., 2013; Chen et al., 2018; Mahon et al., 2007; Noppeney et al., 2006) and with various other paradigms, including interocular suppression (Fang & He, 2005) and continuous flash suppression (Almeida et al., 2008, 2010; but see Almeida et al., 2014; Sakuraba et al., 2012).
Whereas neurophysiological evidence has been convincing in showing selectivity for objects across the two streams, the impact of differential processing across the two pathways on object perception and subsequent behavior has not been characterized. Hypotheses regarding how dynamic recruitment of the visual pathways influences perception and behavior are based on an asymmetry in the cellular innervation of the two pathways that overlaps with the separate magnocellular and parvocellular channels identified in the anatomical architecture of visual processing (Baizer et al., 1991; Ferrera et al., 1992; Maunsell et al., 1990). The asymmetry in input endows each pathway with different response properties in accordance with cell stimulus preferences within the magno- and parvocellular channels. These channels originate from different types of ganglion cells within the retina and course separately through different layers of the lateral geniculate nucleus (Leventhal et al., 1981; Perry et al., 1984) to innervate separate layers of V1 (Blasdel & Lund, 1983). From V1, the parvocellular channel can be followed into V2 and V4 and then into the inferior temporal cortex, whereas the magnocellular channel can be traced into different regions of V2, through V3d and the middle temporal area (MT), and into the posterior parietal cortices (DeYoe & van Essen, 1988; Livingstone & Hubel, 1988).
Differing innervation of the two visual pathways leads to different response properties that convey information at different spatial and temporal resolutions. The heavily myelinated magnocellular channel is derived from the parasol ganglion cells, which have relatively large receptive fields spanning large regions of the retina (Maunsell et al., 1999; Nassi & Callaway, 2009). The features of the parasol retinal ganglion cells enable the dorsally biased magnocellular channel to encode information with higher temporal resolution than the parvocellular channel (Pokorny & Smith, 1997; but see Maunsell et al., 1999). Conversely, the parvocellular channel is derived from the midget retinal ganglion cells, which receive input primarily from cone receptors and have smaller receptive fields (Nassi & Callaway, 2009). The features of the midget retinal ganglion cells enable the ventrally biased parvocellular channel to encode information with higher spatial resolution than the magnocellular channel (Derrington & Lennie, 1984; Leonova et al., 2003; McAnany & Alexander, 2008).
The connections between the magnocellular channel and temporal resolution, and between the parvocellular channel and spatial resolution, have been used to study the neural basis of several behavioral effects, ranging from spatial attention (Yeshurun, 2004) to the effects of fear on visual perception (Bocanegra & Zeelenberg, 2009). Most relevant, the asymmetric contributions of the magno- and parvocellular channels to the dorsal and ventral pathways have been proposed as a mechanism for near-hand effects on perception (Chan et al., 2013; Gozli et al., 2012). In one such study, it was hypothesized that when participants' hands were near the stimuli, processing would be biased toward the dorsal pathway, which is more predominantly magnocellular, leading to enhanced sensitivity to temporal blinks. Conversely, when participants' hands were far from the stimuli, processing would be biased toward the ventral pathway, which is more predominantly parvocellular, leading to enhanced sensitivity to spatial gaps (Gozli et al., 2012). Consistent with differential engagement of the two streams, it was observed that placing hands near the stimuli increased participants' sensitivity to temporal blinks, whereas moving hands away from the screen increased sensitivity to spatial gaps.

Statement of Relevance

It is commonly thought that what we know can change how we see the world. For a human, one very important thing we can know about an object is whether it is a tool. In the brain, when we perceive a tool, such as a mug, our parietal cortex is more strongly recruited than when we see a nontool, such as a potted plant. In this research, we found that the difference in the parts of the brain that are recruited when we see a tool or a nontool directly impacts our perception. If we see a tool, which recruits the parietal cortex, we perceive this object with higher temporal resolution. In other words, we see tools faster than we see nontools. Conversely, when we see a nontool, there is a benefit in spatial resolution, so we see the details better. This work suggests that what you know changes what and how you see.
Knowing that semantic knowledge of manipulability biases processing to the dorsal stream and that the dorsal and ventral streams have different response profiles because of differential innervation by the magnocellular and parvocellular channels, we hypothesized that semantic knowledge of object manipulability should have consequences for perceptual processing. Thus, because semantic knowledge of an object's manipulability determines which pathway will be biased, we predicted that (a) manipulable objects that elicit a higher degree of magnocellularly biased dorsal processing would be processed with higher temporal resolution and (b) nonmanipulable objects that rely more on parvocellularly biased ventral processing would be processed with higher spatial resolution. Across five experiments, we tested these hypotheses by comparing spatial and temporal resolution across two object groups: manipulable and nonmanipulable objects.

Participants
All participants were recruited from The George Washington University's participant pool. All gave informed consent according to the university's institutional review board, were naive to the purpose of the experiments, and reported normal or corrected-to-normal vision. In Experiment 4, which utilized color stimuli, no participant reported color blindness.
For each experiment, sample sizes were chosen on the basis of behavioral studies demonstrating similar effects (i.e., Gozli et al., 2012), and participants were recruited in batches of six to 10 until at least 25 participants with accuracy above chance were accumulated. For Experiment 1, 26 undergraduate students (19 female; average age = 19.2 years; six left-handed) were recruited. No participant was removed from the analyses. For Experiment 2, 27 undergraduate students were recruited. Twenty-six participants' data (14 female; average age = 19.0 years; one left-handed) were analyzed; one participant was removed for below-chance accuracy in one of the conditions. For Experiment 3, 33 undergraduate students were recruited. Thirty participants' data (26 female; average age = 19.07 years; three left-handed) were analyzed; three participants were removed for below-chance accuracy in at least one of the conditions. For Experiment 4, 40 undergraduate students were recruited. Twenty-five participants' data (14 female; average age = 19.0 years; one left-handed) were analyzed; six participants were removed for not finishing the experiment, and nine were removed for having below-chance accuracy in at least one of the conditions. For Experiment 5, 26 undergraduate students were recruited. Twenty-five participants' data (18 female; average age = 19.5 years; none left-handed) were analyzed; one participant was removed for chance accuracy in all conditions.

Apparatus and stimuli
All experiments were presented on a 24-in. Acer GN246 HL monitor with a refresh rate of 144 Hz, positioned at a distance of 60 cm from the viewer in a dark room. The experiment was generated and presented using PsychoPy (Version 1.82).
The stimuli in Experiments 1, 2, 4, and 5 consisted of line drawings of real-world, everyday objects obtained from The Noun Project, an online repository of object icons and clip art (https://thenounproject.com). The object stimuli consisted of 10 line drawings of manipulable objects and 10 line drawings of nonmanipulable objects. The manipulable objects were a snow shovel, handsaw, plunger, screwdriver, hammer, wrench, knife, bottle opener, spatula, and mug. The nonmanipulable objects were a fire hydrant, picture frame, window, toilet, candle, garbage can, water fountain, potted plant, fan, and lamp. The stimuli were displayed as large as possible within a 4° × 4° area, and all stimuli were presented in black (hue, saturation, value [HSV] = 0, 0, 0) on a dark-gray background (HSV = 0, 0, 50). All objects are displayed in Figure 1. We controlled for low-level differences between objects by calculating the mean luminance, size, aspect ratio (i.e., a measure of elongation), and spatial frequency (i.e., the average distance from the origin of the points calculated from a 2D fast Fourier transform) for each object. Independent-samples t tests confirmed that there were no mean differences between object groups in luminance, t(18) = −0.960, p = .350; mean size, t(18) = 1.043, p = .311; aspect ratio, t(18) = 1.209, p = .242; or spatial frequency, t(18) = −0.155, p = .879. The width of the bottom line, on which the gap appeared, was also controlled for each object; an independent-samples t test showed no significant difference between manipulable and nonmanipulable objects in bottom-line width, t(18) = 0.922, p = .369.
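For concreteness, the low-level metrics described above can be sketched as follows. This is a minimal illustration rather than the authors' analysis code: the function name, the thresholding that separates the drawing from its background, and the exact spatial-frequency formula (amplitude-weighted mean distance from the origin of the centered 2D FFT spectrum) are our assumptions.

```python
import numpy as np

def stimulus_metrics(img):
    """Low-level metrics for a grayscale image in [0, 1] with a dark
    drawing on a light background. Names and threshold are illustrative."""
    mask = img < 0.5                         # assumed: object pixels are dark
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    luminance = img.mean()                   # mean luminance of the patch
    size = int(mask.sum())                   # number of object pixels
    aspect_ratio = max(height, width) / min(height, width)  # elongation
    # Spatial frequency: amplitude-weighted mean distance from the origin
    # of the centered 2D FFT spectrum (one plausible reading of the text).
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    yy, xx = np.indices(spectrum.shape)
    dist = np.hypot(yy - cy, xx - cx)
    spatial_freq = float((dist * spectrum).sum() / spectrum.sum())
    return luminance, size, aspect_ratio, spatial_freq
```

Group comparisons like the reported independent-samples t tests could then be run on these per-object values (e.g., with `scipy.stats.ttest_ind`).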
For Experiment 3, the stimuli consisted of images of real-world, everyday objects obtained from Google image searches and manipulated in GIMP (Version 2.10.4). The object stimuli consisted of 10 images of the same manipulable objects and 10 images of the same nonmanipulable objects used in Experiments 1, 2, 4, and 5. The stimuli were displayed as large as possible in a 4° × 4° area, and all stimuli were presented in color on a dark-gray background (HSV = 0, 0, 50). All objects are displayed in Figure 1.

Task
In Experiment 1, each trial began with a single central fixation point, which subtended a visual angle of 1° × 1°. The central fixation point was rendered in white (HSV = 0, 0, 100). After 1,000 ms of fixation presentation, a single object line drawing appeared 4° to the left or right of the fixation point. The side of presentation was counterbalanced such that each participant saw an equal number of stimuli on the left and on the right. The stimuli were displayed as large as possible within a 4° × 4° area, and all stimuli were presented in black (HSV = 0, 0, 0) on a dark-gray background (HSV = 0, 0, 50).
In one half of the experiment, the object appeared with or without a spatial gap in the center of the bottom line of the object. The objects had an equal probability of having a spatial gap or not having a spatial gap. The objects appeared for 100 ms, and participants were to report the presence of the spatial gap by pressing the right control button (present) or the left control button (absent) on the keyboard. Feedback was presented on incorrect trials only.
A staircase procedure was used to calibrate the size of the gap to each object to ensure that the gap was equally perceptible across objects regardless of each object's individual characteristics (Pelli & Bex, 2013). Sixteen undergraduate students (13 female; average age = 18.8 years; four left-handed) from The George Washington University participated in exchange for course credit. In each trial, the object was presented with or without the spatial gap, which began at a size of 0.025° of visual angle. If the gap was detected correctly on two consecutive trials, the gap decreased by one 0.005° step. If the gap was missed, or a false alarm was made in the absence of a gap, on two consecutive trials, the gap increased by one 0.005° step. The gap was calibrated until 20 trials had been completed or until the staircase reversed direction (two correct followed by two incorrect, or vice versa) three times. The gap size for each object is displayed in Figure 1.
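The calibration logic can be sketched as below. This is one plausible reading of the text (two consecutive correct responses shrink the gap; two consecutive errors enlarge it, matching the stated reversal definition); the function name, the floor on the gap size, and the stopping bookkeeping are our assumptions.

```python
def calibrate_gap(responses, start=0.025, step=0.005,
                  max_trials=20, max_reversals=3):
    """Staircase sketch: `responses` holds True (correct) or False
    (miss/false alarm) per trial. Returns the final gap size in degrees."""
    gap = start
    run_correct = run_error = 0
    reversals = 0
    last_move = None                  # 'down' (shrink) or 'up' (grow)
    for trial, correct in enumerate(responses, start=1):
        if correct:
            run_correct, run_error = run_correct + 1, 0
            if run_correct == 2:      # two correct in a row: shrink gap
                gap = max(gap - step, step)   # assumed floor of one step
                run_correct = 0
                if last_move == 'up':
                    reversals += 1
                last_move = 'down'
        else:
            run_error, run_correct = run_error + 1, 0
            if run_error == 2:        # two errors in a row: enlarge gap
                gap += step
                run_error = 0
                if last_move == 'down':
                    reversals += 1
                last_move = 'up'
        if trial >= max_trials or reversals >= max_reversals:
            break
    return round(gap, 3)
```

For example, two correct trials from the starting size would yield a gap of 0.020°, and two errors would yield 0.030°.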
In the other half of the experiment, the object appeared with or without a temporal blink. The objects had an equal probability of having a temporal blink or not having a temporal blink. The blinks were 16 ms long. The object first appeared for 96 ms, followed by the temporal blink, and the object then appeared a second time for 32 ms. Participants were asked to report the presence of the temporal blink by pressing the right control key on the keyboard and to report the absence of the temporal blink by pressing the left control key. Feedback was presented on incorrect trials only.
The presentation of stimulus type was counterbalanced across participants such that half of the participants were first presented with spatial gaps and half of the participants were first presented with temporal blinks. Participants completed a total of 720 trials broken into six blocks, 360 trials of spatial gaps and 360 trials of temporal blinks.
Experiment 2 was identical to Experiment 1 except that each stimulus was presented upside down. The spatial gap was presented in the same place as in Experiment 1, relative to the object.
Experiment 3 was identical to Experiment 1 except that more realistic images were used instead of line drawings. A staircase procedure, similar to the one used with the line-drawing stimuli, was used to calibrate the size of the gap to each object to ensure that the gap would be equally perceptible across each object regardless of each object's individual characteristics (Pelli & Bex, 2013). Twenty-three undergraduate students (18 female; average age = 18.9 years; four left-handed) from The George Washington University participated in exchange for course credit, and the procedure from Experiment 1 was used. The gap size for each object is displayed in Figure 1.
Experiment 4 was similar in trial structure to the temporal-blink condition of Experiment 1 except that instead of temporal-blink detection, each trial had a temporal blink that was either short or long in duration. The objects had an equal probability of having a short or a long temporal blink. In the short-blink condition, the object first appeared for 96 ms, followed by a 16-ms temporal blink, and then the object appeared a second time for 32 ms. In the long-blink condition, the object first appeared for 64 ms, followed by a 48-ms temporal blink, and then the object was presented a second time for 32 ms. Participants were to report a short temporal blink by pressing the "c" key on the keyboard and a long temporal blink by pressing the "m" key. Feedback was presented exclusively on incorrect trials. Participants completed a total of 720 trials broken into six blocks, 360 trials of short-blink duration and 360 trials of long-blink duration.
Experiment 5 was identical to the spatial-gap procedure of Experiment 1 except that the background color was either green (HSV = 110, 30, 90) or red (HSV = 5, 30, 90). The background color was counterbalanced across participants such that half of the participants would first be presented with the green background and half of the participants would first be presented with the red background. Participants completed a total of 720 trials broken into six blocks, 360 trials of green backgrounds and 360 trials of red backgrounds.

Experiment 1
The spatial and temporal paradigms used d′ as a measure of perceptual sensitivity (Fig. 2). A two-way repeated measures analysis of variance (ANOVA) was conducted on d′ with object group (manipulable, nonmanipulable) and task type (gap, blink) as within-subjects variables (Fig. 2c, left). The ANOVA revealed no significant main effect of object group, F(1, 25) = 2.342, p = .138, ηp2 = .086, but a significant main effect of task type, F(1, 25) = 50.953, p < .001, ηp2 = .671; d′ was higher for blinks than for gaps, Mgaps = 1.926, 95% confidence interval (CI) = [1.673, 2.179]; Mblinks = 2.967, 95% CI = [2.680, 3.254]. Importantly, and consistent with the hypothesis of differential engagement of the two pathways depending on object utility, results revealed a significant two-way interaction between object group and task type, F(1, 25) = 6.772, p = .015, ηp2 = .213, driven by the difference between object groups in the gap condition, F(1, 25) = 9.888, p = .004, ηp2 = .283; nonmanipulable objects had a higher d′ for gaps than manipulable objects (Mnonmanipulable = 2.021, 95% CI = [1.756, 2.285]; Mmanipulable = 1.831, 95% CI = [1.589, 2.073]). The interaction effect and the driving simple main effect were consistent with the prediction that nonmanipulable objects, given their higher reliance on the ventral pathway, should yield higher sensitivity in the detection of spatial gaps than manipulable objects. Notably, the expected higher sensitivity to temporal blinks in the manipulable object set was not supported. The possibility of an advantage for manipulable objects in temporal sensitivity is further examined in Experiment 4.

Fig. 2. In both experiments, participants maintained fixation on the center cross. An object appeared to the left or right of fixation. For the spatial-gap paradigm (a), participants pressed a key to indicate whether the bottom line of the presented object contained a gap. For the temporal-blink paradigm (b), participants pressed a key to indicate whether the object flickered. At the end of every trial, participants were given feedback about whether their response was correct or incorrect. The graphs (c) show perceptual sensitivity (d′) for each combination of object group (manipulable, nonmanipulable) and task type (spatial gap, temporal blink), separately for Experiment 1 (left), in which objects were presented upright, and Experiment 2 (right), in which objects were presented upside down. Dots represent individual data, the bars show the means, and the error bars represent the standard error.
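For reference, the d′ measure used throughout can be computed from per-condition trial counts as in the sketch below. The log-linear correction for extreme rates is our assumption, since the article does not state how hit or false-alarm rates of 0 or 1 were handled.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (add 0.5 to each count) guards against
    rates of exactly 0 or 1; this correction is an assumption."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

A participant at chance (equal hit and false-alarm rates) yields d′ = 0, and d′ grows as hits rise relative to false alarms.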

Experiment 2
To further probe the hypothesis that semantic knowledge of object utility (manipulable or nonmanipulable) biases the pathway that will ultimately process the object, we designed Experiment 2 to demonstrate that the hypothesized pathway biasing will not occur if the semantic identity of the objects is obscured (Firestone & Scholl, 2016). The same paradigm and objects from Experiment 1 were used, but the objects were inverted (upside down). The inversion preserved the low-level features of each object while impairing participants' ability to rapidly recognize the objects and access their semantic knowledge of the objects' function. Inversion has been found to interfere with recognition of faces and objects (Diamond & Carey, 1986). Following the hypothesis that semantic knowledge of objects' utility drives the perceptual difference between manipulable and nonmanipulable objects, we predicted that object inversion should reduce the difference between manipulable and nonmanipulable objects. A two-way repeated measures ANOVA was conducted on d′ with object group (manipulable, nonmanipulable) and task type (gap, blink) as within-subjects variables. The ANOVA revealed a significant main effect of object group, F(1, 25) (Fig. 2c, right). There was also a significant main effect of task type, F(1, 25) = 82.298, p < .001, ηp2 = .767; d′ was higher for blinks than for gaps (Mgaps = 1.782, 95% CI = [1.476, 2.088]; Mblinks = 3.023, 95% CI = [2.723, 3.320]). As predicted, there was no significant interaction between object group and task type, F(1, 25) = 0.508, p = .483, ηp2 = .020.
To assess the nonsignificant interaction effect, we conducted a Bayes factor (BF) analysis using the Bayesian repeated measures ANOVA in JASP (van den Bergh et al., 2020) comparing the posterior probability of a model with the main effects of task type and object group but no interaction term as the null hypothesis (H 0 ) to the posterior probability of a full model with the main effects and the interaction term as the alternative hypothesis (H 1 ). The BF analysis evaluating the nonsignificant interaction effect yielded a BF 01 of 4.83, indicating substantial evidence for the null hypothesis that does not include the interaction (Kass & Raftery, 1995).
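For readers reproducing such model comparisons outside JASP, a Bayes factor of this kind can be approximated from the two models' BIC values (Wagenmakers, 2007). The sketch below is illustrative only and is not the analysis the article performed, which used JASP's Bayesian repeated measures ANOVA.

```python
import math

def bf01_from_bic(bic_h0, bic_h1):
    """BIC approximation to the Bayes factor in favor of H0:
    BF01 ~= exp((BIC_H1 - BIC_H0) / 2) (Wagenmakers, 2007).
    Inputs are the BIC values of the null and alternative models."""
    return math.exp((bic_h1 - bic_h0) / 2.0)
```

Equal BICs give BF01 = 1 (no evidence either way); a BIC advantage for the model without the interaction term yields BF01 > 1, as in the reported BF01 of 4.83.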
Last, in order to demonstrate statistically that the results of the inversion experiment are indeed different from those observed in Experiment 1, we subjected the data to a mixed ANOVA with object orientation (upright, inverted) as a between-subjects variable and object group and task type as within-subjects factors. The ANOVA revealed a significant three-way interaction between object group, task type, and orientation, F(1, 50) = 5.381, p = .024, ηp2 = .097; as predicted, inversion significantly reduced the effect.

Experiment 3
Although Experiment 1 provides evidence for biased engagement of the ventral pathway for nonmanipulable-object perception in a spatial task, it could be argued that despite careful low-level feature controls (e.g., luminance, size, elongation, spatial frequency), an uncontrolled low-level difference between manipulable and nonmanipulable objects is responsible for the nonmanipulable-object advantage in the spatial-gap task.
Experiment 3 used the same paradigm as Experiment 1 except that the line drawings were replaced by real-world images of corresponding objects (e.g., a line drawing of a candle was replaced by a picture of a candle; Fig. 1b). The prediction remained the same as in the original experiment: If object semantic knowledge determines which visual pathway object processing is biased toward, then nonmanipulable objects will be biased toward the ventral pathway, leading to higher sensitivity (as measured by d′) in the spatial-gap task. This would replicate the pattern of performance observed in Experiment 1. A two-way repeated measures ANOVA was conducted on d′ with object group (manipulable, nonmanipulable) and task type (gap, blink) as within-subjects variables. The ANOVA revealed a significant main effect of object group, F(1, 29) = 28.211, p < .001, ηp2 = .493, with nonmanipulable objects having higher sensitivity than manipulable objects (Mnonmanipulable = 2.361, 95% CI = [2.054, 2.668]; Mmanipulable = 2.047, 95% CI = [1.773, 2.320]; Fig. 3). There was also a significant main effect of task type, F(1, 29) = 8.413, p = .007, ηp2 = .225, with blinks having a higher average sensitivity than gaps (Mgaps = 1.998, 95% CI = [1.758, 2.238]; Mblinks = 2.410, 95% CI = [2.068, 2.751]). Importantly, the two-way interaction between object group and task type was also significant, F(1, 29) = 13.061, p = .001, ηp2 = .311, driven by a simple main effect in the gap condition, F(1, 29) = 46.424, p < .001, ηp2 = .616; nonmanipulable objects had a higher d′ for gaps than did manipulable objects (Mnonmanipulable = 2.238, 95% CI = [1.982, 2.494]; Mmanipulable = 1.758, 95% CI = [1.535, 1.981]).
In addition to serving as a low-level control and a further test of the hypothesis, these results replicated those of Experiment 1, providing strong additional support for the hypothesis that the perceptual differences between manipulable objects and nonmanipulable objects are due to the semantic content of the objects.

Experiment 4
Although the first three experiments yielded strong evidence that gap detection is better for nonmanipulable objects than for manipulable objects, perhaps resulting from a bias toward the ventral pathway, manipulable objects did not elicit an advantage in temporal resolution, failing to provide evidence of a dorsal-pathway bias for manipulable objects. One explanation for the null effect is that the temporal-gap-detection task could also be construed as an abrupt-onset-detection task because of the way the object suddenly reappeared after the temporal blink. Abrupt onsets are known to be highly salient and easily detectable (Yantis & Jonides, 1984) and evoke strong activity in the lateral intraparietal area, a known attentional area with strong connections to the superior colliculus (Kusunoki et al., 2000). The strong connection between the lateral intraparietal area and the superior colliculus provides a mechanism by which abrupt onsets may bypass the parvo- and magnocellular visual-pathways framework that we aimed to test, thereby collapsing any differences in d′ observed for manipulable and nonmanipulable objects. Another possible explanation for the null effect is that the temporal-gap-detection task was too easy (a ceiling effect), as evidenced by the high d′ values in the temporal task. In order to have a task that is more difficult and more specifically targeted to the high temporal resolution of the magnocellular channel, we redesigned the blink paradigm used in Experiments 1 through 3 as a discrimination task.¹ Objects were presented for 80 ms, removed from the screen for either a 16-ms or a 48-ms blink, and redisplayed for 48 ms (Fig. 4). Participants' task was to indicate whether the blink duration was short or long. We calculated d′ with short blinks scored as hits or misses and long blinks scored as correct rejections or false alarms.
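The scoring rule for this discrimination task can be made explicit with a small sketch; the labels are ours, and treating "short" as the signal follows the description above.

```python
def classify(blink, response):
    """Map one blink-duration trial to a signal-detection outcome,
    with 'short' treated as the signal (per the Experiment 4 scoring)."""
    if blink == 'short':
        return 'hit' if response == 'short' else 'miss'
    return 'correct rejection' if response == 'long' else 'false alarm'
```

Tallying these four outcomes over trials then feeds the usual d′ formula, z(hit rate) − z(false-alarm rate).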
The prediction was that manipulable objects should have a higher d′ than nonmanipulable objects because of the increased magnocellular input, and therefore temporal resolution, of the dorsal pathway. A t test comparing d′ between manipulable and nonmanipulable objects (Mnonmanipulable = 1.562, 95% CI = [1.354, 1.770]; Mmanipulable = 1.620, 95% CI = [1.385, 1.855]) revealed no significant effect, t(48) = −0.362, p = .719.
These results, specifically the simple main effect in the short-gap condition with manipulable objects eliciting shorter response times (RTs) than nonmanipulable objects, lend some support to our hypothesis that manipulable-object processing would elicit higher temporal resolution than nonmanipulable-object processing. Although this result is not as strong as the observed benefit for nonmanipulable objects in spatial resolution, nor is it manifested in sensitivity, it does suggest that an advantage in temporal resolution is indeed present for manipulable objects. Investigating the exact nature of this temporal-resolution advantage may be a fruitful direction for future studies.

Experiment 5
The last test of our hypotheses was derived from the neurophysiological differences between the dorsal and ventral pathways. It was reasoned that if the differences in spatial-gap sensitivity for nonmanipulable objects are mechanistically derived from the magnocellular and parvocellular dichotomy of the two visual pathways, then the effect should be modulated through manipulation of the processing within the pathways. Ambient red light has been shown to suppress activity in the magnocellular channel because of the large number of visually responsive cells with red-inhibitory surrounds in their receptive fields (Wiesel & Hubel, 1966) and has been used to demonstrate the contribution of magnocellular processing in other behavioral effects, such as fear processing and near-hand effects (Abrams & Weidler, 2014; West et al., 2010). To test our hypothesis that the spatial-resolution difference between manipulable and nonmanipulable objects observed in Experiments 1 and 3 was due to the differential input of the magnocellular channel to the two visual streams, we used the spatial-resolution paradigm from Experiment 1 and manipulated the background color to vary between green and red (Fig. 5a). Because of the suppression of the magnocellular channel by red light, it was predicted that the color of the background should modulate the perceptual difference in spatial-gap detection observed in Experiment 1. A two-way repeated measures ANOVA was conducted on d′ with object group (manipulable, nonmanipulable) and background color (green, red) as within-subjects variables. The ANOVA revealed a significant main effect of object group, F(1, 25), a second replication of the spatial effect seen in Experiments 1 and 3.
There was no significant main effect of background color, F(1, 25) = 2.138, p = .156, ηp2 = .079, but, as predicted, there was a significant two-way interaction between object group and background color, F(1, 25) = 4.444, p = .045, ηp2 = .151; the effect was increased with the red background. It was hypothesized that if the perceptual differences between manipulable and nonmanipulable objects are mechanistically derived from the magno- and parvocellular processing in the two visual pathways, then suppression of magnocellular processing with red light should modulate the effect. The results of Experiment 5 supported this hypothesis by demonstrating an increase in the perceptual difference between manipulable and nonmanipulable objects under red light.

Discussion
We hypothesized that manipulable and nonmanipulable objects, because of differential recruitment of the visual pathways, would elicit perceptual differences stemming from the characteristic spatial and temporal resolutions associated with the magno- and parvocellular inputs.
In five experiments, we found strong evidence in support of behavioral consequences driven by physiological and anatomical differences of the two visual pathways, and we argue that semantic knowledge of object manipulability guides processing along a particular pathway. If an object has a strong action association, processing is largely determined by activity in the dorsal pathway that is not found with objects that lack strong action association (Chao & Martin, 2000; Mahon et al., 2007; Noppeney et al., 2006). The increased level of activity in the parietal regions endows the perception of action-associated objects with greater access to the magnocellular channel that preferentially courses through the dorsal pathway. Without the enhanced dorsal-pathway activity evoked by action associations, object processing is more dependent on the ventral pathway, which has a higher ratio of parvocellular-channel input than does the dorsal pathway (Baizer et al., 1991; Ferrera et al., 1992; Maunsell et al., 1990). Because of the differential magno- and parvocellular input to the two pathways, we predicted that the perception of objects with strong action associations would result in increased temporal resolution and the perception of objects without such associations would result in increased spatial resolution. In the experiments reported here, we found strong evidence that objects are perceived with different spatial resolutions and some evidence that objects are perceived with different temporal resolutions, depending on object semantic knowledge of manipulability. In two follow-up control experiments, further evidence was provided in support of the hypothesis that the difference between manipulable and nonmanipulable objects in the spatial task was driven by the semantic knowledge of the objects rather than possible low-level visual features.
Namely, it was observed that the effect is curtailed when access to object semantic knowledge is impeded through inversion and that the perceptual differences between manipulable and nonmanipulable objects replicate with a separate set of more realistic object images (with different low-level properties). Last, to test whether the differing proportions of magno- and parvocellular input are responsible for the perceptual differences observed in our first four experiments, ambient red light was used to suppress activity of the magnocellular channel. This manipulation increased the perceptual difference between manipulable and nonmanipulable objects in the spatial task.

[Figure caption: On each trial (a), participants maintained fixation on the center cross. An object then appeared to the left or right of fixation. As in Experiment 1, participants were asked to detect the presence of a small gap in the bottom outline of the object, after which they received feedback about whether their response was correct. The background color (red or green) was manipulated. Perceptual sensitivity (d′) is shown for each combination of object group (manipulable, nonmanipulable) and background color (green, red). Dots represent individual data, bars show the means, and brackets show the standard error.]
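The perceptual sensitivity (d′) reported for the gap-detection task is the standard signal-detection measure, d′ = z(hit rate) − z(false-alarm rate). As an illustration only (not the authors' analysis code), the following minimal Python sketch computes d′ from trial counts; the log-linear correction and the example counts are assumptions for demonstration, not values from the study.

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the rates
    strictly between 0 and 1, avoiding infinite z-scores when a
    participant has a perfect or empty cell.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 45 hits / 5 misses on gap-present trials,
# 10 false alarms / 40 correct rejections on gap-absent trials.
print(d_prime(45, 5, 10, 40))
```

Higher d′ indicates better discrimination of gap-present from gap-absent trials independent of response bias, which is why it is the natural dependent measure for comparing spatial resolution across object groups.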
Our work has a few limitations regarding generalizability, stemming from two sources: the participants and the stimuli. Our participants were all college-age adults, with an average age of 19, sampled from undergraduate psychology courses at The George Washington University. There is no reason to suspect that the observed findings are specific to this population, but further research will be needed to directly probe this question. Additionally, our stimuli were chosen so as to have maximal experimental control while still sampling some variety of manipulable and nonmanipulable objects. Further research could examine whether the effects found here generalize not only to other exemplars of manipulable and nonmanipulable objects but also to objects with varying degrees of manipulability. Furthermore, line drawings were used in all of our experiments except Experiment 3. Although the findings of Experiment 3 suggest that the results are not unique to line drawings and do generalize to more realistic images, it is not yet known whether they will generalize to real-world scenarios.
On the basis of the evidence provided, we argue that semantic knowledge of object manipulability, as defined by strong associations with an action appropriate for the item, generates the perceptual differences between manipulable and nonmanipulable objects. Previous studies have shown that manipulable objects evoke a larger degree of dorsal-pathway activity than do nonmanipulable objects (Almeida et al., 2013; Chao & Martin, 2000). However, the origin of this difference, and of the consequent perceptual differences between manipulable and nonmanipulable objects, is poorly understood.
We propose that the differences we observe between manipulable and nonmanipulable objects share a mechanism with the near-hand effect reported by Gozli et al. (2012), in which stimuli presented proximally to the participants' hands evoked a benefit in temporal resolution, similar to the manipulable objects used in the experiments presented here, and stimuli presented distally from the participants' hands evoked a benefit in spatial resolution, similar to the nonmanipulable objects. A possible mechanism connecting the near-hand effect and the manipulable versus nonmanipulable effect presented here derives from the bimodal cells in the anterior parietal cortex (Graziano & Gross, 1993). These bimodal cells have a somatosensory receptive field covering a part of the hand (a property possibly underlying the near-hand effect) and a visual receptive field corresponding to the region of the visual field near the associated hand area. When an object without a strong action association is shown, the activity of the bimodal cells is unaffected. After the organism has learned to manually manipulate the object, however, the bimodal cells respond to visual presentation of the object even without somatosensory input (Zhou & Fuster, 2000). Thus, the bimodal cells, with their responsiveness to hand location and their ability to become visually activated, may be a driver of the dorsal pathway's object selectivity (Freud et al., 2016; Kastner et al., 2017; Vaziri-Pashkam & Xu, 2017), leading to the dorsal-pathway bias evoked by manipulable objects (Chao & Martin, 2000). The object-specific neural activity in the dorsal pathway, divorced from its need for somatosensory input and largely derived from magnocellular input, could then be read out during an identification task, leading to heightened temporal precision for manipulable objects.
Taken together, our results demonstrate that object semantic knowledge determines the processing bias of the object and evokes subsequent behavioral repercussions for perception and for action. This finding, in conjunction with the finding that manipulability has an influence on how attention interacts with object perception (Gomez et al., 2018), may point to manipulability, supported by a detailed neural mechanism, as an exemplar of cognitive penetrability. Additionally, our work underscores the need for careful consideration of object semantic knowledge, and its subsequent possible bias to either the dorsal or ventral pathway, when object images are used not only in psychological research but also in applied settings, such as display and product designs, environmental design, and the design of various cognitive assistants.

Transparency
Action Editor: Karen Rodrigue
Editor: Patricia J. Bauer

Author Contributions
D. Dubbelde and S. Shomstein designed the study. Testing, data collection, and data analysis were performed by D. Dubbelde under the supervision of S. Shomstein. D. Dubbelde and S. Shomstein wrote the manuscript. Both authors approved the final manuscript for submission.

Declaration of Conflicting Interests
The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.

Funding
This work was supported by National Science Foundation Grants No. BCS-1921415 and No. BCS-2022572 to S. Shomstein. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Open Practices
All data and materials have been made publicly available via OSF and can be accessed at https://osf.io/fbm7s/. The design and analysis plans for the experiments were not preregistered. This article has received the badges for Open Data and Open Materials. More information about the Open Practices badges can be found at http://www.psychologicalscience.org/publications/badges.

Note
1. We thank Ed Awh, from The University of Chicago, for the helpful suggestion.