Abstract
The cognitive and neural bases of visual perception are typically studied using pictures rather than real-world stimuli. Unlike pictures, real objects are actionable solids that can be manipulated with the hands. Recent evidence from human brain imaging suggests that neural responses to real objects differ from responses to pictures; however, little is known about the neural mechanisms that drive these differences. Here, we tested whether brain responses to real objects versus pictures are differentially modulated by the “in-the-moment” graspability of the stimulus. In human dorsal cortex, electroencephalographic responses show a “real object advantage” in the strength and duration of mu (μ) and low beta (β) rhythm desynchronization—well-known neural signatures of visuomotor action planning. We compared desynchronization for real tools versus closely matched pictures of the same objects, when the stimuli were positioned unoccluded versus behind a large transparent barrier that prevented immediate access to the stimuli. We found that, without the barrier in place, real objects elicited stronger μ and β desynchronization compared to pictures, both during stimulus presentation and after stimulus offset, replicating previous findings. Critically, however, with the barrier in place, this real object advantage was attenuated during the period of stimulus presentation, whereas the amplification in later periods remained. These results suggest that the “real object advantage” is driven initially by immediate actionability, whereas later differences perhaps reflect other, more inherent properties of real objects. The findings showcase how the use of richer multidimensional stimuli can provide a more complete and ecologically valid understanding of object vision.
INTRODUCTION
The cognitive and neural mechanisms of object vision in humans have been studied predominantly using artificial proxies for reality, typically in the form of 2-D pictures of objects. Experimenters have relied on pictures as stimuli because they are convenient to obtain, easy to manipulate for lower-level parameters such as luminance and size, and straightforward to control temporally. Nevertheless, if the ultimate aim of this research is to understand naturalistic vision, it is crucial that we test whether the behaviors and underlying neural mechanisms triggered by artificial proxies for reality are comparable with those of their real-world counterparts (Snow & Culham, 2021).
There are philosophical and empirical reasons to expect that viewing pictures may engage different cognitive and neural processes than viewing real-world objects (Snow & Culham, 2021; Freud, Behrmann, & Snow, 2020; Erlikhman, Caplovitz, Gurariy, Medina, & Snow, 2018). Perhaps most importantly, unlike pictures, real objects are tangible solids that can be grasped and manipulated with the hands. Actionability—the potential to act meaningfully with real objects, such as tools—develops rapidly during the early years of life and is critical for survival (Simcock & DeLoache, 2008; Pierroutsakos & DeLoache, 2003). Although pictures of objects, especially tools, can be associated with actions, or concepts of actions (sometimes referred to as “affordances”), the pictures themselves are not inherently graspable and they cannot be used to manipulate the environment in the same way that real objects can.
Behavioral and neuroimaging studies have begun to compare real object versus picture processing while carefully controlling important stimulus attributes such as appearance and timing (Romero & Snow, 2019). Emerging evidence from these studies suggests that real objects are processed differently than their picture proxies, and these differences are evident across a range of cognitive domains. For example, compared to pictures, real objects have been shown to bias perception (Holler, Fabbri, & Snow, 2020; Romero, Compton, Yang, & Snow, 2018), capture attention (Gomez, Skiba, & Snow, 2018), bolster memory (Snow, Skiba, Coleman, & Berryhill, 2014), alter gaze patterns in infants (Sensoy, Culham, & Schwarzer, 2021), facilitate recognition in neuropsychological patients (Holler, Behrmann, & Snow, 2019; Farah, 2004; Turnbull, Driver, & McCarthy, 2004; Chainay & Humphreys, 2001; Riddoch & Humphreys, 1987; Ratcliff & Newcombe, 1982), and modulate higher-level cognitive processes such as valuation (Romero et al., 2018; Bushong, King, Camerer, & Rangel, 2010), social cognition (Risko, Laidlaw, Freeth, Foulsham, & Kingstone, 2012; Kingstone, 2009), and executive function (Beaucage, Skolney, Hewes, & Vongpaisal, 2020).
Stimulus format influences not only cognition but also goal-directed actions. For example, when adult observers are asked to grasp 2-D images of objects, grip aperture varies relative to the size of the stimulus, in agreement with the psychophysical principles of Weber's law (Ozana, Namdar, & Ganel, 2020; Ozana & Ganel, 2019; Hosang, Chan, Jazi, & Heath, 2016; Holmes & Heath, 2013). The same is not the case, however, for real objects for which grip aperture conforms to the absolute metrics of the stimulus, irrespective of its size, thus evading perceptual biases and producing more accurate and efficient visuomotor performance (Heath, Mulla, Holmes, & Smuskowitz, 2011; Ganel, Chajut, & Algom, 2008). Evidence from fMRI also shows that grasping of real objects leads to different patterns of brain activation compared to picture grasping (Freud et al., 2018).
Differences in cognition and action between real objects and pictures are, in fact, apparent early during development, and they appear to be closely related to children's sensorimotor experience with objects. For example, 7-month-old infants show a visual preference for real toys versus 2-D pictures of toys (Gerhard, Culham, & Schwarzer, 2016; DeLoache, Strauss, & Maynard, 1979), and this preference is related to the amount of time spent handling the object: The more infants tend to move their fingers over a toy during a manual exploration task, the stronger the real object preference (Gerhard, Culham, & Schwarzer, 2021). Although infants at 9 months old often try to grasp pictures of toys off the page (DeLoache, Pierroutsakos, Uttal, Rosengren, & Gottlieb, 1998), more realistic pictures tend to elicit more attempts at interaction than do less realistic pictures (Pierroutsakos & DeLoache, 2003), and as infants gain experience with pictures, they become less inclined to grasp depicted objects and more inclined to make communicative gestures, like pointing actions, toward them (Pierroutsakos & Troseth, 2003). Thus, sensorimotor interaction with real objects helps infants to learn and understand the differences between real objects and pictures, despite resemblances in their appearance (Gerhard et al., 2021; Sensoy et al., 2021; Adolph, Eppler, & Gibson, 1993; Gibson, 1988).
These unique effects of real objects (vs. pictures) on behavior and brain responses have sometimes been referred to as a “real-exposure effect” (Bushong et al., 2010) or “real object advantage” (Gerhard et al., 2021; Snow & Culham, 2021; Holler et al., 2019; Chainay & Humphreys, 2001). One potential mechanism for the real object advantage, which follows from the behavioral and neuroimaging evidence reviewed above, and which stems from the fact that real objects are actionable whereas pictures are not, is that real objects invoke visuomotor brain networks involved in automatic action planning more so than images do (Marini, Breeding, & Snow, 2019; Gomez & Snow, 2017; Gallivan, Cavina-Pratesi, & Culham, 2009). Previous studies have established that regions of the human dorsal cortex are specialized for processing pictures of action-relevant stimuli, such as graspable objects and tools, without the performance of any action (e.g., Matić, Op de Beeck, & Bracci, 2020; Cardellicchio, Sinigaglia, & Costantini, 2011; Konen & Kastner, 2008; Valyear, Cavina-Pratesi, Stiglick, & Culham, 2007; Creem-Regehr & Lee, 2005; Chao & Martin, 2000). The idea that real objects might invoke these networks more so than images do is supported by behavioral evidence from studies that have manipulated the actionability of real objects by placing a transparent barrier between the observer and the stimulus. For example, Gomez et al. (2018) found that real objects had a stronger influence on attention compared to matched 2-D and 3-D stereoscopic pictures of the same items, but only when the stimuli were accessible for grasping; the effect disappeared when the stimuli were presented out of reach or behind a transparent barrier. Similarly, imposition of a barrier eliminates the increased monetary value ascribed to real objects, such as snack foods and trinkets, versus their pictures (Bushong et al., 2010).
Although previous fMRI (Freud et al., 2018; Snow et al., 2011) and electroencephalography (EEG; Marini et al., 2019) studies have revealed differences in brain responses to real objects versus matched pictures, none of these studies explored whether brain responses are modulated by imposition of a barrier. EEG is an ideal technique to address this question because it can provide fine-grained information about the time course of cortical dynamics of networks recruited during automatic action planning, as measured by stimulus-driven electrical changes at the surface of the scalp (Makeig et al., 2002). In particular, the EEG signal can be decomposed to reveal frequency-specific changes associated with different cognitive processes (Başar, Başar-Eroğlu, Karakaş, & Schürmann, 1999; Klimesch, 1999), one of which is desynchronization of the mu (μ) rhythm (8–13 Hz)—a signal associated with the transformation of visual object information into action representations. Desynchronization, including that of α, μ, and β rhythms, is a correlate of activated cortical networks (Pfurtscheller, 2001) and is related to fMRI response amplitude (Laufs et al., 2003). The μ rhythm originates in the primary sensorimotor and premotor cortices and is recorded over central electrodes (Pfurtscheller, Neuper, Andrew, & Edlinger, 1997). Although the μ rhythm is most commonly defined as being concentrated around 8–13 Hz in the alpha band, it is sometimes characterized as having a second component (15–30 Hz) in the beta band (Angelini et al., 2018; Festante et al., 2018; Pineda, 2005). Event-related desynchronization (ERD) of both μ and low β rhythms is elicited during actual and observed actions with the hands (Festante et al., 2018; Bizovičar, Dreo, Koritnik, & Zidar, 2014; Zaepffel, Trachel, Kilavik, & Brochier, 2013; Hari, 2006; Pineda, 2005; Muthukumaraswamy & Johnson, 2004; Pfurtscheller et al., 1997). Importantly, μ desynchronization is also elicited when observers look at pictures of objects, such as tools, that are strongly associated with hand actions (Wamain, Gabrielli, & Coello, 2016; Suzuki, Noguchi, & Kakigi, 2014; Proverbio, 2012; Proverbio, Adorni, & D'Aniello, 2011).
Leveraging the previous EEG results showing μ rhythm desynchronization in response to pictures of tools, Marini et al. (2019) used EEG to measure cortical responses when participants made perceptual judgments about real tools versus matched pictures of the same items. In line with the prediction that real objects should invoke visuomotor brain networks involved in automatic action planning more so than pictures do, the authors found that on randomly interleaved trials in which real objects were shown, μ and β rhythm ERD was stronger and more sustained in comparison to trials in which pictures were displayed. Interestingly, the amplification in μ and β responses to real objects (vs. pictures) was apparent at distinct time points on each trial: One set of clusters emerged rapidly after stimulus onset and during the period of stimulus presentation, whereas subsequent clusters emerged after stimulus offset. This global temporal pattern in EEG signatures of the real object advantage led the authors to speculate as to whether the “early” and “late” effects reflected the contribution of different cortical mechanisms (Marini et al., 2019), but ultimately, the study left open the question of whether the real object advantage was because of actionability or some other difference between the real objects and pictures.
Here, advancing on the work of Marini et al. (2019), we tested whether the EEG signature of the real object advantage is modulated by the presence of a transparent barrier that prevents in-the-moment graspability of the stimuli. We recorded EEG while observers made effort-to-use judgments about everyday tools. The stimuli were presented either as real objects or as high-resolution 2-D pictures of the same items that were matched closely for retinal size, background, color, and illumination. The stimuli were presented on a custom-built rotating drum so that the order of the real objects and pictures was randomly interleaved across trials. Critically, in separate testing sessions, the stimuli were presented either unobstructed and within reach of the observer or behind a transparent barrier that was positioned between the observer and the stimulus. Importantly, whereas the barrier imposes an environmental constraint that disrupts in-the-moment graspability, other attributes such as the egocentric distance and visual appearance of the stimuli remain identical across conditions. Nevertheless, although the barrier prevents in-the-moment grasping of the (real) objects, it does not alter the fact that real objects are inherently graspable (whereas pictures are not).
On the basis of previous studies, we expected that looking at everyday tools would elicit robust event-related μ and low β rhythm desynchronization (Wamain et al., 2016; Suzuki et al., 2014; Proverbio, 2012; Proverbio et al., 2011) and that tools displayed as real objects would trigger stronger and more sustained ERD than do matched pictures of the same objects when there is no barrier in place (Marini et al., 2019). Critically, we predicted that, if the real object advantage arises because solids lend themselves to in-the-moment grasping whereas pictures do not, then imposition of a barrier should reduce or eliminate the stronger event-related μ and β desynchronization signals for real objects versus pictures.
METHODS
Experimental Procedure
Participants
Thirty-two healthy, right-handed University of Nevada, Reno, students volunteered for the experiment. Data from eight participants could not be included in the final analyses either because of technical issues or because the participant did not return for the second session. This left 24 participants (nine men) to be included in the final analyses, which equaled the sample size used by Marini et al. (2019). Mean ± SD age of the participants was 21.4 ± 4.8 years (n = 23); one participant did not provide age demographics. All participants reported normal or corrected-to-normal vision, reported no history of neurological impairments, and provided written and oral informed consent as required by the University of Nevada, Reno, institutional review board.
Setup and Stimuli
The stimuli and display setup used in this study are identical to those used by Marini et al. (2019), with the exception of the introduction of a barrier condition, which is described in detail below.
The experimental stimuli consisted of 96 real-world objects and 96 2-D printed photographs of the same items (Figure 1A). The stimulus set included 16 types of garage tools and 16 types of kitchen tools, with three different exemplars of each. All stimuli were presented with the handle facing rightward. The photographs were printed in high resolution on matte paper and mounted on 10 × 14 in. matte black foam-core boards. Small magnets were fixed to the rear side of each real object and each foam-core board to enable them to be mounted to the presentation apparatus.
The presentation apparatus was a custom-built four-sided rotating drum (22 × 14 in.) positioned on a small table (Figure 1B). The stimuli were affixed to opposite sides of the drum. This allowed experimenters to exchange a previously displayed stimulus (n − 1) on one side of the drum with a new stimulus (n + 1), while the participant viewed the current stimulus (n) on the opposite side. The side of the drum facing the participant was illuminated by striplight LEDs that were mounted around the inside periphery of a rectangular frame positioned in front of the drum.
In half of the sessions, a transparent barrier was interposed between the participant and the stimuli. The acrylic glass barrier material was selected to minimize glare and reflections. The barrier could be easily detached or reattached to the front of the frame housing the LEDs in between testing sessions. The barrier's outer frame measured 24.4 in. in height and 18.8 in. in width. The aperture within this frame, through which stimuli were visible, measured 20.8 in. in height and 13.9 in. in width.
Behavioral Paradigm
Participants completed the experiment across two separate sessions that differed in stimulus accessibility. One session consisted of an exact replication of Marini et al.'s (2019) study, which we refer to as the “No-Barrier” condition. In a separate session, which we refer to as the “Barrier” condition, the transparent acrylic barrier was placed between the participant and the stimuli mounted on the rotating drum. The Barrier and No-Barrier testing sessions took place on different days, with the order of sessions counterbalanced across participants. We did not point out the presence or absence of the barrier to participants. The stimuli, task, and experimental protocol were otherwise identical across sessions.
Participants were seated 16 in. away from the rotating drum with head and eye height aligned to the center of the front face of the drum. Participants wore earphones throughout the experiment to attenuate ambient noise. Stimulus presentation and trial timing were regulated using computer-controlled visual occlusion spectacles (PLATO, Translucent Technologies Inc.), which can be switched between "closed" (opaque) and "open" (transparent) states with millisecond accuracy (Figure 1C). After a high-pitched tone (300 msec, 880 Hz) and a variable delay (800–1600 msec), the spectacles opened, revealing the stimulus for 800 msec. After another variable delay (1200–1800 msec), a low-pitched tone (150 msec, 440 Hz) prompted participants to make a response. Participants were asked to rate verbally "How much physical effort would it take to use this specific object according to its normal function?" on a scale from 1 (not effortful; e.g., a teaspoon) to 10 (very effortful; e.g., a hand drill). There were 192 trials per testing session (96 real-world objects and 96 pictures, each presented once). Each session comprised eight blocks of 24 trials, with a short break between blocks. Stimuli were presented in a randomized sequence within each block. Each session began with verbal and written instructions given to the participant, followed by four practice trials. Two to three experimenters were present during each testing session: One monitored the electrophysiological recordings and entered participants' responses, one triggered the stimulus presentation software and controlled the timing of stimulus presentation by operating the rotating drum, and one selected the stimuli for upcoming trials. Black curtains concealed the experimenters and their workspace from participants' view. Stimulus timing and the real-time encoding of events in the electrophysiological recordings were controlled using MATLAB R2016a (The MathWorks, Inc.) and Psychtoolbox 3.0.13 (Kleiner et al., 2007; Brainard, 1997). Each testing session, including EEG electrode preparation, lasted ∼2.5 hr.
Finally, we collected data from an additional five participants (one participated in the main experiment, and four were new observers who did not participate in the main experiment) to test discrimination performance of the real object and picture stimuli when they were presented behind the transparent barrier. After four practice trials, each participant was presented with 20 stimuli behind the barrier: 10 real objects and 10 matched 2-D pictures. The stimuli were presented in random order using the same apparatus and setup as in the main experiment. The participant's task was to state verbally on each trial whether the stimulus was a real object or a picture. All five participants demonstrated 100% accuracy on this task (100 correct responses out of 100 trials).
Data Analysis
Behavioral Analysis
We conducted two analyses of the behavioral data. The first analysis examined the correlation between behavioral effort ratings for each tool on real versus picture trials. Separately for each accessibility (Barrier vs. No-Barrier) condition, effort ratings for each item in each display format were transformed into z scores, and a Pearson correlation analysis was conducted on the paired z scores. Next, we used a two-way repeated-measures (within-participant) ANOVA to compare effort ratings across display formats and accessibility conditions.
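The item-level correlation step can be sketched as follows. This is an illustrative Python re-implementation on synthetic placeholder ratings (the original analysis was run on the actual effort ratings in MATLAB):

```python
import numpy as np
from scipy import stats

def zscore(x):
    # Standardize ratings to zero mean and unit variance (sample SD)
    return (x - x.mean()) / x.std(ddof=1)

# Hypothetical per-item mean effort ratings (96 tools) in one
# accessibility condition, one array per display format
rng = np.random.default_rng(0)
real_ratings = rng.uniform(1, 10, size=96)
picture_ratings = real_ratings + rng.normal(0, 0.3, size=96)

# Pearson correlation of the paired z scores across display formats
r, p = stats.pearsonr(zscore(real_ratings), zscore(picture_ratings))
```

Because each item contributes one paired observation per display format, a high r indicates that the relative ordering of effort across tools is preserved between real objects and pictures.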
EEG Recording and Preprocessing
EEG was recorded with a 128-channel Biosemi ActiveTwo system using 128 head electrodes plus four EOG electrodes and two (i.e., left and right) mastoid electrodes. All analyses were conducted using custom MATLAB scripts that used the EEGLAB (Version 14.1.2) environment (Delorme & Makeig, 2004) running on MATLAB R2017b. The recorded data were imported into EEGLAB, rereferenced to the average of the two mastoids, resampled at 250 Hz using a polyphase antialiasing filter, and bandpass filtered (1–100 Hz). Noisy channels were identified using the EEGLAB function clean_rawdata (flat line criterion: 5; channel correlation criterion: 0.85; line noise criterion: 4) and subsequently rejected. Epochs were created from −800 to 2000 msec relative to stimulus onset. Decomposition of the cleaned data by independent component analysis was conducted using AMICA (Leutheuser et al., 2013; Palmer, Kreutz-Delgado, & Makeig, 2011), and the resulting independent component (IC) processes were labeled using ICLabel (Pion-Tonachini, Kreutz-Delgado, & Makeig, 2019; see labeling.ucsd.edu). All ICs that were estimated by ICLabel to account for muscle, eye, heart, line artifacts, or unknown activity were rejected. ICs estimated to primarily account for brain activity, but including eye- or muscle-related activity with more than 15% likelihood, were also rejected. The continuous EEG in each electrode channel was then reconstructed by summing the scalp projections of the nonrejected ICs (i.e., those estimated to account for cortical brain activity).
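The core preprocessing steps (re-referencing, resampling, band-pass filtering, and epoching) can be sketched in Python with SciPy as below. This is not the authors' EEGLAB pipeline: the ICA-based artifact rejection is omitted, and the function name, arguments, and filter order are assumptions for illustration only.

```python
import numpy as np
from scipy import signal

def preprocess(eeg, fs, mastoid_idx, onsets_sec, fs_new=250):
    """Minimal sketch of the preprocessing described above.

    eeg         : (n_channels, n_samples) raw recording
    mastoid_idx : indices of the two mastoid channels
    onsets_sec  : stimulus-onset times in seconds
    """
    # Re-reference every channel to the average of the two mastoids
    eeg = eeg - eeg[mastoid_idx].mean(axis=0, keepdims=True)

    # Resample to 250 Hz (resample_poly applies a polyphase
    # anti-aliasing filter)
    eeg = signal.resample_poly(eeg, fs_new, int(fs), axis=1)

    # Zero-phase band-pass filter, 1-100 Hz
    sos = signal.butter(4, [1, 100], btype="bandpass", fs=fs_new,
                        output="sos")
    eeg = signal.sosfiltfilt(sos, eeg, axis=1)

    # Epoch from -800 to +2000 msec around each stimulus onset
    pre, post = int(0.8 * fs_new), int(2.0 * fs_new)
    return np.stack([eeg[:, int(t * fs_new) - pre:int(t * fs_new) + post]
                     for t in onsets_sec])  # (trials, channels, samples)

# Demo on synthetic data: 6 channels, 30 s recorded at 1000 Hz
fs = 1000
raw = np.random.default_rng(1).normal(size=(6, 30 * fs))
epochs = preprocess(raw, fs, mastoid_idx=[4, 5], onsets_sec=[5.0, 15.0])
```

Each epoch spans 2800 msec at 250 Hz (700 samples), matching the −800 to 2000 msec window stated above.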
Event-related Spectral Perturbation Power Analysis
The power of the event-related spectral perturbation (ERSP; Makeig, 1993) was computed using Morlet wavelets as implemented in the EEGLAB function newtimef (window size: 572 msec). The number of cycles in each wavelet was set to increase with frequency beginning with 3 cycles at the lowest frequency (5.9 Hz) up to 12.8 cycles at the highest frequency (50 Hz). To compare the sensorimotor μ rhythm (8–13 Hz) in response to the visual presentation of objects and pictures, we averaged baseline-corrected (time interval from −500 to 0 msec) ERSP power from a cluster of central electrodes (Biosemi Electrodes A1, A2, A3, B1, B2, C15, and C16), in accordance with previous studies measuring the sensorimotor μ rhythm in object perception tasks (Marini et al., 2019; Wamain, Sahaï, Decroix, Coello, & Kalénine, 2018; Wamain et al., 2016; Proverbio, 2012; Perry, Stein, & Bentin, 2011; Perry & Bentin, 2009; Pfurtscheller, Brunner, Schlögl, & Lopes da Silva, 2006). For all ERSP analyses, a divisive baseline was used, in line with a gain model (Grandchamp & Delorme, 2011). Single-trial ERSP power was averaged across electrodes first and then across objects. For illustration purposes, the 10 × log10 transformation was applied to the average of ERSP power across trials before the subtraction between real and picture conditions (the subtraction in log-space corresponds to the division between nonlog power values, and this method is therefore in keeping with our gain model approach). Statistical comparisons across display formats in the central electrode cluster were conducted using cluster-based permutation tests, separately for the Barrier and No-Barrier conditions (n = 10,000; intensity threshold for cluster formation α = .001, cluster size threshold α = .05).
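The ERSP computation can be sketched as follows: complex Morlet wavelet convolution with cycle counts that increase with frequency, a divisive baseline, and a 10 × log10 transform. This is an illustrative re-implementation, not the EEGLAB newtimef code; in particular, the linear interpolation of cycle counts across frequencies and the wavelet normalization are assumptions.

```python
import numpy as np

def morlet_power(x, fs, freqs, cycles):
    """Single-trial time-frequency power via complex Morlet wavelets
    (illustrative sketch, not the newtimef implementation)."""
    power = np.empty((freqs.size, x.size))
    for i, (f, c) in enumerate(zip(freqs, cycles)):
        sigma = c / (2 * np.pi * f)                    # wavelet width (s)
        tw = np.arange(-4 * sigma, 4 * sigma, 1 / fs)  # wavelet support
        w = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
        w /= np.abs(w).sum()                           # amplitude norm
        power[i] = np.abs(np.convolve(x, w, mode="same")) ** 2
    return power

def ersp_db(power, baseline_mask):
    """Divisive baseline (gain model), then 10*log10, as described above."""
    base = power[:, baseline_mask].mean(axis=1, keepdims=True)
    return 10 * np.log10(power / base)

# Demo: a 10 Hz oscillation should yield peak power near 10 Hz
fs = 250
t = np.arange(-0.8, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
freqs = np.linspace(6, 50, 23)
cycles = np.linspace(3, 12.8, 23)  # assumed linear increase with frequency
power = morlet_power(x, fs, freqs, cycles)
ersp = ersp_db(power, t < 0)       # baseline: the -800 to 0 msec interval
```

Note that the division by baseline power before the log transform is what makes the later log-space subtraction between real and picture conditions equivalent to a ratio of raw power, in keeping with the gain model.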
RESULTS
Behavioral Results: Real Objects Are Rated as More Effortful to Use than Pictures Irrespective of Reachability
Average effort-to-use ratings for each tool are displayed in Figure 2A, separately for the No-Barrier and Barrier conditions. As is evident from the figure, individual item ratings were evenly distributed from low-effort (i.e., fork, spoon) to high-effort (i.e., hammer, clamp) tools, and the order of the tools was similar in both accessibility conditions. Effort ratings for the items were strongly correlated for real objects and pictures, in both the Barrier (r = .99, p = .003) and No-Barrier (r = .99, p < .001) conditions. The slope of the least-squares best fit (Figure 2A, red line) was significantly lower than unity (Figure 2A, black line) in the Barrier condition (slope = .965), t(94) = 2.46, p = .016, indicating that higher-effort real tools were rated as being more effortful to use than their corresponding pictures; the slope was likewise lower than unity but not significantly so in the No-Barrier condition (slope = .990), t(94) = 0.69, p = .491.
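The slope-versus-unity test used here differs from the default regression test (which compares the slope against 0). A sketch on hypothetical paired item ratings, with df = n − 2 as in the reported t(94) for 96 items:

```python
import numpy as np
from scipy import stats

def slope_vs_unity(x, y):
    """t test of the least-squares slope against 1 (rather than 0)."""
    res = stats.linregress(x, y)
    t = (res.slope - 1.0) / res.stderr
    p = 2 * stats.t.sf(abs(t), df=len(x) - 2)
    return res.slope, t, p

# Hypothetical paired item ratings whose best-fit slope is below 1;
# which format goes on which axis is an assumption of this demo
rng = np.random.default_rng(2)
x = rng.uniform(1, 10, size=96)
y = 0.9 * x + rng.normal(0, 0.2, size=96)
slope, t, p = slope_vs_unity(x, y)
```

The standard error of the slope is the same whether the null is 0 or 1; only the numerator of the t statistic changes.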
Next, we analyzed the behavioral effort ratings at the level of participants. Average effort-to-use ratings for the real objects and pictures are displayed in Figure 2B, separately for each accessibility condition. Repeated-measures ANOVA with the factors of Display format (Real vs. Picture) and Accessibility (No-Barrier vs. Barrier) revealed a significant main effect of Display format, F(1, 23) = 6.00, p = .02, indicating that real objects were perceived overall as being more effortful to use than their pictures. There were no other significant main effects or interactions (all ps > .05).
Taken together, these behavioral results indicate that real objects were perceived to be more effortful to use than their corresponding pictures whether or not they were presented behind a barrier.
EEG Results: Stronger μ and β Desynchronization for Real Objects versus Pictures Is Attenuated When a Barrier Is Positioned between Observer and Stimulus
Our analysis of the EEG data focused on contrasting display-format-dependent changes in ERSP (Makeig, 1993) power within central electrodes over the parietal cortex. First, we compared ERSP power for real objects versus pictures during the No-Barrier session where the stimuli were accessible for grasping (Marini et al., 2019). Figure 3A (left) shows time–frequency log power of ERSP for real objects and pictures in the No-Barrier condition. Importantly, stimuli in both display formats elicited robust desynchronization in the μ, lower β, and upper β frequency bands, starting at ∼200 msec post-onset. Figure 3A (right) shows the difference in the average log power between real objects and pictures using the contrast [RealNB − PictureNB]; Table 1 provides a list of all significant clusters. For visualization purposes, the dashed line overlaid on the contrast shown in Figure 3A (right; also Figure 3B and C, right) demarcates the approximate boundary within which μ and β ERD was observed in the Real and Picture conditions, as shown in Figure 3A and B (left, dark blue region). Importantly, we found stronger desynchronization for real objects versus pictures in several time–frequency clusters in the μ frequency band; there were two distinct clusters during the stimulus presentation window (∼250–350 and ∼450–750 msec) and two clusters after stimulus offset (∼850–1000 and ∼1200–1350 msec), one of which (∼850–1000 msec) extended into the lower β (14–18 Hz) frequency band. The time course of stronger μ and lower β desynchronization for real objects versus pictures in our No-Barrier condition matches closely the findings of Marini et al. (2019), confirming that real objects trigger stronger and more prolonged motor preparation signals than do matched pictures of the same items when the stimuli are physically accessible—a real object advantage.
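The logic of a cluster-based permutation test of this kind can be sketched as below: per-pixel t statistics on participant-level difference maps, thresholding, connected-cluster extraction, and a null distribution of the maximum cluster size under random sign flips. This is a simplified illustration with cluster size as the cluster statistic, not a reproduction of the analysis parameters given in the Methods.

```python
import numpy as np
from scipy import stats, ndimage

def cluster_perm_test(diff, n_perm=1000, p_thresh=0.001, seed=0):
    """Sign-flip cluster permutation test on per-participant
    time-frequency difference maps, shaped (n_subjects, n_freqs, n_times)."""
    rng = np.random.default_rng(seed)
    n = diff.shape[0]
    t_crit = stats.t.ppf(1 - p_thresh / 2, df=n - 1)  # two-sided threshold

    def cluster_sizes(data):
        # One-sample t statistic at every time-frequency pixel
        tmap = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n))
        labels, k = ndimage.label(np.abs(tmap) > t_crit)
        sizes = ndimage.sum(np.ones(tmap.shape), labels,
                            index=range(1, k + 1))
        return np.atleast_1d(sizes), labels

    obs_sizes, labels = cluster_sizes(diff)
    # Null distribution: maximum cluster size under random sign flips
    null = np.array([np.max(cluster_sizes(
        diff * rng.choice([-1.0, 1.0], size=(n, 1, 1)))[0], initial=0)
        for _ in range(n_perm)])
    p_vals = [(null >= s).mean() for s in obs_sizes]
    return obs_sizes, p_vals, labels

# Demo: 20 participants, a reliable effect confined to one block
rng = np.random.default_rng(3)
diff = rng.normal(size=(20, 10, 30))
diff[:, 2:5, 5:15] += 1.5
sizes, p_vals, labels = cluster_perm_test(diff, n_perm=200)
```

Because only the maximum cluster size per permutation enters the null distribution, the test controls the family-wise error rate across all time–frequency pixels.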
Table 1. Significant Time–Frequency Clusters for the Contrast [RealNB − PictureNB] (No-Barrier Session)

| Cluster | Frequency Band | Start (msec) | End (msec) | Lower Frequency Boundary (Hz) | Upper Frequency Boundary (Hz) |
| --- | --- | --- | --- | --- | --- |
| 1 | μ (D) | 245 | 342 | 10.8 | 13.8 |
| 2 | μ (D) | 450 | 742 | 10.3 | 13.3 |
| 3 | γ (S) | 678 | 790 | 40.5 | 50 |
| 4 | μ/β (D) | 870 | 1003 | 6.9 | 20.2 |
| 5 | β/γ (S) | 1147 | 1183 | 28.6 | 31.1 |
| 6 | μ (D) | 1191 | 1327 | 6.9 | 10.3 |
| 7 | β (S) | 1411 | 1499 | 18.7 | 23.7 |
Letters next to frequency bands indicate the direction of the effect (D = desynchronization; S = synchronization).
Next, we compared ERSP power for the same stimuli during the Barrier session. Figure 3B (left) shows the time–frequency log power of ERSP for real objects and pictures in the Barrier session; Figure 3B (right) shows the difference between real object and picture trials in the average log power using the contrast [RealB − PictureB]; Table 2 provides a list of all significant clusters. With the barrier in place, we observed significantly stronger μ rhythm ERD for real objects versus pictures late in the stimulus presentation window (∼650–800 msec) and in three clusters after trial offset (∼900–1300 msec), two of which extended slightly into the lower β band. There was no significant ERD or event-related synchronization in the other frequency bands.
Table 2. Significant Time–Frequency Clusters for the Contrast [RealB − PictureB] (Barrier Session)

| Cluster | Frequency Band | Start (msec) | End (msec) | Lower Frequency Boundary (Hz) | Upper Frequency Boundary (Hz) |
| --- | --- | --- | --- | --- | --- |
| 1 | μ (D) | 666 | 794 | 7.8 | 12.3 |
| 2 | μ (D) | 898 | 1047 | 10.8 | 14.8 |
| 3 | μ (D) | 1036 | 1115 | 7.8 | 10.3 |
| 4 | μ (D) | 1115 | 1311 | 7.8 | 15.2 |
Letters next to frequency bands indicate the direction of the effect (D = desynchronization; S = synchronization).
Finally, to compare directly the temporal dynamics between the two accessibility conditions, we computed the difference in ERSP for real objects versus pictures across the No-Barrier versus Barrier sessions, using the contrast [(RealNB − PictureNB) − (RealB − PictureB)]. The resulting time–frequency ERSP interaction map is shown in Figure 3C; a list of significant clusters is provided in Table 3. The color scale in Figure 3C reflects the direction of the difference in ERSP between RealNB − PictureNB (Figure 3A, right) and RealB − PictureB (Figure 3B, right), where blue regions reflect greater desynchronization in the No-Barrier (vs. Barrier) session and red areas reflect greater desynchronization in the Barrier (vs. No-Barrier) session. Importantly, for processes associated with μ and low β desynchronization, negative ERSP power change in Figure 3C reflects a stronger real object advantage without a barrier in place, whereas positive ERSP power indicates a stronger real object advantage with the barrier in place. It is evident from Figure 3C that the real object advantage was significantly greater in the No-Barrier versus Barrier session during the period of stimulus presentation as well as early after stimulus offset, as reflected by significant μ (∼450–550 msec) and low β (∼300–350, ∼550–650, and ∼1000–1050 msec) clusters. Later after stimulus offset, however, the real object advantage was comparatively stronger in the Barrier (vs. No-Barrier) session, as evinced by two significant clusters in the μ range (∼1050–1100 and ∼1150–1250 msec). The interaction contrast also revealed several positive clusters that were outside the global region of μ and low β desynchronization (Figure 3A and B, left; Figure 3C, dotted line), reflecting stronger synchronization for real objects versus pictures in the No-Barrier (vs. Barrier) condition (μ: ∼1550–1600 msec; lower β: ∼1425–1475 msec; γ: ∼700–800 msec).
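The sign convention of the interaction contrast can be made concrete with hypothetical log-power values:

```python
# Hypothetical mean ERSP log-power values (dB) for one time-frequency
# pixel; all numbers are made up for illustration
real_nb, pic_nb = -3.0, -1.0   # No-Barrier session
real_b, pic_b = -1.5, -1.0     # Barrier session

# Interaction contrast: (RealNB - PictureNB) - (RealB - PictureB)
interaction = (real_nb - pic_nb) - (real_b - pic_b)

# Negative values = the real object advantage (stronger desynchronization
# for real objects) is larger without the barrier; positive = larger
# with the barrier in place
print(interaction)  # -1.5
```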
| Cluster | Frequency Band | Starts at (msec) | Ends at (msec) | Frequency Boundary (Lower, in Hz) | Frequency Boundary (Upper, in Hz) |
| --- | --- | --- | --- | --- | --- |
| 1 | β (D) | 281 | 334 | 16.8 | 21.7 |
| 2 | μ (D) | 466 | 534 | 9.3 | 13.3 |
| 3 | β (D) | 554 | 626 | 18.2 | 22.2 |
| 4 | γ (S) | 694 | 754 | 41.0 | 47.5 |
| 5 | γ (S) | 694 | 800 | 32.1 | 40.0 |
| 6 | β (D) | 986 | 1055 | 19.7 | 22.2 |
| 7 | μ (S) | 1055 | 1107 | 8.3 | 9.8 |
| 8 | μ (S) | 1163 | 1235 | 11.3 | 13.8 |
| 9 | β (S) | 1431 | 1471 | 19.7 | 22.7 |
| 10 | μ (S) | 1559 | 1600 | 10.3 | 12.3 |
Letters next to frequency bands indicate the direction of the effect (D = desynchronization; S = synchronization).
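The double-difference contrast used to generate the interaction map in Figure 3C can be sketched in a few lines. The following is an illustrative reconstruction using synthetic arrays, not the authors' analysis code; the array shapes and condition names are assumptions for demonstration.

```python
import numpy as np

# Hypothetical ERSP maps (dB power change vs. baseline), shape: (n_freqs, n_times).
# Synthetic data stand in for per-condition, trial-averaged time-frequency maps.
rng = np.random.default_rng(0)
n_freqs, n_times = 40, 200
ersp = {cond: rng.normal(size=(n_freqs, n_times))
        for cond in ("real_nb", "pict_nb", "real_b", "pict_b")}

# Real object advantage within each accessibility condition
adv_no_barrier = ersp["real_nb"] - ersp["pict_nb"]  # (RealNB - PictureNB)
adv_barrier = ersp["real_b"] - ersp["pict_b"]        # (RealB - PictureB)

# Interaction map: for desynchronization (negative ERSP), negative values mark
# a stronger real object advantage without the barrier; positive values mark a
# stronger advantage with the barrier in place.
interaction = adv_no_barrier - adv_barrier
```

In the actual analysis, each cell of such a map would then be submitted to cluster-based significance testing to yield the clusters listed in Table 3.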
Taken together, these EEG results can be summarized as follows. Looking at everyday tools that are strongly associated with action automatically elicits robust μ and lower β ERD, whether the stimuli are displayed as real objects or pictures. However, there are differences in the strength of μ and lower β desynchronization across display formats over time, and these differences are modulated by whether or not the stimulus is physically accessible. In the No-Barrier condition, real objects elicited stronger and more prolonged μ desynchronization than did pictures throughout the period of stimulus presentation as well as early after stimulus offset. With a barrier in place, real objects only elicited stronger μ desynchronization than did pictures late in the period of stimulus presentation and after stimulus offset. Direct contrasts confirmed that the real object advantage was significantly stronger in the No-Barrier (vs. Barrier) condition during the window of stimulus presentation and after trial offset, although there was a late amplification in the Barrier condition (>1000 msec after stimulus onset).
DISCUSSION
Previous studies have begun to reveal that real objects elicit unique effects on cognition and behavior compared to pictures of objects, yet the underlying neural mechanisms for these differences are poorly understood. We measured desynchronization of μ and β rhythms over the human dorsal cortex to test whether neural signatures of automatic motor preparation for action are stronger and more sustained for real objects compared to pictures of objects and, critically, to examine whether this pattern is modulated by a barrier that disrupts the immediate graspability of the stimuli. Observers made verbal effort-to-use judgments about everyday tools that were presented within reach, either as real solid objects or as matched high-resolution 2-D pictures of the same items. In separate testing sessions, participants viewed the stimuli unobstructed or behind a large transparent barrier. Analysis of the behavioral rating data revealed that real objects were perceived overall to be more effortful to use than their corresponding pictures, similar to Marini et al. (2019); interestingly, however, this was the case whether or not the stimuli were presented behind a barrier. In the EEG data, first, viewing both real tools and pictures of tools elicited robust ERD in the μ and low β frequency range, consistent with previous EEG studies (Marini et al., 2019; Wamain et al., 2016; Suzuki et al., 2014; Proverbio, 2012; Proverbio et al., 2011) and with other neuroimaging studies showing selective responses to actionable objects such as tools in the dorsal cortex, without the requirement to perform a manual grasping action (Matić et al., 2020; Proverbio, Azzari, & Adorni, 2013; Konen & Kastner, 2008; Valyear et al., 2007; Creem-Regehr & Lee, 2005; Chao & Martin, 2000).
Second, when the stimuli were presented unobstructed and within reach of the observer, real objects triggered stronger and more sustained event-related μ and low β desynchronization than when the stimuli were presented as pictures. This real object advantage in EEG signatures of automatic action preparation was apparent both during the period of stimulus presentation and after stimulus offset, replicating the temporal patterns reported by Marini et al. (2019). Critically, we predicted that if the real object advantage arises because solid objects lend themselves to in-the-moment grasping (whereas pictures do not), then this effect should be reduced or eliminated by the imposition of the barrier. In line with this prediction, we found that when the stimuli appeared behind a barrier, the real object advantage was no longer apparent early during the window of stimulus presentation but began to emerge later in the trial near the time of stimulus offset. A direct contrast of desynchronization differences between real objects and pictures across the two accessibility conditions confirmed that, when the stimuli were presented unobstructed, the real object advantage was significantly stronger during the window of stimulus presentation and early after trial offset, compared to when the stimuli appeared behind a barrier. Taken together, these EEG findings demonstrate that a barrier manipulation has temporally distinct effects on cortical signatures of the real object advantage, whereby early differences are attenuated but more slowly evolving differences remain unaffected. The results are intriguing because they suggest that differences in brain responses to real objects and pictures may arise from distinct causal mechanisms that operate on different time scales.
Our EEG results support behavioral findings in human observers showing that the imposition of a barrier attenuates real object effects on selective attention (Gomez et al., 2018) and on valuation (Bushong et al., 2010). Yet, the present experiment reveals nuances of the real object advantage that these studies did not, by showing that the barrier predominantly attenuates early (more so than later-evolving) temporal signatures of the effect. Although a barrier has been shown to modulate responses to images of graspable objects (Cardellicchio, Sinigaglia, & Costantini, 2013), the important finding here is that the barrier eliminates early differences in action-related cortical responses to real objects (vs. pictures) that are otherwise apparent when there is no barrier in place. Yet, we did not observe an effect of the barrier in the behavioral measure of effort-to-use perceptual ratings. Other studies have found that a barrier did not influence behavior, as evidenced by a failure to disrupt cross-modal integration in visuotactile extinction patients (Farnè, Demattè, & Làdavas, 2003) and in neurologically healthy observers (Kitagawa & Spence, 2005). However, in these cross-modal attention studies, the visual stimuli were finger movements made by the experimenter, or flashing LEDs (respectively), rather than graspable objects. It is possible that a barrier predominantly affects processing of action-related stimuli, such as tools. Consistent with this idea, Garrido-Vásquez and Schubö (2014) used a probe detection task to show that visual attention to 3-D objects in a virtual reality (VR) environment was allocated preferentially to a graspable object (a mug) versus a visually similar nongraspable object (a spiky cactus), but only when the graspable object was perceived to be within reachable space. However, this does not explain why we observed an effect of the barrier in early cortical EEG responses but not in measures of behavior.
Given the temporally specific effect of the barrier in our EEG data, the nature of the task and the type of response may have a critical influence on display format and accessibility effects. In our task, participants were required to wait until well after stimulus offset to initiate their behavioral response (to avoid disrupting the EEG recordings), by which time the effect of the barrier on EEG responses had elapsed. In this respect, our EEG results highlight avenues for future research to evaluate whether a barrier has a more powerful effect on tasks that require immediate responses to objects (Gomez et al., 2018) than on tasks that involve a delay between stimulus presentation and response initiation.
Why Does a Transparent Barrier Modulate Early Neural Signatures of the “Real Object Advantage”?
The transparent barrier modulated the accessibility of the stimuli while holding constant their visual characteristics, including their egocentric distance and stereoscopic depth cues. Importantly, the barrier did not interfere with participants' ability to distinguish between the two display formats, as illustrated in Figure 1 and as we confirmed in a separate sample of observers (see Methods). Nor did the barrier influence observers' ability to recognize the tools, as evidenced by the fact that the onset of μ and low β desynchronization for real objects and pictures was comparable in the No-Barrier (Figure 3A, left) and Barrier (Figure 3B, left) conditions.
Combined with the findings from previous behavioral studies involving barriers, described above (Gomez et al., 2018; Bushong et al., 2010), the results of our EEG experiment are consistent with the notion that a barrier disrupts automatic planning of potential in-the-moment motor interactions with real objects. What are the underlying neural mechanisms that could give rise to this effect? In nonhuman primates, visually responsive "object-type" neurons in the anterior intraparietal (AIP) area of dorsal cortex code the shape of geometric solids that are fixated by the animal, within several hundred milliseconds after stimulus onset (Murata, Gallese, Luppino, Kaseda, & Sakata, 2000; Sakata et al., 1998; Sakata, Taira, Murata, & Mine, 1995). AIP in the macaque has strong connections with ventral premotor cortex (area F5; Borra et al., 2008), whose response to geometric solids is attenuated when a transparent barrier is positioned between the animal and the stimulus (Bonini, Maranesi, Livi, Fogassi, & Rizzolatti, 2014; Caggiano, Fogassi, Rizzolatti, Thier, & Casile, 2009). In humans, anatomo-functional connections have also been identified between the anterior intraparietal sulcus (the putative human homologue of monkey AIP) and the ventral premotor cortex (the human homologue of area F5 in the monkey; Davare, Rothwell, & Lemon, 2010). Another region of human dorsal cortex, superior parieto-occipital cortex, automatically responds to reachable solid objects when no reach is performed (Cavina-Pratesi et al., 2006) and is more strongly activated when the solids lie within versus outside the observer's reach (Gallivan, McLean, & Culham, 2011; Cavina-Pratesi et al., 2010; Gallivan et al., 2009). In this context, our EEG findings lend support to an emerging neural framework wherein dorsal regions, possibly in conjunction with ventral premotor areas, represent objects in the current egocentric environment that are viable targets for immediate manual interaction.
An alternative interpretation of the effect of the barrier on EEG responses is that it disrupts processes associated with anticipating the somatomotor consequences of interacting with objects. Real objects provide mechanoreceptive and proprioceptive feedback (together known as haptic feedback) when the fingers come into contact with the surface of the object, whereas images of objects typically do not. Converging evidence from kinematic studies of object grasping suggests that real objects and pictures are processed differently primarily because of the availability of haptic feedback information at the time of the grasp. Specifically, when observers grasp line drawings, or even realistic 2-D photographs of objects, just-noticeable-difference (JND) scores at the time of peak grip aperture vary according to the size of the object, indicating that sensitivity to the change in size is not absolute but is relative to the size of the stimulus (consistent with Weber's law; Ozana et al., 2020; Ozana & Ganel, 2019; Holmes & Heath, 2013). However, when haptic feedback is provided at the end of a grasp to a 2-D stimulus, JNDs conform to the absolute size of the stimulus (violating Weber's law; Hosang et al., 2016), similar to the analytic JNDs observed for solid objects (Heath et al., 2011; Ganel et al., 2008). Similarly, JNDs for VR objects adhere to Weber's law, unless haptic feedback is provided at the time of the grasp, in which case JNDs violate Weber's law (Ozana et al., 2020; Ozana, Berman, & Ganel, 2018). Critically, haptic feedback has similar effects on the kinematics of real object grasping: When haptic feedback is denied by instructing participants to end their grasp above the object without actually touching it, JNDs adhere to Weber's law (Ozana & Ganel, 2019).
However, if only partial tactile feedback is provided (by overlaying a transparent glass barrier over the top of the to-be-grasped real object so that the feedback is indirect and imprecise), analytic grasping patterns are restored (Ozana & Ganel, 2019). Taken together, these behavioral studies suggest that the cognitive processes that support grasping are governed by whether or not the action terminates with haptic feedback. Evidence from human fMRI suggests that such anticipatory computations may arise in the dorsal cortex. Freud et al. (2018) found that the left AIP sulcus, which is involved in visually guided grasping, differentiates both the format in which an object is presented (real vs. 2-D image) and the type of action performed on the object (i.e., reach vs. grasp). Importantly, these format- and task-selective responses in the dorsal cortex in Freud et al.'s (2018) study were not observed in other motor or somatosensory cortices, and they emerged during the “planning phase” on each trial before the grasping action was initiated, indicating that the representations did not arise from actual motor, proprioceptive, or somatosensory feedback. Rather, it seems that the representations reflected planning or anticipation of the impending grasping action. The authors speculated as to whether the visuomotor system generates a feedforward predictive model of planned actions that incorporates the constraints and tactile consequences of real object interaction (Freud et al., 2018; see also Säfström & Edin, 2008).
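The Weber's-law distinction running through these grasping studies can be made concrete with a toy calculation. The numbers below are purely illustrative and are not data from the cited work: under Weber's law, the JND grows in proportion to object size, whereas "analytic" processing yields a JND that is constant across sizes.

```python
# Illustrative contrast between relative (Weber's-law) and absolute JNDs.
# weber_fraction and absolute_jnd_mm are made-up values for demonstration.
weber_fraction = 0.05   # JND = 5% of object size (typical of 2-D image grasping)
absolute_jnd_mm = 2.0   # fixed JND, independent of size (real objects / haptic feedback)

object_sizes_mm = [20, 40, 80]

# Relative JNDs scale with stimulus size; absolute JNDs stay flat.
relative_jnds = [weber_fraction * s for s in object_sizes_mm]
absolute_jnds = [absolute_jnd_mm for _ in object_sizes_mm]
```

Plotting JND against object size thus distinguishes the two regimes: a positive slope indicates Weber's-law (relative) coding, whereas a flat line indicates absolute, size-invariant coding.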
Taken together, therefore, the results from these behavioral and fMRI studies of object grasping raise the question of whether the barrier in our EEG study modulated cortical processes associated with anticipating the somatosensory consequences of interacting with the (real) objects. In our study, participants were never required to explicitly perform a grasping action toward, or to come into contact with, the stimuli, so they never explicitly anticipated somatosensory feedback. On the basis of the available evidence from behavioral kinematics reviewed above, the prediction would be that if there is no haptic feedback provided during the task, then the real objects and images should be processed similarly, and there should be no effect of the barrier manipulation—but this is not what we have observed in our EEG (or behavioral) results. The barrier in our study could have modulated anticipation of the potential for somatosensory feedback. After all, the mere expectation of haptic feedback is sufficient to modulate grasping kinematics toward real objects (Whitwell, Katz, Goodale, & Enns, 2020). Nevertheless, it is unclear whether expectations of haptic feedback are sufficient to modulate perception of real objects and pictures when the task does not explicitly involve grasping. For example, similar patterns are evident in studies that have used other experimental approaches, such as those that have examined breakthrough to awareness for objects that have been rendered unconscious using continuous flash suppression. Using this approach, real objects break through into awareness sooner than do colored 2-D photographs or computer images of objects, although there is no requirement for the observer to act upon or contact the stimuli (Mudrik & Korisky, 2020; Korisky, Hirschhorn, & Mudrik, 2019). 
Yet, somatosensory feedback can influence breakthrough: When an observer is asked to manually rotate a platform upon which is presented a highly realistic VR or augmented reality (AR) object, breakthrough to awareness of the virtual stimulus is faster when the visual rotation of the virtual object is coupled with the live sensorimotor act of rotating the platform, compared to when the motor action is paired with prerecorded visual information that is mismatched in movement timing (Suzuki, Schwartzman, Augusto, & Seth, 2019). A fruitful direction for future research will be to determine whether haptic feedback governs the processes that are brought to bear on real objects and images more so during action- versus perception-related tasks.
Nevertheless, our findings align with recent arguments that the dorsal cortex processes visual object attributes with respect to the constraints they impose for perception in the service of action—a view that contrasts with traditional frameworks that ascribe a more monolithic role for the dorsal cortex that focuses solely on action (Freud et al., 2020).
What Do More Slowly Evolving Neural, and Behavioral, Signatures of the “Real Object Advantage” Represent?
The barrier manipulation in our study attenuated early, but not later-evolving, desynchronization differences across display formats. This late EEG signature of the real object advantage mirrors the pattern we observed in the behavioral data, in which effort-to-use ratings were higher overall for real objects than for 2-D images but were also unaffected by the barrier. As we highlighted earlier, participants in our task were required to wait until after stimulus offset to initiate their behavioral responses—well after the effect of the barrier on EEG responses had expired. However, this raises the question of what the more slowly evolving neural and behavioral signatures of the real object advantage represent. Marini et al. (2019) provided persuasive evidence that the late EEG signature of the real object advantage did not reflect differences in stereoscopic cues conveyed by real objects versus 2-D pictures, by showing that early visual ERP amplitude differences, which are typically associated with stereopsis, did not modulate display-format-related cortical brain dynamics early or late in the event-related time courses. In addition, as we discussed earlier, it is unclear whether predictions about the somatosensory consequences of interacting with real objects can influence perception when the observer is not required to perform a manual grasp toward the stimulus.
The more slowly evolving cortical amplification in μ and β rhythm desynchronization for real objects versus pictures, and the overall behavioral differences across display formats in effort ratings, could reflect processing of inherent physical characteristics of real objects (whether or not they are grasped). After all, even when environmental constraints limit the accessibility of real objects, the objects themselves are fundamentally tangible solids that have a definite egocentric distance, real-world size, surface texture, weight, and compliance (unlike pictures that do not have these qualities). Real objects, when viewed with two eyes, convey unambiguous information about their egocentric distance, and therefore, their physical size is known; conversely, for pictures, whereas the distance of the display surface is known, the distance to the object depicted within the picture is not, and thus, the size of the object is ambiguous. Distance and size are important cues that guide actions toward objects as well as object perception. In neuropsychological patients with visual agnosia who may rely predominantly on object-related processes in the dorsal cortex (because of lesions of object areas in the ventral cortex), the display format and size of the stimulus can have profound effects on recognition. Although patients with agnosia have difficulty recognizing pictures of objects, they can show a surprising preservation in their ability to recognize real objects (Farah, 2004; Turnbull et al., 2004; Chainay & Humphreys, 2001; Riddoch & Humphreys, 1987; Ratcliff & Newcombe, 1982). Importantly, whether patients with agnosia show a real object advantage in recognition depends on the physical size of the stimulus (Holler et al., 2019); recognition is best when the object's physical size matches the typical real-world size, whereas performance is impaired (and similar to pictures) when physical size is larger or smaller than the real-world size. 
Analogous effects of familiar size and display format are evident in looking behavior in infants (Sensoy et al., 2021). In nonhuman primates, size coding of solid objects has been observed in neural populations in the dorsal cortex (Murata et al., 2000; Sakata et al., 1995; Taira, Mine, Georgopoulos, Murata, & Sakata, 1990). These findings, together with our EEG results, support recent arguments that the dorsal cortex may be specialized for processing the shape and size of solid objects (Freud et al., 2020; Fabbri, Stubbs, Cusack, & Culham, 2016).
The notion that real objects are more likely than 2-D pictures to be processed with respect to enduring physical characteristics, such as size and weight, is further supported by behavioral evidence in adult observers. Holler et al. (2020) examined whether the spatial arrangements of objects in a sorting task were similar or different for stimuli that were presented in different display formats. Two-dimensional pictures were predominantly arranged with respect to their typical location (a semantic characteristic), whereas the real objects were processed using a richer framework that incorporated both semantic and physical characteristics, such as their real-world size and weight (Holler et al., 2020). In the study by Holler et al. (2020), however, the 2-D pictures were scaled to fit a 27-in. computer monitor, whereas in the current study, the 2-D pictures were matched in retinal size to their real-object counterparts. If the “late” EEG signature of the real object advantage observed here indeed reflects amplified processing of real objects (vs. pictures) because of the presence of physical size cues, it is noteworthy that the effects on cortical responses are apparent without scaling of the pictures. VR and AR stimuli, which can be arbitrarily scaled in size and are more similar to real objects than are 2-D pictures with respect to their inherent actionability, may provide a promising avenue for disentangling which object properties contribute to the late-evolving real object advantage. Oddly, however, AR stimuli are processed according to their anticipated weight, despite the fact that they have no mass (Holler et al., 2020). Object mass seems to be represented alongside other physical variables in the frontoparietal cortex, which anticipates the dynamics of objects, automatically and independently from the ventral cortex (Schwettmann, Tenenbaum, & Kanwisher, 2019). 
The extent to which such representations remain invariant or are malleable across changes in stimulus format remains an important question for future research (Fairchild & Snow, 2020).
Previous studies of the anatomical substrates of the μ rhythm suggest that μ rhythm signals in the range of ∼8–13 Hz originate from the primary somatosensory cortex for the hand, whereas the concomitant signals in the β range originate from the primary motor cortex (Hari & Salmelin, 1997; Salmelin, Hämäläinen, Kajola, & Hari, 1995). Both μ and β ERD are associated with the observation and planning of manual action, but β ERD is especially modulated by the egocentric position of the stimulus (Angelini et al., 2018). Indeed, we found that introduction of the barrier (which precludes manual interaction but does not change objects' tactile properties) seemed to reduce the observed β desynchronization more so than the μ desynchronization. Therefore, it may be that the higher-frequency ERD differences we observed for real objects (vs. pictures) originate in the primary motor cortex and reflect real objects' potential for manual interaction, whereas the lower-frequency ERD differences originate in the somatosensory cortex and reflect real objects' anticipated haptic properties. Future studies could use source localization to test whether different cortical regions are responsible for different frequency components of the observed ERD. If the observed μ and β ERD do in fact originate from the primary somatosensory and primary motor cortices, respectively, this could help to elucidate the contributions toward the real object advantage made by real objects' potential for manual interaction versus their potential for haptic feedback. Source localization could also reveal possible lateralization of the effects reported here. 
After all, left-lateralized biases have been demonstrated not only in the context of μ desynchronization with action-related stimuli such as tools (Proverbio, 2012), and actionable real objects (Marini et al., 2019) but also in ERP, fMRI, and transcranial magnetic stimulation studies (Cardellicchio et al., 2011, 2013; Proverbio et al., 2011; Creem-Regehr & Lee, 2005). Comparing lateralization of the different frequency components (and whether any such lateralization interacts with a barrier manipulation) could help to clarify the ways in which actionability and haptic features may both contribute to the real object advantage in different conditions of accessibility, at different temporal intervals, and in different frequency bands.
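The ERD measure discussed throughout this article quantifies band-limited power suppression relative to a pre-stimulus baseline. A minimal sketch of this computation, using a toy 10 Hz (μ-band) signal whose amplitude halves at a nominal stimulus onset (all parameters are illustrative, not those of the present study):

```python
import numpy as np

fs = 500  # Hz, hypothetical sampling rate
t = np.arange(0, 2.0, 1 / fs)
# Toy signal: a 10 Hz oscillation whose amplitude halves at "stimulus onset" (1 s)
amplitude = np.where(t < 1.0, 1.0, 0.5)
signal = amplitude * np.sin(2 * np.pi * 10 * t)

def band_power(x, fs, lo, hi):
    """Mean spectral power within [lo, hi] Hz, from the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

baseline = band_power(signal[t < 1.0], fs, 8, 13)  # mu band, pre-onset segment
post = band_power(signal[t >= 1.0], fs, 8, 13)     # mu band, post-onset segment

# ERD as percentage power change relative to baseline (negative values = ERD);
# halving the amplitude quarters the power, so this yields roughly -75%.
erd_percent = 100 * (post - baseline) / baseline
```

Event-related analyses like those reported here compute this quantity in a sliding time-frequency window rather than over two fixed segments, but the normalization against a pre-stimulus baseline is the same.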
Conclusion
Taken together, our findings suggest that real objects drive dorsal visuomotor brain networks involved in automatic action planning more strongly than pictures of objects do because of the potential they offer for immediate interaction. Nevertheless, whereas in-the-moment graspability accounts for early EEG signatures of the real object advantage, analogous signatures at later time points may reflect processing of inherent characteristics of real objects that distinguish them from pictures, such as their real-world size, weight, or tactile features. Identifying the characteristics of real objects that perpetuate behavioral and neural signatures of the real object advantage represents an important avenue for future research. These findings underscore how an “immersive neuroscience” approach to cognitive neuroscience (Snow & Culham, 2021) can reveal surprising nuances in perception and representation of experimental proxies for reality versus actual real-world stimuli.
Acknowledgments
This work was supported by grants from the National Science Foundation (grant number 1632849 to J. C. S.), the National Eye Institute of the National Institutes of Health (NIH; grant number R01EY026701 to J. C. S.), and the National Institute of General Medical Sciences of the NIH (grant number P20 GM103650). We thank Katherine Breeding, Arunima Chakraborty, and Sarah Olin for assistance with data collection.
Reprint requests should be sent to Jacqueline C. Snow, Department of Psychology, University of Nevada, Reno, 1664 N. Virginia Street, Mail Stop 296, Reno, NV 89557, or via e-mail: [email protected].
Author Contributions
Grant T. Fairchild: Data curation; Formal analysis; Investigation; Methodology; Project administration; Resources; Software; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Francesco Marini: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Jacqueline C. Snow: Conceptualization; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Supervision; Visualization; Writing—Original draft; Writing—Review & editing.
Funding Information
Jacqueline C. Snow, National Eye Institute (https://dx.doi.org/10.13039/100000053), grant number: R01EY026701. Jacqueline C. Snow, Office of Integrative Activities (https://dx.doi.org/10.13039/100000106), grant number: 1632849. Institutional grant, National Institute of General Medical Sciences (https://dx.doi.org/10.13039/100000057), grant number: P20GM103650. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF or NIH.
Diversity in Citation Practices
A retrospective analysis of the citations in every article published in this journal from 2010 to 2020 has revealed a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .408, W(oman)/M = .335, M/W = .108, and W/W = .149, the comparable proportions for the articles that these authorship teams cited were M/M = .579, W/M = .243, M/W = .102, and W/W = .076 (Fulvio et al., JoCN, 33:1, pp. 3–7). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance. The authors of this article report its proportions of citations by gender category to be as follows: M/M = .406, W/M = .139, M/W = .158, and W/W = .297.
Author notes
Denotes co-first author contribution.