The cognitive and neural bases of visual perception are typically studied using pictures rather than real-world stimuli. Unlike pictures, real objects are actionable solids that can be manipulated with the hands. Recent evidence from human brain imaging suggests that neural responses to real objects differ from responses to pictures; however, little is known about the neural mechanisms that drive these differences. Here, we tested whether brain responses to real objects versus pictures are differentially modulated by the “in-the-moment” graspability of the stimulus. In human dorsal cortex, electroencephalographic responses show a “real object advantage” in the strength and duration of mu (μ) and low beta (β) rhythm desynchronization—well-known neural signatures of visuomotor action planning. We compared desynchronization for real tools versus closely matched pictures of the same objects, when the stimuli were positioned unoccluded versus behind a large transparent barrier that prevented immediate access to the stimuli. We found that, without the barrier in place, real objects elicited stronger μ and β desynchronization compared to pictures, both during stimulus presentation and after stimulus offset, replicating previous findings. Critically, however, with the barrier in place, this real object advantage was attenuated during the period of stimulus presentation, whereas the amplification in later periods remained. These results suggest that the “real object advantage” is driven initially by immediate actionability, whereas later differences perhaps reflect other, more inherent properties of real objects. The findings showcase how the use of richer multidimensional stimuli can provide a more complete and ecologically valid understanding of object vision.

The cognitive and neural mechanisms of object vision in humans have been studied predominantly using artificial proxies for reality, typically in the form of 2-D pictures of objects. Experimenters have relied on pictures as stimuli because they are convenient to obtain, easy to manipulate for lower-level parameters such as luminance and size, and straightforward to control temporally. Nevertheless, if the ultimate aim of this research is to understand naturalistic vision, it is crucial that we test whether the behaviors and underlying neural mechanisms triggered by artificial proxies for reality are comparable with those of their real-world counterparts (Snow & Culham, 2021).

There are philosophical and empirical reasons to expect that viewing pictures may engage different cognitive and neural processes than viewing real-world objects (Snow & Culham, 2021; Freud, Behrmann, & Snow, 2020; Erlikhman, Caplovitz, Gurariy, Medina, & Snow, 2018). Perhaps most importantly, unlike pictures, real objects are tangible solids that can be grasped and manipulated with the hands. Actionability—the potential to act meaningfully with real objects, such as tools—develops rapidly during the early years of life and is critical for survival (Simcock & DeLoache, 2008; Pierroutsakos & DeLoache, 2003). Although pictures of objects, especially tools, can be associated with actions, or concepts of actions (sometimes referred to as “affordances”), the pictures themselves are not inherently graspable and they cannot be used to manipulate the environment in the same way that real objects can.

Behavioral and neuroimaging studies have begun to compare real object versus picture processing while carefully controlling important stimulus attributes such as appearance and timing (Romero & Snow, 2019). Emerging evidence from these studies suggests that real objects are processed differently than their picture proxies, and these differences are evident across a range of cognitive domains. For example, compared to pictures, real objects have been shown to bias perception (Holler, Fabbri, & Snow, 2020; Romero, Compton, Yang, & Snow, 2018), capture attention (Gomez, Skiba, & Snow, 2018), bolster memory (Snow, Skiba, Coleman, & Berryhill, 2014), alter gaze patterns in infants (Sensoy, Culham, & Schwarzer, 2021), facilitate recognition in neuropsychological patients (Holler, Behrmann, & Snow, 2019; Farah, 2004; Turnbull, Driver, & McCarthy, 2004; Chainay & Humphreys, 2001; Riddoch & Humphreys, 1987; Ratcliff & Newcombe, 1982), and modulate higher-level cognitive processes such as valuation (Romero et al., 2018; Bushong, King, Camerer, & Rangel, 2010), social cognition (Risko, Laidlaw, Freeth, Foulsham, & Kingstone, 2012; Kingstone, 2009), and executive function (Beaucage, Skolney, Hewes, & Vongpaisal, 2020).

Stimulus format influences not only cognition but also goal-directed actions. For example, when adult observers are asked to grasp 2-D images of objects, grip aperture varies relative to the size of the stimulus, in agreement with the psychophysical principles of Weber's law (Ozana, Namdar, & Ganel, 2020; Ozana & Ganel, 2019; Hosang, Chan, Jazi, & Heath, 2016; Holmes & Heath, 2013). The same is not the case, however, for real objects for which grip aperture conforms to the absolute metrics of the stimulus, irrespective of its size, thus evading perceptual biases and producing more accurate and efficient visuomotor performance (Heath, Mulla, Holmes, & Smuskowitz, 2011; Ganel, Chajut, & Algom, 2008). Evidence from fMRI also shows that grasping of real objects leads to different patterns of brain activation compared to picture grasping (Freud et al., 2018).

Differences in cognition and action between real objects and pictures are, in fact, apparent early during development, and they appear to be closely related to children's sensorimotor experience with objects. For example, 7-month-old infants show a visual preference for real toys versus 2-D pictures of toys (Gerhard, Culham, & Schwarzer, 2016; DeLoache, Strauss, & Maynard, 1979), and this preference is related to the amount of time spent handling the object: The more infants tend to move their fingers over a toy during a manual exploration task, the stronger the real object preference (Gerhard, Culham, & Schwarzer, 2021). Although infants at 9 months old often try to grasp pictures of toys off the page (DeLoache, Pierroutsakos, Uttal, Rosengren, & Gottlieb, 1998), more realistic pictures tend to elicit more attempts at interaction than do less realistic pictures (Pierroutsakos & DeLoache, 2003), and as infants gain experience with pictures, they become less inclined to grasp depicted objects and more inclined to make communicative gestures, like pointing actions, toward them (Pierroutsakos & Troseth, 2003). Thus, sensorimotor interaction with real objects helps infants to learn and understand the differences between real objects and pictures, despite resemblances in their appearance (Gerhard et al., 2021; Sensoy et al., 2021; Adolph, Eppler, & Gibson, 1993; Gibson, 1988).

These unique effects of real objects (vs. pictures) on behavior and brain responses have sometimes been referred to as a “real-exposure effect” (Bushong et al., 2010) or “real object advantage” (Gerhard et al., 2021; Snow & Culham, 2021; Holler et al., 2019; Chainay & Humphreys, 2001). One potential mechanism for the real object advantage, which follows from the behavioral and neuroimaging evidence reviewed above, and which stems from the fact that real objects are actionable whereas pictures are not, is that real objects invoke visuomotor brain networks involved in automatic action planning more so than images do (Marini, Breeding, & Snow, 2019; Gomez & Snow, 2017; Gallivan, Cavina-Pratesi, & Culham, 2009). Previous studies have established that regions of the human dorsal cortex are specialized for processing pictures of action-relevant stimuli, such as graspable objects and tools, without the performance of any action (e.g., Matić, Op de Beeck, & Bracci, 2020; Cardellicchio, Sinigaglia, & Costantini, 2011; Konen & Kastner, 2008; Valyear, Cavina-Pratesi, Stiglick, & Culham, 2007; Creem-Regehr & Lee, 2005; Chao & Martin, 2000). The idea that real objects might invoke these networks more so than images do is supported by behavioral evidence from studies that have manipulated the actionability of real objects by placing a transparent barrier between the observer and the stimulus. For example, Gomez et al. (2018) found that real objects had a stronger influence on attention compared to matched 2-D and 3-D stereoscopic pictures of the same items, but only when the stimuli were accessible for grasping; the effect disappeared when the stimuli were presented out of reach or behind a transparent barrier. Similarly, imposition of a barrier eliminates the increased monetary value ascribed to real objects, such as snack foods and trinkets, versus their pictures (Bushong et al., 2010).

Although previous fMRI (Freud et al., 2018; Snow et al., 2011) and electroencephalography (EEG; Marini et al., 2019) studies have revealed differences in brain responses to real objects versus matched pictures, none of these studies explored whether brain responses are modulated by imposition of a barrier. EEG is an ideal technique to address this question because it can provide fine-grained information about the time course of cortical dynamics of networks recruited during automatic action planning, as measured by stimulus-driven electrical changes at the surface of the scalp (Makeig et al., 2002). In particular, the EEG signal can be decomposed to reveal frequency-specific changes associated with different cognitive processes (Başar, Başar-Eroğlu, Karakaş, & Schürmann, 1999; Klimesch, 1999), one of which is desynchronization of the mu (μ) rhythm (8–13 Hz)—a signal associated with the transformation of visual object information into action representations. Desynchronization, including that of α, μ, and β rhythms, is a correlate of activated cortical networks (Pfurtscheller, 2001) and is related to fMRI response amplitude (Laufs et al., 2003). The μ rhythm originates in the primary sensorimotor and premotor cortices and is recorded over central electrodes (Pfurtscheller, Neuper, Andrew, & Edlinger, 1997). Although the μ rhythm is most commonly defined as being concentrated around 8–13 Hz in the alpha band, it is sometimes characterized as having a second component (15–30 Hz) in the beta band (Angelini et al., 2018; Festante et al., 2018; Pineda, 2005). Event-related desynchronization (ERD) of both μ and low β rhythms is elicited during actual and observed actions with the hands (Festante et al., 2018; Bizovičar, Dreo, Koritnik, & Zidar, 2014; Zaepffel, Trachel, Kilavik, & Brochier, 2013; Hari, 2006; Pineda, 2005; Muthukumaraswamy & Johnson, 2004; Pfurtscheller et al., 1997). Importantly, μ desynchronization is also elicited when observers look at pictures of objects, such as tools, that are strongly associated with hand actions (Wamain, Gabrielli, & Coello, 2016; Suzuki, Noguchi, & Kakigi, 2014; Proverbio, 2012; Proverbio, Adorni, & D'Aniello, 2011).

Leveraging the previous EEG results showing μ rhythm desynchronization in response to pictures of tools, Marini et al. (2019) used EEG to measure cortical responses when participants made perceptual judgments about real tools versus matched pictures of the same items. In line with the prediction that real objects should invoke visuomotor brain networks involved in automatic action planning more so than pictures do, the authors found that on randomly interleaved trials in which real objects were shown, μ and β rhythm ERD was stronger and more sustained in comparison to trials in which pictures were displayed. Interestingly, the amplification in μ and β responses to real objects (vs. pictures) was apparent at distinct time points on each trial: One set of clusters emerged rapidly after stimulus onset and during the period of stimulus presentation, whereas subsequent clusters emerged after stimulus offset. This global temporal pattern in EEG signatures of the real object advantage led the authors to speculate as to whether the “early” and “late” effects reflected the contribution of different cortical mechanisms (Marini et al., 2019), but ultimately, the study left open the question of whether the real object advantage was because of actionability or some other difference between the real objects and pictures.

Here, advancing on the work of Marini et al. (2019), we tested whether the EEG signature of the real object advantage is modulated by the presence of a transparent barrier that prevents in-the-moment graspability of the stimuli. We recorded EEG while observers made effort-to-use judgments about everyday tools. The stimuli were presented either as real objects or as high-resolution 2-D pictures of the same items that were matched closely for retinal size, background, color, and illumination. The stimuli were presented on a custom-built rotating drum so that the order of the real objects and pictures was randomly interleaved across trials. Critically, in separate testing sessions, the stimuli were presented either unobstructed and within reach of the observer or behind a transparent barrier that was positioned between the observer and the stimulus. Importantly, whereas the barrier imposes an environmental constraint that disrupts in-the-moment graspability, other attributes such as the egocentric distance and visual appearance of the stimuli remain identical across conditions. Nevertheless, although the barrier prevents in-the-moment grasping of the (real) objects, it does not alter the fact that real objects are inherently graspable (whereas pictures are not).

On the basis of previous studies, we expected that looking at everyday tools would elicit robust event-related μ and low β rhythm desynchronization (Wamain et al., 2016; Suzuki et al., 2014; Proverbio, 2012; Proverbio et al., 2011) and that tools displayed as real objects would trigger stronger and more sustained ERD than do matched pictures of the same objects when there is no barrier in place (Marini et al., 2019). Critically, we predicted that, if the real object advantage arises because solids lend themselves to in-the-moment grasping whereas pictures do not, then imposition of a barrier should reduce or eliminate the stronger event-related μ and β desynchronization signals for real objects versus pictures.

Experimental Procedure

Participants

Thirty-two healthy, right-handed University of Nevada, Reno, students volunteered for the experiment. Data from eight participants could not be included in the final analyses either because of technical issues or because the participant did not return for the second session. This left 24 participants (nine men) to be included in the final analyses, which equaled the sample size used by Marini et al. (2019). Mean ± SD age of the participants was 21.4 ± 4.8 years (n = 23); one participant did not provide age demographics. All participants reported normal or corrected-to-normal vision, reported no history of neurological impairments, and provided written and oral informed consent as required by the University of Nevada, Reno, institutional review board.

Setup and Stimuli

The stimuli and display setup used in this study are identical to those used by Marini et al. (2019), with the exception of the introduction of a barrier condition, which is described in detail below.

The experimental stimuli consisted of 96 real-world objects and 96 2-D printed photographs of the same items (Figure 1A). The stimulus set included 16 types of garage tools and 16 types of kitchen tools, with three different exemplars of each. All stimuli were presented with the handle facing rightward. The photographs were printed in high resolution on matte paper and mounted on 10 × 14 in. matte black foam-core boards. Small magnets were fixed to the rear side of each real object and each foam-core board to enable them to be mounted to the presentation apparatus.

Figure 1.

(A) Photographs of one of the stimuli used in the experiment (basting brush), shown both as a picture (left) and a real object (right), in the No-Barrier (top) and Barrier (bottom) conditions. The stimulus is photographed as it appears when it is mounted in the presentation apparatus. Although the transparent barrier was visible to the participants, it did not alter the appearance of the stimuli placed behind it. (B) Photographs of a 2-D picture and real object (in the No-Barrier condition) taken from an oblique viewing angle to emphasize their 3-D differences; when viewed from front-on (as shown in A), the real objects and their corresponding picture stimuli were closely matched for apparent size, distance, color, illumination, and background. (C) Schematic showing trial sequence and timing of events. Liquid-crystal occlusion spectacles were used to control stimulus viewing time. After a variable-duration intertrial interval (ITI), an auditory tone signaled trial onset. After a delay (800–1600 msec), the glasses opened (transparent state) for 800 msec. The glasses then closed (opaque state), and after a delay (1200–1800 msec), a tone prompted the participant to make a response. The task was to rate verbally on a scale from 1 to 10 how effortful it would be to use the object according to its typical function.


The presentation apparatus was a custom-built four-sided rotating drum (22 × 14 in.) positioned on a small table (Figure 1B). The stimuli were affixed to opposite sides of the drum. This allowed experimenters to exchange a previously displayed stimulus (n − 1) on one side of the drum with a new stimulus (n + 1), while the participant viewed the current stimulus (n) on the opposite side. The side of the drum facing the participant was illuminated by striplight LEDs that were mounted around the inside periphery of a rectangular frame positioned in front of the drum.

In half of the sessions, a transparent barrier was interposed between the participant and the stimuli. The acrylic glass barrier material was selected to minimize glare and reflections. The barrier could be easily detached or reattached to the front of the frame housing the LEDs in between testing sessions. The barrier's outer frame measured 24.4 in. in height and 18.8 in. in width. The aperture within this frame, through which stimuli were visible, measured 20.8 in. in height and 13.9 in. in width.

Behavioral Paradigm

Participants completed the experiment across two separate sessions that differed in stimulus accessibility. One session consisted of an exact replication of Marini et al.'s (2019) study, which we refer to as the “No-Barrier” condition. In a separate session, which we refer to as the “Barrier” condition, the transparent acrylic barrier was placed between the participant and the stimuli mounted on the rotating drum. The Barrier and No-Barrier testing sessions took place on different days, with the order of sessions counterbalanced across participants. We did not point out the presence or absence of the barrier to participants. The stimuli, task, and experimental protocol were otherwise identical across sessions.

Participants were seated 16 in. away from the rotating drum with head and eye height aligned to the center of the front face of the drum. Participants wore earphones throughout the experiment to attenuate ambient noise. Stimulus presentation and trial timing were regulated using computer-controlled visual occlusion spectacles (PLATO, Translucent Technologies Inc.), which can be switched between “closed” (opaque) and “open” (transparent) states with millisecond accuracy (Figure 1C). After a high-pitched tone (300 msec, 880 Hz) and a variable delay (800–1600 msec), the spectacles opened, revealing the stimulus for 800 msec. After another variable delay (1200–1800 msec), a low-pitched tone (150 msec, 440 Hz) prompted participants to make a response. Participants were asked to rate verbally “How much physical effort would it take to use this specific object according to its normal function?” on a scale from 1 (not effortful; e.g., a teaspoon) to 10 (very effortful; e.g., a hand drill). There were 192 trials per testing session (96 real-world objects and 96 pictures, each presented once). Each session was composed of eight blocks of 24 trials, each with a short break in between blocks. Stimuli were presented in a randomized sequence within each block. Each session began with verbal and written instructions given to the participant, followed by four practice trials. Two to three experimenters were present during each testing session: One monitored the electrophysiological recordings and entered participants' responses, one triggered the stimulus presentation software and controlled the timing of stimulus presentation by operating the rotating drum, and one selected the stimuli for upcoming trials. Black curtains concealed the experimenters and their workspace from participants' view. Stimulus timing and the real-time encoding of events in the electrophysiological recordings were controlled using MATLAB R2016a (The Mathworks, Inc.) and Psychtoolbox 3.0.13 (Kleiner et al., 2007; Brainard, 1997). Each testing session, including EEG electrode preparation, lasted ∼2.5 hr.
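For concreteness, the sketch below illustrates the timing of a single trial in Psychtoolbox, assuming MATLAB and Psychtoolbox 3 as described above. The spectacle-control function (setSpectacles) is hypothetical and stands in for whatever hardware interface drives the PLATO goggles on the actual rig; the tone and delay parameters follow the values given in the text.

```matlab
% Minimal single-trial timing sketch (Psychtoolbox). setSpectacles is a
% hypothetical placeholder for the PLATO goggle-control interface.
Beeper(880, 0.5, 0.300);        % high-pitched trial-onset tone (880 Hz, 300 msec)
WaitSecs(0.8 + rand * 0.8);     % variable delay, 800-1600 msec
setSpectacles('open');          % goggles transparent: stimulus visible (hypothetical call)
WaitSecs(0.800);                % 800-msec viewing window
setSpectacles('closed');        % goggles opaque (hypothetical call)
WaitSecs(1.2 + rand * 0.6);     % variable delay, 1200-1800 msec
Beeper(440, 0.5, 0.150);        % low-pitched response cue (440 Hz, 150 msec)
```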

Finally, we collected data from an additional five participants (one participated in the main experiment, and four were new observers who did not participate in the main experiment) to test discrimination performance of the real object and picture stimuli when they were presented behind the transparent barrier. After four practice trials, each participant was presented with 20 stimuli behind the barrier: 10 real objects and 10 matched 2-D pictures. The stimuli were presented in random order using the same apparatus and setup as in the main experiment. The participant's task was to state verbally on each trial whether the stimulus was a real object or a picture. All five participants demonstrated 100% accuracy on this task (100 correct responses out of 100 trials).

Data Analysis

Behavioral Analysis

We conducted two analyses of the behavioral data. The first analysis examined the correlation between behavioral effort ratings for each tool on real versus picture trials. Separately for each accessibility (Barrier vs. No-Barrier) condition, effort ratings for each item in each display format were transformed into z scores, and a Pearson correlation analysis was conducted on the paired z scores. Next, we used a two-way repeated-measures within-participant ANOVA to compare effort ratings in each display format and accessibility condition.
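As a concrete illustration of these two analyses, the following MATLAB sketch assumes the Statistics and Machine Learning Toolbox; the variable names, matrix shapes, and column ordering are placeholders rather than the study's actual code.

```matlab
% (1) Item-level correlation between display formats within one accessibility
%     condition: z-score the per-item mean ratings, then correlate.
zReal = zscore(itemRatingsReal);     % 96 x 1 mean effort ratings per item (real objects)
zPict = zscore(itemRatingsPicture);  % 96 x 1 mean effort ratings per item (pictures)
[r, p] = corr(zReal, zPict);         % Pearson correlation on the paired z scores

% (2) 2 x 2 repeated-measures ANOVA across participants:
%     Display format (Real, Picture) x Accessibility (No-Barrier, Barrier).
%     ratings is a 24 x 4 matrix of participant means in the column order below.
t = array2table(ratings, 'VariableNames', {'Real_NB', 'Pict_NB', 'Real_B', 'Pict_B'});
within = table(categorical({'Real'; 'Picture'; 'Real'; 'Picture'}), ...
               categorical({'NoBarrier'; 'NoBarrier'; 'Barrier'; 'Barrier'}), ...
               'VariableNames', {'Format', 'Accessibility'});
rm = fitrm(t, 'Real_NB-Pict_B ~ 1', 'WithinDesign', within);
ranovatbl = ranova(rm, 'WithinModel', 'Format*Accessibility');
```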

EEG Recording and Preprocessing

EEG was recorded with a 128-channel Biosemi ActiveTwo system using 128 head electrodes plus four EOG electrodes and two (i.e., left and right) mastoid electrodes. All analyses were conducted using custom MATLAB scripts that used the EEGLAB (Version 14.1.2) environment (Delorme & Makeig, 2004) running on MATLAB R2017b. The recorded data were imported into EEGLAB, re-referenced to the average of the two mastoid electrodes, resampled at 250 Hz using a polyphase antialiasing filter, and bandpass filtered (1–100 Hz). Noisy channels were identified using the EEGLAB function clean_rawdata (flatline criterion: 5; channel correlation criterion: 0.85; line noise criterion: 4) and subsequently rejected. Epochs were created from −800 to 2000 msec relative to stimulus onset. Decomposition of the cleaned data by independent component analysis was conducted using AMICA (Leutheuser et al., 2013; Palmer, Kreutz-Delgado, & Makeig, 2011), and the resulting independent component (IC) processes were labeled using ICLabel (Pion-Tonachini, Kreutz-Delgado, & Makeig, 2019; see labeling.ucsd.edu). All ICs that were estimated by ICLabel to account for muscle, eye, heart, line artifacts, or unknown activity were rejected. ICs estimated to primarily account for brain activity, but including eye- or muscle-related activity with more than 15% likelihood, were also rejected. The continuous EEG in each electrode channel was then reconstructed by summing the scalp projections of the nonrejected ICs (i.e., those estimated to account for cortical brain activity).
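The following EEGLAB sketch outlines this preprocessing pipeline, assuming a raw Biosemi recording and the BIOSIG, clean_rawdata, and ICLabel plugins on the MATLAB path. File names, event codes, and mastoid channel indices are placeholders, pop_runica stands in for the AMICA decomposition used in the study, and the legacy clean_rawdata argument order is assumed.

```matlab
EEG = pop_biosig('sub01_session1.bdf');            % import raw Biosemi recording (placeholder file name)
EEG = pop_reref(EEG, [133 134]);                   % re-reference to the mastoid average (channel indices assumed)
EEG = pop_resample(EEG, 250);                      % resample to 250 Hz (antialiasing filter applied internally)
EEG = pop_eegfiltnew(EEG, 1, 100);                 % band-pass filter, 1-100 Hz
EEG = clean_rawdata(EEG, 5, -1, 0.85, 4, -1, -1);  % reject noisy channels (flatline 5 s, correlation .85, line noise 4)
EEG = pop_epoch(EEG, {'stimOnset'}, [-0.8 2.0]);   % epoch -800 to 2000 msec ('stimOnset' is an assumed event code)
EEG = pop_runica(EEG, 'icatype', 'runica');        % ICA decomposition (the study used AMICA instead)
EEG = iclabel(EEG);                                % classify ICs with ICLabel
cls = EEG.etc.ic_classification.ICLabel.classifications;  % nIC x 7 class probabilities
% Column order: Brain, Muscle, Eye, Heart, Line Noise, Channel Noise, Other.
% Reject ICs not classified as Brain, or with >15% muscle or eye likelihood.
[~, winner] = max(cls, [], 2);
bad = find(winner ~= 1 | cls(:, 2) > 0.15 | cls(:, 3) > 0.15);
EEG = pop_subcomp(EEG, bad, 0);                    % reconstruct channel data from the retained (brain) ICs
```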

Event-related Spectral Perturbation Power Analysis

The power of the event-related spectral perturbation (ERSP; Makeig, 1993) was computed using Morlet wavelets as implemented in the EEGLAB function newtimef (window size: 572 msec). The number of cycles in each wavelet was set to increase with frequency beginning with 3 cycles at the lowest frequency (5.9 Hz) up to 12.8 cycles at the highest frequency (50 Hz). To compare the sensorimotor μ rhythm (8–13 Hz) in response to the visual presentation of objects and pictures, we averaged baseline-corrected (time interval from −500 to 0 msec) ERSP power from a cluster of central electrodes (Biosemi Electrodes A1, A2, A3, B1, B2, C15, and C16), in accordance with previous studies measuring the sensorimotor μ rhythm in object perception tasks (Marini et al., 2019; Wamain, Sahaï, Decroix, Coello, & Kalénine, 2018; Wamain et al., 2016; Proverbio, 2012; Perry, Stein, & Bentin, 2011; Perry & Bentin, 2009; Pfurtscheller, Brunner, Schlögl, & Lopes da Silva, 2006). For all ERSP analyses, a divisive baseline was used, in line with a gain model (Grandchamp & Delorme, 2011). Single-trial ERSP power was averaged across electrodes first and then across objects. For illustration purposes, the 10 × log10 transformation was applied to the average of ERSP power across trials before the subtraction between real and picture conditions (the subtraction in log-space corresponds to the division between nonlog power values, and this method is therefore in keeping with our gain model approach). Statistical comparisons across display formats in the central electrode cluster were conducted using cluster-based permutation tests, separately for the Barrier and No-Barrier conditions (n = 10,000; intensity threshold for cluster formation α = .001, cluster size threshold α = .05).
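A simplified sketch of the ERSP computation for the central electrode cluster is shown below, assuming the epoched, IC-cleaned EEG structure from the preprocessing step. The wavelet-cycle expansion factor and the order of averaging (trial-averaged, per-channel ERSPs averaged across the cluster) are simplifications for illustration rather than an exact reproduction of the analysis.

```matlab
clusterLabels = {'A1', 'A2', 'A3', 'B1', 'B2', 'C15', 'C16'};
chanIdx = find(ismember({EEG.chanlocs.labels}, clusterLabels));
erspSum = 0;
for c = chanIdx
    chanData = reshape(EEG.data(c, :, :), 1, []);              % concatenate epochs for this channel
    [ersp, ~, ~, times, freqs] = newtimef(chanData, EEG.pnts, ...
        [EEG.xmin EEG.xmax] * 1000, EEG.srate, [3 0.8], ...    % cycles grow with frequency (expansion factor illustrative)
        'freqs', [5.9 50], ...                                 % frequency range analyzed
        'baseline', [-500 0], ...                              % divisive baseline window
        'plotersp', 'off', 'plotitc', 'off');
    erspSum = erspSum + ersp;
end
erspCluster = erspSum / numel(chanIdx);                        % average across the electrode cluster
% newtimef returns baseline-normalized power as 10*log10(power/baseline), so
% subtracting Real and Picture maps in this space corresponds to a ratio of
% the non-log power values, in keeping with the gain-model approach above.
```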

Behavioral Results: Real Objects Are Rated as More Effortful to Use than Pictures Irrespective of Reachability

Average effort-to-use ratings for each tool are displayed in Figure 2A, separately for the No-Barrier and Barrier conditions. As is evident from the figure, individual item ratings were evenly distributed from low-effort (e.g., fork, spoon) to high-effort (e.g., hammer, clamp) tools, and the order of the tools was similar in both accessibility conditions. Effort ratings for the items were strongly correlated for real objects and pictures, in both the Barrier (r = .99, p = .003) and No-Barrier (r = .99, p < .001) conditions. The slope of the least-squares best fit (Figure 2A, red line) was significantly lower than unity (Figure 2A, black line) in the Barrier condition (slope = .965), t(94) = 2.46, p = .016, indicating that higher-effort real tools were rated as being more effortful to use than their corresponding pictures; the slope was likewise lower than unity, but not significantly so, in the No-Barrier condition (slope = .990), t(94) = 0.69, p = .491.
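For reference, the slope-versus-unity comparison reported above can be computed as in the sketch below, which assumes the item-level z scores with real-object ratings as the predictor and picture ratings as the outcome; the axis assignment and variable names are illustrative.

```matlab
mdl   = fitlm(zRealItems, zPictureItems);     % least-squares fit across the 96 items
b     = mdl.Coefficients.Estimate(2);         % fitted slope
se    = mdl.Coefficients.SE(2);               % standard error of the slope
tStat = (b - 1) / se;                         % t statistic for the test against a slope of 1
pVal  = 2 * (1 - tcdf(abs(tStat), mdl.DFE));  % two-tailed p value on n - 2 degrees of freedom
```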

Figure 2.

(A) Subjective effort ratings for each tool as a function of display format, shown separately for the No-Barrier (left) and Barrier (right) conditions. Red line represents the least-squares best fit; black line shows the unity slope line. (B) Average effort ratings across participants, shown separately for each display format (real objects: red; pictures: blue) and accessibility condition (No-Barrier: left; Barrier: right). Squares represent group averages, error bars represent the standard error of the mean, and crosses represent mean values for individual participants.


Next, we analyzed the behavioral effort ratings at the level of participants. Average effort-to-use ratings for the real objects and pictures are displayed in Figure 2B, separately for each accessibility condition. Repeated-measures ANOVA with the factors of Display format (Real vs. Picture) and Accessibility (No-Barrier vs. Barrier) revealed a significant main effect of Display format, F(1, 23) = 6.00, p = .02, indicating that real objects were perceived overall as being more effortful to use than their pictures. There were no other significant main effects or interactions (all ps > .05).

Taken together, these behavioral results indicate that real objects were perceived to be more effortful to use than their corresponding pictures whether or not they were presented behind a barrier.

EEG Results: Stronger μ and β Desynchronization for Real Objects versus Pictures Is Attenuated When a Barrier Is Positioned between Observer and Stimulus

Our analysis of the EEG data focused on contrasting display-format-dependent changes in ERSP (Makeig, 1993) power within central electrodes over the parietal cortex. First, we compared ERSP power for real objects versus pictures during the No-Barrier session where the stimuli were accessible for grasping (Marini et al., 2019). Figure 3A (left) shows time–frequency log power of ERSP for real objects and pictures in the No-Barrier condition. Importantly, stimuli in both display formats elicited robust desynchronization in the μ, lower β, and upper β frequency bands, starting at ∼200 msec post-onset. Figure 3A (right) shows the difference in the average log power between real objects and pictures using the contrast [RealNB − PictureNB]; Table 1 provides a list of all significant clusters. For visualization purposes, the dashed line overlaid on the contrast shown in Figure 3A (right; also Figure 3B and C, right) demarcates the approximate boundary within which μ and β ERD was observed in the Real and Picture conditions, as shown in Figure 3A and B (left, dark blue region). Importantly, we found stronger desynchronization for real objects versus pictures in several time–frequency clusters in the μ frequency band; there were two distinct clusters during the stimulus presentation window (∼250–350 and ∼450–750 msec) and two clusters after stimulus offset (∼850–1000 and ∼1200–1350 msec), one of which (∼850–1000 msec) extended into the lower β (14–18 Hz) frequency band. The time course of stronger μ and lower β desynchronization for real objects versus pictures in our No-Barrier condition matches closely the findings of Marini et al. (2019), confirming that real objects trigger stronger and more prolonged motor preparation signals than do matched pictures of the same items when the stimuli are physically accessible—a real object advantage.

Figure 3.

Time–frequency log power of ERSP over central electrodes (shown on topographic maps in the upper right corner) averaged across all participants and objects. ERSP results are displayed in separate panels for the No-Barrier (A) and Barrier (B) conditions. Within each of these two panels, the graphs on the left display log power spectra for real objects (top) and pictures (bottom). Graphs on the right show the difference in the average log power between the display format conditions, using the contrast (Real − Picture). Solid black outlines demarcate areas of statistical significance. Dotted line shows approximate region where μ and low β desynchronization was observed in the log power spectra (shown in left plots). Vertical solid and dashed lines denote time of stimulus onset and offset, respectively. (C) Difference in ERSP for real objects versus pictures between the No-Barrier versus Barrier sessions, using the contrast [(RealNB − PictureNB) − (RealB − PictureB)].

Table 1.

Time–Frequency Clusters with Significant Differences in Response to the Presentation of Real Objects (Minus Pictures) in the No-Barrier Condition as Identified in the ERSP Analysis Illustrated in Figure 3A 

Cluster   Frequency Band   Starts at (msec)   Ends at (msec)   Lower Frequency Boundary (Hz)   Upper Frequency Boundary (Hz)
1         μ (D)            245                342              10.8                            13.8
2         μ (D)            450                742              10.3                            13.3
3         γ (S)            678                790              40.5                            50
4         μ/β (D)          870                1003             6.9                             20.2
5         β/γ (S)          1147               1183             28.6                            31.1
6         μ (D)            1191               1327             6.9                             10.3
7         β (S)            1411               1499             18.7                            23.7

Letters next to frequency bands indicate the direction of the effect (D = desynchronization; S = synchronization).

Next, we compared ERSP power for the same stimuli during the Barrier session. Figure 3B (left) shows the time–frequency log power of ERSP for real objects and pictures in the Barrier session; Figure 3B (right) shows the difference between real object and picture trials in the average log power using the contrast [RealB − PictureB]; Table 2 provides a list of all significant clusters. With the barrier in place, we observed significantly stronger μ rhythm ERD for real objects versus pictures late in the stimulus presentation window (∼650–800 msec) and in three clusters after trial offset (∼900–1300 msec), two of which extended slightly into the lower β band. There was no significant ERD or event-related synchronization in the other frequency bands.

Table 2.

Time–Frequency Clusters with Significant Differences in Response to the Presentation of Real Objects (Minus Pictures) in the Barrier Condition as Identified in the ERSP Analysis Illustrated in Figure 3B 

Cluster   Frequency Band   Starts at (msec)   Ends at (msec)   Lower Frequency Boundary (Hz)   Upper Frequency Boundary (Hz)
1         μ (D)            666                794              7.8                             12.3
2         μ (D)            898                1047             10.8                            14.8
3         μ (D)            1036               1115             7.8                             10.3
4         μ (D)            1115               1311             7.8                             15.2

Letters next to frequency bands indicate the direction of the effect (D = desynchronization; S = synchronization).

Finally, to compare directly the temporal dynamics between the two accessibility conditions, we computed the difference in ERSP for real objects versus pictures across the No-Barrier versus Barrier sessions, using the contrast [(RealNB − PictureNB) − (RealB − PictureB)]. The resulting time–frequency ERSP interaction map is shown in Figure 3C; a list of significant clusters is provided in Table 3. The y-axis color scale in Figure 3C reflects the direction of the difference in ERSP between RealNB − PictureNB (Figure 3A, right) and RealB − PictureB (Figure 3B, right), where blue regions reflect greater desynchronization in the No-Barrier (vs. Barrier) session and red areas reflect greater desynchronization in the Barrier (vs. No-Barrier) session. Importantly, for processes associated with μ and low β desynchronization, negative ERSP power change in Figure 3C reflects a stronger real object advantage without a barrier in place, whereas positive ERSP power indicates a stronger real object advantage with the barrier in place. It is evident from Figure 3C that the real object advantage was significantly greater in the No-Barrier versus Barrier session during the period of stimulus presentation as well as early after stimulus offset, as reflected by significant μ (∼450–550 msec) and low β (∼300–350, ∼550–650, and ∼1000–1050 msec) clusters. Later after stimulus offset, however, the real object advantage was comparatively stronger in the Barrier (vs. No-Barrier) session, as evinced by two significant clusters in the μ range (∼1050–1100 and ∼1150–1250 msec). The interaction contrast also revealed several positive clusters that were outside the global region of μ and low β desynchronization (Figure 3A and B, left; Figure 3C, dotted line), reflecting stronger synchronization for real objects versus pictures in the No-Barrier (vs. Barrier) condition (μ: ∼1550–1600 msec; lower β: ∼1425–1475 msec; gamma: 700–800 msec).
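To make the logic of this comparison concrete, the sketch below illustrates a cluster-based sign-permutation test applied to participant-level interaction maps, [(RealNB − PictureNB) − (RealB − PictureB)], arranged as a participants × frequencies × times array of cluster-averaged log-power values. The thresholds follow the Methods (cluster-forming α = .001, 10,000 permutations), but the specific implementation choices (sign-flipping, summed |t| as the cluster statistic, bwconncomp for cluster formation) are illustrative rather than a reproduction of the study's code.

```matlab
nPerm = 10000; alphaVox = 0.001;
nSubs = size(interactionMaps, 1);                                   % interactionMaps: nSubs x nFreqs x nTimes
tcrit = tinv(1 - alphaVox / 2, nSubs - 1);                          % cluster-forming threshold
tMap  = @(X) squeeze(mean(X, 1) ./ (std(X, 0, 1) ./ sqrt(nSubs)));  % one-sample t map across participants
obsT  = tMap(interactionMaps);
obs   = bwconncomp(abs(obsT) > tcrit);                              % contiguous suprathreshold clusters (Image Processing Toolbox)
obsMass = cellfun(@(ix) sum(abs(obsT(ix))), obs.PixelIdxList);      % cluster mass statistic
nullMax = zeros(nPerm, 1);
for p = 1:nPerm
    flips = sign(rand(nSubs, 1) - 0.5);        % randomly flip the sign of each participant's map
    permT = tMap(interactionMaps .* flips);    % implicit expansion over frequency x time
    cc    = bwconncomp(abs(permT) > tcrit);
    if cc.NumObjects > 0
        nullMax(p) = max(cellfun(@(ix) sum(abs(permT(ix))), cc.PixelIdxList));
    end
end
clusterP = arrayfun(@(m) mean(nullMax >= m), obsMass);  % clusters with clusterP < .05 are significant
```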

Table 3.

Time–Frequency Clusters with Significant Differences for the Interaction Term, Display Format × Accessibility [(RealNB − PictureNB) − (RealB − PictureB)], as Identified in the ERSP Analysis Illustrated in Figure 3C 

Cluster   Frequency Band   Starts at (msec)   Ends at (msec)   Lower Frequency Boundary (Hz)   Upper Frequency Boundary (Hz)
1         β (D)            281                334              16.8                            21.7
2         μ (D)            466                534              9.3                             13.3
3         β (D)            554                626              18.2                            22.2
4         γ (S)            694                754              41                              47.5
5         γ (S)            694                800              32.1                            40
6         β (D)            986                1055             19.7                            22.2
7         μ (S)            1055               1107             8.3                             9.8
8         μ (S)            1163               1235             11.3                            13.8
9         β (S)            1431               1471             19.7                            22.7
10        μ (S)            1559               1600             10.3                            12.3

Letters next to frequency bands indicate the direction of the effect (D = desynchronization; S = synchronization).

Taken together, these EEG results can be summarized as follows. Looking at everyday tools that are strongly associated with action automatically elicits robust μ and lower β ERD, whether the stimuli are displayed as real objects or pictures. However, there are differences in the strength of μ and lower β desynchronization across display formats over time, and these differences are modulated by whether or not the stimulus is physically accessible. In the No-Barrier condition, real objects elicited stronger and more prolonged μ desynchronization than did pictures throughout the period of stimulus presentation as well as early after stimulus offset. With a barrier in place, real objects only elicited stronger μ desynchronization than did pictures late in the period of stimulus presentation and after stimulus offset. Direct contrasts confirmed that the real object advantage was significantly stronger in the No-Barrier (vs. Barrier) condition during the window of stimulus presentation and after trial offset, although there was a late amplification in the Barrier condition (>1000 msec after stimulus onset).

Previous studies have begun to reveal that real objects elicit unique effects on cognition and behavior compared to pictures of objects, yet the underlying neural mechanisms for these differences are poorly understood. We measured desynchronization of μ and β rhythms over the human dorsal cortex to test whether neural signatures of automatic motor preparation for action are stronger and more sustained for real objects compared to pictures of objects and, critically, to examine whether this pattern is modulated by a barrier that disrupts the immediate graspability of the stimuli. Observers made verbal effort-to-use judgments about everyday tools that were presented within reach, either as real solid objects or as matched high-resolution 2-D pictures of the same items. In separate testing sessions, participants viewed the stimuli unobstructed or behind a large transparent barrier.

Analysis of the behavioral rating data revealed that real objects were perceived overall to be more effortful to use than their corresponding pictures, similar to Marini et al. (2019); interestingly, however, this was the case whether or not the stimuli were presented behind a barrier.

In the analysis of the EEG data, we first found that looking at both real tools and pictures of tools elicited robust ERD in the μ and low β frequency range, consistent with previous EEG studies (Marini et al., 2019; Wamain et al., 2016; Suzuki et al., 2014; Proverbio, 2012; Proverbio et al., 2011) and with other neuroimaging studies showing selective responses to actionable objects such as tools in the dorsal cortex, without the requirement to perform a manual grasping action (Matić et al., 2020; Proverbio, Azzari, & Adorni, 2013; Konen & Kastner, 2008; Valyear et al., 2007; Creem-Regehr & Lee, 2005; Chao & Martin, 2000). Second, when the stimuli were presented unobstructed and within reach of the observer, real objects triggered stronger and more sustained event-related μ and low β desynchronization than when the stimuli were presented as pictures. This real object advantage in EEG signatures of automatic action preparation was apparent both during the period of stimulus presentation and after stimulus offset, replicating the previously reported temporal patterns observed by Marini et al. (2019).

Critically, we predicted that if the real object advantage arises because solid objects lend themselves to in-the-moment grasping (whereas pictures do not), then this effect should be reduced or eliminated by imposition of the barrier. In line with this prediction, we found that when the stimuli appeared behind a barrier, the real object advantage was no longer apparent early during the window of stimulus presentation but began to emerge later in the trial near the time of stimulus offset. A direct contrast of desynchronization differences between real objects and pictures across the two accessibility conditions confirmed that, when the stimuli were presented unobstructed, the real object advantage was significantly stronger during the window of stimulus presentation and early after trial offset, compared to when the stimuli appeared behind a barrier. Taken together, these EEG findings demonstrate that a barrier manipulation has temporally distinct effects on cortical signatures of the real object advantage, whereby early differences are attenuated but more slowly evolving differences remain unaffected. The results are intriguing because they suggest that differences in brain responses to real objects and pictures may arise from distinct causal mechanisms that operate on different time scales.

Our EEG results support behavioral findings in human observers showing that the imposition of a barrier attenuates real object effects on selective attention (Gomez et al., 2018) and on valuation (Bushong et al., 2010). Yet, the present experiment reveals the nuances of the real object advantage that these studies have not, by showing that the barrier predominantly attenuates early (more so than later-evolving) temporal signatures of the effect. Although a barrier has been shown to modulate responses to images of graspable objects (Cardellicchio, Sinigaglia, & Costantini, 2013), the important finding here is that the barrier eliminates early differences in action-related cortical responses to real objects (vs. pictures) that are otherwise apparent when there is no barrier in place. Yet, we did not observe an effect of the barrier in the behavioral measure of effort-to-use perceptual ratings. Other studies have found that a barrier did not influence behavior, as evidenced by a failure to disrupt cross-modal integration in visuotactile extinction patients (Farnè, Demattè, & Làdavas, 2003) and in neurologically healthy observers (Kitagawa & Spence, 2005). However, in these cross-modal attention studies, the visual stimuli were finger movements made by the experimenter, or flashing LED lights (respectively), rather than graspable objects. It is possible that a barrier predominantly affects processing of action-related stimuli, such as tools. Consistent with this idea, Garrido-Vásquez and Schubö (2014) used a probe detection task to show that visual attention to 3-D objects in a virtual reality (VR) environment was allocated preferentially to a graspable object (a mug) versus a visually similar nongraspable object (a spiky cactus), but only when it was perceived to be within reachable space. However, this does not explain why we observed an effect of the barrier in early cortical EEG responses but not in measures of behavior. Given the temporally specific effect of the barrier in our EEG data, the nature of the task and the type of response may have a critical influence on display format and accessibility effects. In our task, participants were required to wait until well after stimulus offset to initiate their behavioral response (to avoid disrupting the EEG recordings), by which time the effect of the barrier on EEG responses had elapsed. In this respect, our EEG results highlight avenues for future research to evaluate whether a barrier has a more powerful effect on tasks that require immediate responses to objects (Gomez et al., 2018) than on tasks that involve a delay between stimulus presentation and response initiation.

Why Does a Transparent Barrier Modulate Early Neural Signatures of the “Real Object Advantage”?

The transparent barrier modulated the accessibility of the stimuli while holding constant their visual characteristics, including their egocentric distance and stereoscopic depth cues. Importantly, the barrier did not interfere with participants' ability to distinguish between the two display formats, as illustrated in Figure 1 and as we confirmed in a separate sample of observers (see Methods). Nor did the barrier influence observers' ability to recognize the tools, as evidenced by the fact that the onset of μ and low β desynchronization for real objects and pictures was comparable in the No-Barrier (Figure 3A, left) and Barrier (Figure 3B, left) conditions.

Combined with the findings from previous behavioral studies involving barriers, described above (Gomez et al., 2018; Bushong et al., 2010), the results of our EEG experiment are consistent with the notion that a barrier disrupts automatic planning of potential in-the-moment motor interactions with real objects. What are the underlying neural mechanisms that could give rise to this effect? In nonhuman primates, visually responsive “object-type” neurons in the anterior intraparietal (AIP) area of dorsal cortex code the shape of geometric solids that are fixated by the animal, within several hundred milliseconds after stimulus onset (Murata, Gallese, Luppino, Kaseda, & Sakata, 2000; Sakata et al., 1998; Sakata, Taira, Murata, & Mine, 1995). AIP in the macaque has strong connections with ventral premotor cortex (area F5; Borra et al., 2008), whose response to geometric solids is attenuated when a transparent barrier is positioned between the animal and the stimulus (Bonini, Maranesi, Livi, Fogassi, & Rizzolatti, 2014; Caggiano, Fogassi, Rizzolatti, Thier, & Casile, 2009). In humans, anatomo-functional connections have also been identified between AIP sulcus (the putative human homologue of monkey AIP) and ventral premotor area (the human homologue of area F5 in the monkey; Davare, Rothwell, & Lemon, 2010). Another region of human dorsal cortex, superior parieto-occipital cortex, automatically responds to reachable solid objects when no reach is performed (Cavina-Pratesi et al., 2006) and is more strongly activated when the solids lie within versus outside the observer's reach (Gallivan, McLean, & Culham, 2011; Cavina-Pratesi et al., 2010; Gallivan et al., 2009). In this context, our EEG findings lend support to an emerging neural framework wherein dorsal regions, possibly in conjunction with ventral premotor areas, represent objects in the current egocentric environment that are viable targets for immediate manual interaction.

An alternative interpretation of the effect of the barrier on EEG responses is that it disrupts processes associated with anticipating the somatomotor consequences of interacting with objects. Real objects provide mechanoreceptive and proprioceptive feedback (together known as haptic feedback) when the fingers come into contact with the surface of the object, whereas images of objects typically do not. Converging evidence from kinematic studies of object grasping suggests that real objects and pictures are processed differently primarily because of the availability of haptic feedback information at the time of the grasp. Specifically, when observers grasp line drawings, or even realistic photographs of 2-D images of objects, just-noticeable-difference (JND) scores at the time of peak grip aperture vary according to the size of the object, indicating that sensitivity to the change in size is not absolute but is relative to the size of the stimulus (consistent with Weber's law; Ozana et al., 2020; Ozana & Ganel, 2019; Holmes & Heath, 2013). However, when haptic feedback is provided at the end of a grasp to a 2-D stimulus, JNDs conform to the absolute size of the stimulus (violating Weber's law; Hosang et al., 2016), similar to the analytic JNDs observed for solid objects (Heath et al., 2011; Ganel et al., 2008). Similarly, JNDs for VR objects adhere to Weber's law, unless haptic feedback is provided at the time of the grasp, in which case JNDs violate Weber's law (Ozana et al., 2020; Ozana, Berman, & Ganel, 2018). Critically, haptic feedback has similar effects on the kinematics of real object grasping: When haptic feedback is denied by instructing participants to end their grasp above the object without actually touching it, JNDs adhere to Weber's law (Ozana & Ganel, 2019). However, if only partial tactile feedback is provided (by overlaying a transparent glass barrier over the top of the to-be-grasped real object so that the feedback is indirect and imprecise), analytic grasping patterns are restored (Ozana & Ganel, 2019). Taken together, these behavioral studies suggest that the cognitive processes that support grasping are governed by whether or not the action terminates with haptic feedback. Evidence from human fMRI suggests that such anticipatory computations may arise in the dorsal cortex. Freud et al. (2018) found that the left AIP sulcus, which is involved in visually guided grasping, differentiates both the format in which an object is presented (real vs. 2-D image) and the type of action performed on the object (i.e., reach vs. grasp). Importantly, these format- and task-selective responses in the dorsal cortex in Freud et al.'s (2018) study were not observed in other motor or somatosensory cortices, and they emerged during the “planning phase” on each trial before the grasping action was initiated, indicating that the representations did not arise from actual motor, proprioceptive, or somatosensory feedback. Rather, it seems that the representations reflected planning or anticipation of the impending grasping action. The authors speculated as to whether the visuomotor system generates a feedforward predictive model of planned actions that incorporates the constraints and tactile consequences of real object interaction (Freud et al., 2018; see also Säfström & Edin, 2008).

Taken together, therefore, the results from these behavioral and fMRI studies of object grasping raise the question of whether the barrier in our EEG study modulated cortical processes associated with anticipating the somatosensory consequences of interacting with the (real) objects. In our study, participants were never required to explicitly perform a grasping action toward, or to come into contact with, the stimuli, so they never explicitly anticipated somatosensory feedback. On the basis of the available evidence from behavioral kinematics reviewed above, the prediction would be that if there is no haptic feedback provided during the task, then the real objects and images should be processed similarly, and there should be no effect of the barrier manipulation—but this is not what we have observed in our EEG (or behavioral) results. The barrier in our study could have modulated anticipation of the potential for somatosensory feedback. After all, the mere expectation of haptic feedback is sufficient to modulate grasping kinematics toward real objects (Whitwell, Katz, Goodale, & Enns, 2020). Nevertheless, it is unclear whether expectations of haptic feedback are sufficient to modulate perception of real objects and pictures when the task does not explicitly involve grasping. For example, similar patterns are evident in studies that have used other experimental approaches, such as those that have examined breakthrough to awareness for objects that have been rendered unconscious using continuous flash suppression. Using this approach, real objects break through into awareness sooner than do colored 2-D photographs or computer images of objects, although there is no requirement for the observer to act upon or contact the stimuli (Mudrik & Korisky, 2020; Korisky, Hirschhorn, & Mudrik, 2019). Yet, somatosensory feedback can influence breakthrough: When an observer is asked to manually rotate a platform upon which is presented a highly realistic VR or augmented reality (AR) object, breakthrough to awareness of the virtual stimulus is faster when the visual rotation of the virtual object is coupled with the live sensorimotor act of rotating the platform, compared to when the motor action is paired with prerecorded visual information that is mismatched in movement timing (Suzuki, Schwartzman, Augusto, & Seth, 2019). A fruitful direction for future research will be to determine whether haptic feedback governs the processes that are brought to bear on real objects and images more so during action- versus perception-related tasks.

Nevertheless, our findings align with recent arguments that the dorsal cortex processes visual object attributes with respect to the constraints they impose for perception in the service of action—a view that contrasts with traditional frameworks that ascribe a more monolithic role for the dorsal cortex that focuses solely on action (Freud et al., 2020).

What Do More Slowly Evolving Neural, and Behavioral, Signatures of the “Real Object Advantage” Represent?

The barrier manipulation in our study attenuated early, but not later-evolving, desynchronization differences across display formats. This late EEG signature of the real object advantage mirrors the pattern we observed in the behavioral data, in which effort-to-use ratings were higher overall for real objects than for 2-D images but were also unaffected by the barrier. As we highlighted earlier, participants in our task were required to wait until after stimulus offset to initiate their behavioral responses—well after the effect of the barrier on EEG responses had expired. However, this raises the question of what the more slowly evolving neural and behavioral signatures of the real object advantage represent. Marini et al. (2019) provided persuasive evidence that the late EEG signature of the real object advantage did not reflect differences in stereoscopic cues conveyed by real objects versus 2-D pictures, by showing that early visual ERP amplitude differences, which are typically associated with stereopsis, did not modulate display-format-related cortical brain dynamics early or late in the event-related time courses. In addition, as we discussed earlier, it is unclear whether predictions about the somatosensory consequences of interacting with real objects can influence perception when the observer is not required to perform a manual grasp toward the stimulus.

The more slowly evolving cortical amplification in μ and β rhythm desynchronization for real objects versus pictures, and the overall behavioral differences across display formats in effort ratings, could reflect processing of inherent physical characteristics of real objects (whether or not they are grasped). After all, even when environmental constraints limit the accessibility of real objects, the objects themselves are fundamentally tangible solids that have a definite egocentric distance, real-world size, surface texture, weight, and compliance (unlike pictures, which lack these qualities). Real objects, when viewed with two eyes, convey unambiguous information about their egocentric distance, and therefore their physical size is known; for pictures, by contrast, the distance of the display surface is known, but the distance to the object depicted within the picture is not, and thus the size of the depicted object is ambiguous. Distance and size are important cues that guide actions toward objects as well as object perception. In neuropsychological patients with visual agnosia who may rely predominantly on object-related processes in the dorsal cortex (because of lesions of object areas in the ventral cortex), the display format and size of the stimulus can have profound effects on recognition. Although patients with agnosia have difficulty recognizing pictures of objects, they can show a surprising preservation in their ability to recognize real objects (Farah, 2004; Turnbull et al., 2004; Chainay & Humphreys, 2001; Riddoch & Humphreys, 1987; Ratcliff & Newcombe, 1982). Importantly, whether patients with agnosia show a real object advantage in recognition depends on the physical size of the stimulus (Holler et al., 2019); recognition is best when the object's physical size matches its typical real-world size, whereas performance is impaired (and similar to that for pictures) when the physical size is larger or smaller than the real-world size. Analogous effects of familiar size and display format are evident in looking behavior in infants (Sensoy et al., 2021). In nonhuman primates, size coding of solid objects has been observed in neural populations in the dorsal cortex (Murata et al., 2000; Sakata et al., 1995; Taira, Mine, Georgopoulos, Murata, & Sakata, 1990). These findings, together with our EEG results, support recent arguments that the dorsal cortex may be specialized for processing the shape and size of solid objects (Freud et al., 2020; Fabbri, Stubbs, Cusack, & Culham, 2016).

The notion that real objects are more likely than 2-D pictures to be processed with respect to enduring physical characteristics, such as size and weight, is further supported by behavioral evidence in adult observers. Holler et al. (2020) examined whether the spatial arrangements of objects in a sorting task were similar or different for stimuli presented in different display formats. Two-dimensional pictures were predominantly arranged with respect to their typical location (a semantic characteristic), whereas real objects were arranged according to a richer framework that incorporated both semantic and physical characteristics, such as real-world size and weight (Holler et al., 2020). In the study by Holler et al. (2020), however, the 2-D pictures were scaled to fit a 27-in. computer monitor, whereas in the current study, the 2-D pictures were matched in retinal size to their real-object counterparts. If the “late” EEG signature of the real object advantage observed here indeed reflects amplified processing of real objects (vs. pictures) because of the presence of physical size cues, it is noteworthy that the effects on cortical responses were apparent even though the pictures were matched in retinal size to the real objects. VR and AR stimuli, which can be arbitrarily scaled in size and are more similar to real objects than are 2-D pictures with respect to their inherent actionability, may provide a promising avenue for disentangling which object properties contribute to the late-evolving real object advantage. Oddly, however, AR stimuli are processed according to their anticipated weight, despite the fact that they have no mass (Holler et al., 2020). Object mass appears to be represented alongside other physical variables in the frontoparietal cortex, which anticipates the dynamics of objects automatically and independently of the ventral cortex (Schwettmann, Tenenbaum, & Kanwisher, 2019). The extent to which such representations remain invariant or are malleable across changes in stimulus format remains an important question for future research (Fairchild & Snow, 2020).

Previous studies of the anatomical substrates of the μ rhythm suggest that μ rhythm signals in the range of ∼8–13 Hz originate from the primary somatosensory cortex for the hand, whereas the concomitant signals in the β range originate from the primary motor cortex (Hari & Salmelin, 1997; Salmelin, Hämäläinen, Kajola, & Hari, 1995). Both μ and β ERD are associated with the observation and planning of manual action, but β ERD is especially modulated by the egocentric position of the stimulus (Angelini et al., 2018). Indeed, we found that introduction of the barrier (which precludes manual interaction but does not change objects' tactile properties) seemed to reduce the observed β desynchronization more than the μ desynchronization. Therefore, it may be that the higher-frequency ERD differences we observed for real objects (vs. pictures) originate in the primary motor cortex and reflect real objects' potential for manual interaction, whereas the lower-frequency ERD differences originate in the somatosensory cortex and reflect real objects' anticipated haptic properties. Future studies could use source localization to test whether different cortical regions are responsible for different frequency components of the observed ERD. If the observed μ and β ERD do in fact originate from the primary somatosensory and primary motor cortices, respectively, this could help to elucidate the contributions to the real object advantage made by real objects' potential for manual interaction versus their potential for haptic feedback. Source localization could also reveal possible lateralization of the effects reported here. After all, left-lateralized biases have been demonstrated not only for μ desynchronization with action-related stimuli such as tools (Proverbio, 2012) and actionable real objects (Marini et al., 2019), but also in ERP, fMRI, and transcranial magnetic stimulation studies (Cardellicchio et al., 2011, 2013; Proverbio et al., 2011; Creem-Regehr & Lee, 2005). Comparing lateralization of the different frequency components (and whether any such lateralization interacts with a barrier manipulation) could help to clarify the ways in which actionability and haptic features may both contribute to the real object advantage under different conditions of accessibility, at different temporal intervals, and in different frequency bands.
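For readers less familiar with this measure, the sketch below illustrates one conventional way of quantifying band-limited ERD of the kind discussed here, that is, percent band power change relative to a prestimulus baseline, in the spirit of Pfurtscheller's ERD/ERS measure. It is a minimal Python illustration, not the analysis pipeline used in this study; the sampling rate, the μ (8–13 Hz) and low-β (here taken as 13–25 Hz) band edges, the baseline window, and the simulated data are all assumptions made for the example.

    # Minimal sketch of band-limited ERD/ERS computation (illustrative only;
    # not the pipeline used in this study). Assumed: sampling rate, band edges,
    # baseline window, and an epochs array of shape (trials x samples).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 1000.0  # assumed sampling rate in Hz

    def band_power_envelope(epochs, low, high, fs=FS, order=4):
        """Band-pass filter epochs (trials x samples) and return instantaneous power."""
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, epochs, axis=-1)
        return np.abs(hilbert(filtered, axis=-1)) ** 2

    def erd_percent(epochs, band, baseline_idx, fs=FS):
        """ERD/ERS time course: percent power change relative to a prestimulus baseline.
        Negative values indicate desynchronization (a power decrease)."""
        power = band_power_envelope(epochs, *band, fs=fs)   # trials x samples
        mean_power = power.mean(axis=0)                     # average over trials
        baseline = mean_power[baseline_idx].mean()          # mean baseline power
        return 100.0 * (mean_power - baseline) / baseline

    # Example with simulated data: 40 trials, 2-s epochs, baseline = first 500 ms.
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((40, int(2 * FS)))
    baseline_idx = slice(0, int(0.5 * FS))
    mu_erd = erd_percent(epochs, (8.0, 13.0), baseline_idx)
    beta_erd = erd_percent(epochs, (13.0, 25.0), baseline_idx)

Negative values in the resulting time courses indicate desynchronization; contrasting the μ and β curves across display formats and barrier conditions corresponds to the comparisons described above.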

Conclusion

Taken together, our findings suggest that real objects drive dorsal visuomotor brain networks involved in automatic action planning more strongly than pictures of objects do because of the potential they offer for immediate interaction. Nevertheless, whereas in-the-moment graspability accounts for early EEG signatures of the real object advantage, analogous signatures at later time points may reflect processing of inherent characteristics of real objects that distinguish them from pictures, such as their real-world size, weight, or tactile features. Identifying the characteristics of real objects that perpetuate behavioral and neural signatures of the real object advantage represents an important avenue for future research. These findings underscore how an “immersive neuroscience” approach to cognitive neuroscience (Snow & Culham, 2021) can reveal surprising nuances in perception and representation of experimental proxies for reality versus actual real-world stimuli.

This work was supported by grants from the National Science Foundation (grant number 1632849 to J. C. S.), the National Eye Institute of the National Institutes of Health (NIH; grant number R01EY026701 to J. C. S.), and the National Institute of General Medical Sciences of the NIH (grant number P20 GM103650). We thank Katherine Breeding, Arunima Chakraborty, and Sarah Olin for assistance with data collection.

Reprint requests should be sent to Jacqueline C. Snow, Department of Psychology, University of Nevada, Reno, 1664 N. Virginia Street, Mail Stop 296, Reno, NV 89557, or via e-mail: [email protected].

Grant T. Fairchild: Data curation; Formal analysis; Investigation; Methodology; Project administration; Resources; Software; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Francesco Marini: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Jacqueline C. Snow: Conceptualization; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Supervision; Visualization; Writing—Original draft; Writing—Review & editing.

Jacqueline C. Snow, National Eye Institute (https://dx.doi.org/10.13039/100000053), grant number: R01EY026701. Jacqueline C. Snow, Office of Integrative Activities (https://dx.doi.org/10.13039/100000106), grant number: 1632849. Institutional grant, National Institute of General Medical Sciences (https://dx.doi.org/10.13039/100000057), grant number: P20GM103650. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF or NIH.

A retrospective analysis of the citations in every article published in this journal from 2010 to 2020 has revealed a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .408, W(oman)/M = .335, M/W = .108, and W/W = .149, the comparable proportions for the articles that these authorship teams cited were M/M = .579, W/M = .243, M/W = .102, and W/W = .076 (Fulvio et al., JoCN, 33:1, pp. 3–7). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance. The authors of this article report its proportions of citations by gender category to be as follows: M/M = .406, W/M = .139, M/W = .158, and W/W = .297.

References

Adolph, K. E., Eppler, M. A., & Gibson, E. J. (1993). Development of perception of affordances. Advances in Infancy Research, 8, 51–98.
Angelini, M., Fabbri-Destro, M., Lopomo, N. F., Gobbo, M., Rizzolatti, G., & Avanzini, P. (2018). Perspective-dependent reactivity of sensorimotor mu rhythm in alpha and beta ranges during action observation: An EEG study. Scientific Reports, 8, 12429.
Başar, E., Başar-Eroğlu, C., Karakaş, S., & Schürmann, M. (1999). Are cognitive processes manifested in event-related gamma, alpha, theta and delta oscillations in the EEG? Neuroscience Letters, 259, 165–168.
Beaucage, N., Skolney, J., Hewes, J., & Vongpaisal, T. (2020). Multisensory stimuli enhance 3-year-old children's executive function: A three-dimensional object version of the standard Dimensional Change Card Sort. Journal of Experimental Child Psychology, 189, 104694.
Bizovičar, N., Dreo, J., Koritnik, B., & Zidar, J. (2014). Decreased movement-related beta desynchronization and impaired post-movement beta rebound in amyotrophic lateral sclerosis. Clinical Neurophysiology, 125, 1689–1699.
Bonini, L., Maranesi, M., Livi, A., Fogassi, L., & Rizzolatti, G. (2014). Space-dependent representation of objects and other's action in monkey ventral premotor grasping neurons. Journal of Neuroscience, 34, 4108–4119.
Borra, E., Belmalih, A., Calzavara, R., Gerbella, M., Murata, A., Rozzi, S., et al. (2008). Cortical connections of the macaque anterior intraparietal (AIP) area. Cerebral Cortex, 18, 1094–1111.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Bushong, B., King, L. M., Camerer, C. F., & Rangel, A. (2010). Pavlovian processes in consumer choice: The physical presence of a good increases willingness-to-pay. American Economic Review, 100, 1556–1571.
Caggiano, V., Fogassi, L., Rizzolatti, G., Thier, P., & Casile, A. (2009). Mirror neurons differentially encode the peripersonal and extrapersonal space of monkeys. Science, 324, 403–406.
Cardellicchio, P., Sinigaglia, C., & Costantini, M. (2011). The space of affordances: A TMS study. Neuropsychologia, 49, 1369–1372.
Cardellicchio, P., Sinigaglia, C., & Costantini, M. (2013). Grasping affordances with the other's hand: A TMS study. Social Cognitive and Affective Neuroscience, 8, 455–459.
Cavina-Pratesi, C., Galletti, C., Fattori, P., Quinlan, D. J., Goodale, M. A., & Culham, J. C. (2006). Event-related fMRI reveals a dissociation in the parietal lobes between transport and grip components in reach-to-grasp movements. Society for Neuroscience Abstracts, 32, 307.12.
Cavina-Pratesi, C., Monaco, S., Fattori, P., Galletti, C., McAdam, T. D., Quinlan, D. J., et al. (2010). Functional magnetic resonance imaging reveals the neural substrates of arm transport and grip formation in reach-to-grasp actions in humans. Journal of Neuroscience, 30, 10306–10323.
Chainay, H., & Humphreys, G. W. (2001). The real-object advantage in agnosia: Evidence for a role of surface and depth information in object recognition. Cognitive Neuropsychology, 18, 175–191.
Chao, L. L., & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. Neuroimage, 12, 478–484.
Creem-Regehr, S. H., & Lee, J. N. (2005). Neural representations of graspable objects: Are tools special? Cognitive Brain Research, 22, 457–469.
Davare, M., Rothwell, J. C., & Lemon, R. N. (2010). Causal connectivity between the human anterior intraparietal area and premotor cortex during grasp. Current Biology, 20, 176–181.
DeLoache, J. S., Pierroutsakos, S. L., Uttal, D. H., Rosengren, K. S., & Gottlieb, A. (1998). Grasping the nature of pictures. Psychological Science, 9, 205–210.
DeLoache, J. S., Strauss, M. S., & Maynard, J. (1979). Picture perception in infancy. Infant Behavior and Development, 2, 77–89.
Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134, 9–21.
Erlikhman, G., Caplovitz, G. P., Gurariy, G., Medina, J., & Snow, J. C. (2018). Towards a unified perspective of object shape and motion processing in human dorsal cortex. Consciousness and Cognition, 64, 106–120.
Fabbri, S., Stubbs, K. M., Cusack, R., & Culham, J. C. (2016). Disentangling representations of object and grasp properties in the human brain. Journal of Neuroscience, 36, 7648–7662.
Fairchild, G., & Snow, J. C. (2020). Physical inference: How the brain represents mass. eLife, 9, e54373.
Farah, M. J. (2004). Visual agnosia (2nd ed.). Cambridge, MA: MIT Press.
Farnè, A., Demattè, M. L., & Làdavas, E. (2003). Beyond the window: Multisensory representation of peripersonal space across a transparent barrier. International Journal of Psychophysiology, 50, 51–61.
Festante, F., Vanderwert, R. E., Sclafani, V., Paukner, A., Simpson, E. A., Suomi, S. J., et al. (2018). EEG beta desynchronization during hand goal-directed action observation in newborn monkeys and its relation to the emergence of hand motor skills. Developmental Cognitive Neuroscience, 30, 142–149.
Freud, E., Behrmann, M., & Snow, J. C. (2020). What does dorsal cortex contribute to perception? Open Mind: Discoveries in Cognitive Science, 4, 40–56.
Freud, E., Macdonald, S. N., Chen, J., Quinlan, D. J., Goodale, M. A., & Culham, J. C. (2018). Getting a grip on reality: Grasping movements directed to real objects and images rely on dissociable neural representations. Cortex, 98, 34–48.
Gallivan, J. P., Cavina-Pratesi, C., & Culham, J. C. (2009). Is that within reach? fMRI reveals that the human superior parieto-occipital cortex encodes objects reachable by the hand. Journal of Neuroscience, 29, 4381–4391.
Gallivan, J. P., McLean, A., & Culham, J. C. (2011). Neuroimaging reveals enhanced activation in a reach-selective brain area for objects located within participants' typical hand workspaces. Neuropsychologia, 49, 3710–3721.
Ganel, T., Chajut, E., & Algom, D. (2008). Visual coding for action violates fundamental psychophysical principles. Current Biology, 18, R599–R601.
Garrido-Vásquez, P., & Schubö, A. (2014). Modulation of visual attention by object affordance. Frontiers in Psychology, 5, 59.
Gerhard, T. M., Culham, J. C., & Schwarzer, G. (2016). Distinct visual processing of real objects and pictures of those objects in 7- to 9-month-old infants. Frontiers in Psychology, 7, 827.
Gerhard, T. M., Culham, J. C., & Schwarzer, G. (2021). Manual exploration of objects is related to 7-month-old infants' visual preference for real objects. Infant Behavior and Development, 62, 101512.
Gibson, E. J. (1988). Exploratory behavior in the development of perceiving, acting, and the acquiring of knowledge. Annual Review of Psychology, 39, 1–42.
Gomez, M. A., Skiba, R. M., & Snow, J. C. (2018). Graspable objects grab attention more than images do. Psychological Science, 29, 206–218.
Gomez, M. A., & Snow, J. C. (2017). Action properties of object images facilitate visual search. Journal of Experimental Psychology: Human Perception and Performance, 43, 1115–1124.
Grandchamp, R., & Delorme, A. (2011). Single-trial normalization for event-related spectral decomposition reduces sensitivity to noisy trials. Frontiers in Psychology, 2, 236.
Hari, R. (2006). Action–perception connection and the cortical mu rhythm. Progress in Brain Research, 159, 253–260.
Hari, R., & Salmelin, R. (1997). Human cortical oscillations: A neuromagnetic view through the skull. Trends in Neurosciences, 20, 44–49.
Heath, M., Mulla, A., Holmes, S. A., & Smuskowitz, L. R. (2011). The visual coding of grip aperture shows an early but not late adherence to Weber's law. Neuroscience Letters, 490, 200–204.
Holler, D. E., Behrmann, M., & Snow, J. C. (2019). Real-world size coding of solid objects, but not 2-D or 3-D images, in visual agnosia patients with bilateral ventral lesions. Cortex, 119, 555–568.
Holler, D. E., Fabbri, S., & Snow, J. C. (2020). Object responses are highly malleable, rather than invariant, with changes in object appearance. Scientific Reports, 10, 4654.
Holmes, S. A., & Heath, M. (2013). Goal-directed grasping: The dimensional properties of an object influence the nature of the visual information mediating aperture shaping. Brain and Cognition, 82, 18–24.
Hosang, S., Chan, J., Jazi, S. D., & Heath, M. (2016). Grasping a 2D object: Terminal haptic feedback supports an absolute visuo-haptic calibration. Experimental Brain Research, 234, 945–954.
Kingstone, A. (2009). Taking a real look at social attention. Current Opinion in Neurobiology, 19, 52–56.
Kitagawa, N., & Spence, C. (2005). Investigating the effect of a transparent barrier on the crossmodal congruency effect. Experimental Brain Research, 161, 62–71.
Kleiner, M., Brainard, D., Pelli, D., Ingling, A., Murray, R., & Broussard, C. (2007). What's new in Psychtoolbox-3? Perception, 36, 1–16.
Klimesch, W. (1999). EEG alpha and theta oscillations reflect cognitive and memory performance: A review and analysis. Brain Research Reviews, 29, 169–195.
Konen, C. S., & Kastner, S. (2008). Two hierarchically organized neural systems for object information in human visual cortex. Nature Neuroscience, 11, 224–231.
Korisky, U., Hirschhorn, R., & Mudrik, L. (2019). “Real-life” continuous flash suppression (CFS)-CFS with real-world objects using augmented reality goggles. Behavior Research Methods, 51, 2827–2839.
Laufs, H., Kleinschmidt, A., Beyerle, A., Eger, E., Salek-Haddadi, A., Preibisch, C., et al. (2003). EEG-correlated fMRI of human alpha activity. Neuroimage, 19, 1463–1476.
Leutheuser, H., Gabsteiger, F., Hebenstreit, F., Reis, P., Lochmann, M., & Eskofier, B. (2013). Comparison of the AMICA and the InfoMax algorithm for the reduction of electromyogenic artifacts in EEG data. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 6804–6807). Osaka, Japan: IEEE.
Makeig, S. (1993). Auditory event-related dynamics of the EEG spectrum and effects of exposure to tones. Electroencephalography and Clinical Neurophysiology, 86, 283–293.
Makeig, S., Westerfield, M., Jung, T.-P., Enghoff, S., Townsend, J., Courchesne, E., et al. (2002). Dynamic brain sources of visual evoked responses. Science, 295, 690–694.
Marini, F., Breeding, K. A., & Snow, J. C. (2019). Distinct visuo-motor brain dynamics for real-world objects versus planar images. Neuroimage, 195, 232–242.
Matić, K., Op de Beeck, H., & Bracci, S. (2020). It's not all about looks: The role of object shape in parietal representations of manual tools. Cortex, 133, 358–370.
Mudrik, L., & Korisky, U. (2020). From 2D to 3D: Enhanced access to human conscious awareness. PsyArXiv.
Murata, A., Gallese, V., Luppino, G., Kaseda, M., & Sakata, H. (2000). Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. Journal of Neurophysiology, 83, 2580–2601.
Muthukumaraswamy, S. D., & Johnson, B. W. (2004). Primary motor cortex activation during action observation revealed by wavelet analysis of the EEG. Clinical Neurophysiology, 115, 1760–1766.
Ozana, A., Berman, S., & Ganel, T. (2018). Grasping trajectories in a virtual environment adhere to Weber's law. Experimental Brain Research, 236, 1775–1787.
Ozana, A., & Ganel, T. (2019). Weber's law in 2D and 3D grasping. Psychological Research, 83, 977–988.
Ozana, A., Namdar, G., & Ganel, T. (2020). Active visuomotor interactions with virtual objects on touchscreens adhere to Weber's law. Psychological Research, 84, 2144–2156.
Palmer, J. A., Kreutz-Delgado, K., & Makeig, S. (2011). AMICA: An adaptive mixture of independent component analyzers with shared components (Technical report). San Diego, CA: Swartz Center for Computational Neuroscience, University of California San Diego.
Perry, A., & Bentin, S. (2009). Mirror activity in the human brain while observing hand movements: A comparison between EEG desynchronization in the μ-range and previous fMRI results. Brain Research, 1282, 126–132.
Perry, A., Stein, L., & Bentin, S. (2011). Motor and attentional mechanisms involved in social interaction—Evidence from mu and alpha EEG suppression. Neuroimage, 58, 895–904.
Pfurtscheller, G. (2001). Functional brain imaging based on ERD/ERS. Vision Research, 41, 1257–1260.
Pfurtscheller, G., Brunner, C., Schlögl, A., & Lopes da Silva, F. H. (2006). Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks. Neuroimage, 31, 153–159.
Pfurtscheller, G., Neuper, C., Andrew, C., & Edlinger, G. (1997). Foot and hand area mu rhythms. International Journal of Psychophysiology, 26, 121–135.
Pierroutsakos, S. L., & DeLoache, J. S. (2003). Infants' manual exploration of pictorial objects varying in realism. Infancy, 4, 141–156.
Pierroutsakos, S. L., & Troseth, G. L. (2003). Video verite: Infants' manual investigation of objects on video. Infant Behavior and Development, 26, 183–199.
Pineda, J. A. (2005). The functional significance of mu rhythms: Translating “seeing” and “hearing” into “doing.” Brain Research Reviews, 50, 57–68.
Pion-Tonachini, L., Kreutz-Delgado, K., & Makeig, S. (2019). ICLabel: An automated electroencephalographic independent component classifier, dataset, and website. Neuroimage, 198, 181–197.
Proverbio, A. M. (2012). Tool perception suppresses 10–12 Hz μ rhythm of EEG over the somatosensory area. Biological Psychology, 91, 1–7.
Proverbio, A. M., Adorni, R., & D'Aniello, G. E. (2011). 250 ms to code for action affordance during observation of manipulable objects. Neuropsychologia, 49, 2711–2717.
Proverbio, A. M., Azzari, R., & Adorni, R. (2013). Is there a left hemispheric asymmetry for tool affordance processing? Neuropsychologia, 51, 2690–2701.
Ratcliff, G., & Newcombe, F. (1982). Object recognition: Some deductions from the clinical evidence. In A. E. Ellis (Ed.), Normality and pathology in cognitive functions (pp. 147–171). London: Academic Press.
Riddoch, M. J., & Humphreys, G. W. (1987). A case of integrative visual agnosia. Brain, 110, 1431–1462.
Risko, E. F., Laidlaw, K. E. W., Freeth, M., Foulsham, T., & Kingstone, A. (2012). Social attention with real versus reel stimuli: Toward an empirical approach to concerns about ecological validity. Frontiers in Human Neuroscience, 6, 143.
Romero, C. A., Compton, M. T., Yang, Y., & Snow, J. C. (2018). The real deal: Willingness-to-pay and satiety expectations are greater for real foods versus their images. Cortex, 107, 78–91.
Romero, C. A., & Snow, J. C. (2019). Methods for presenting real-world objects under controlled laboratory conditions. Journal of Visualized Experiments, e59762.
Säfström, D., & Edin, B. B. (2008). Prediction of object contact during grasping. Experimental Brain Research, 190, 265–277.
Sakata, H., Taira, M., Kusunoki, M., Murata, A., Tanaka, Y., & Tsutsui, K. (1998). Neural coding of 3D features of objects for hand action in the parietal cortex of the monkey. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 353, 1363–1373.
Sakata, H., Taira, M., Murata, A., & Mine, S. (1995). Neural mechanisms of visual guidance of hand action in the parietal cortex of the monkey. Cerebral Cortex, 5, 429–438.
Salmelin, R., Hämäläinen, M., Kajola, M., & Hari, R. (1995). Functional segregation of movement-related rhythmic activity in the human brain. Neuroimage, 2, 237–243.
Schwettmann, S., Tenenbaum, J. B., & Kanwisher, N. (2019). Invariant representations of mass in the human brain. eLife, 8, e46619.
Sensoy, Ö., Culham, J. C., & Schwarzer, G. (2021). The advantage of real objects over matched pictures in infants' processing of the familiar size of objects. Infant and Child Development, e2234.
Simcock, G., & DeLoache, J. S. (2008). The effect of repetition on infants' imitation from picture books varying in iconicity. Infancy, 13, 687–697.
Snow, J. C., & Culham, J. C. (2021). The treachery of images: How realism influences brain and behavior. Trends in Cognitive Sciences, 25, 506–519.
Snow, J. C., Pettypiece, C. E., McAdam, T. D., McLean, A. D., Stroman, P. W., Goodale, M. A., et al. (2011). Bringing the real world into the fMRI scanner: Repetition effects for pictures versus real objects. Scientific Reports, 1, 130.
Snow, J. C., Skiba, R. M., Coleman, T. L., & Berryhill, M. E. (2014). Real-world objects are more memorable than photographs of objects. Frontiers in Human Neuroscience, 8, 837.
Suzuki, K., Schwartzman, D. J., Augusto, R., & Seth, A. K. (2019). Sensorimotor contingency modulates breakthrough of virtual 3D objects during a breaking continuous flash suppression paradigm. Cognition, 187, 95–107.
Suzuki, M., Noguchi, Y., & Kakigi, R. (2014). Temporal dynamics of neural activity underlying unconscious processing of manipulable objects. Cortex, 50, 100–114.
Taira, M., Mine, S., Georgopoulos, A. P., Murata, A., & Sakata, H. (1990). Parietal cortex neurons of the monkey related to the visual guidance of hand movement. Experimental Brain Research, 83, 29–36.
Turnbull, O. H., Driver, J., & McCarthy, R. A. (2004). 2D but not 3D: Pictorial-depth deficits in a case of visual agnosia. Cortex, 40, 723–738.
Valyear, K. F., Cavina-Pratesi, C., Stiglick, A. J., & Culham, J. C. (2007). Does tool-related fMRI activity within the intraparietal sulcus reflect the plan to grasp? Neuroimage, 36(Suppl. 2), T94–T108.
Wamain, Y., Gabrielli, F., & Coello, Y. (2016). EEG μ rhythm in virtual reality reveals that motor coding of visual objects in peripersonal space is task dependent. Cortex, 74, 20–30.
Wamain, Y., Sahaï, A., Decroix, J., Coello, Y., & Kalénine, S. (2018). Conflict between gesture representations extinguishes μ rhythm desynchronization during manipulable object perception: An EEG study. Biological Psychology, 132, 202–211.
Whitwell, R. L., Katz, N. J., Goodale, M. A., & Enns, J. T. (2020). The role of haptic expectations in reaching to grasp: From pantomime to natural grasps and back again. Frontiers in Psychology, 11, 588428.
Zaepffel, M., Trachel, R., Kilavik, B. E., & Brochier, T. (2013). Modulations of EEG beta power during planning and execution of grasping movements. PLoS One, 8, e60060.

Author notes

* Denotes co-first author contribution.