Abstract

There is considerable evidence that there are anatomically and functionally distinct pathways for action and object recognition. However, little is known about how information about actions and objects is integrated. This study provides fMRI evidence for task-based selection of brain regions associated with action and object processing, and for how the congruency between the action and the object modulates the neural response. Participants viewed videos of objects used in congruent or incongruent actions and attended either to the action or the object in a one-back procedure. Attending to the action led to increased responses in a fronto-parietal action-associated network. Attending to the object activated regions within a fronto-inferior temporal network. Stronger responses for congruent action–object clips occurred in bilateral parietal cortex, inferior temporal cortex, and the putamen. Distinct cortical and thalamic regions were modulated by congruency in the different tasks. The results suggest that (i) selective attention to action and object information is mediated through separate networks, (ii) object–action congruency evokes responses in action planning regions, and (iii) the selective activation of nuclei within the thalamus provides a mechanism to integrate task goals in relation to the congruency of the perceptual information presented to the observer.

INTRODUCTION

The ability to manipulate objects and tools is crucial for everyday life. This ability depends on an interaction between the properties of the object and the required action. Object-related actions involve on-line guidance of our effectors in response to these objects, thus object affordances (the potency of the object for action) may be an inherent part of object perception (Tucker & Ellis, 1998; Gibson, 1979). This study is concerned with how we process action, objects, and their relations.

Traditionally, theories have stressed that the retrieval of an appropriate action for an object is guided by access to semantic knowledge based on an object's associations and its abstract function (e.g., Ochipa, Rothi, & Heilman, 1992; Roy & Square, 1985). For instance, a cup activates the action of drinking through access to semantic knowledge based on our prior associations with how cups are used and what they are used for. However, there is increasing neuropsychological and experimental evidence that access to action information can be evoked by the visual properties of objects in a relatively direct way, without the necessary involvement of semantic (associative) memory (see Humphreys et al., 2010; Humphreys & Riddoch, 2003, for reviews; also see Barsalou, 1999, for a similar view derived from a different literature). This evidence provides the backdrop for this study, in which we had participants make 1-back judgments on video clips of objects being manipulated by congruent or incongruent actions. We first examine the distinct neural correlates of action and object-related processing invoked by attention to actions and to objects. Subsequently, we examine the neural basis of the interaction between action and object pathways driven by the congruency of the action–object pairing.

Neuropsychological research suggests that knowledge of actions (how to manipulate objects) is dissociated from knowledge of objects (what the object is). Ferreira and colleagues (Ferreira, Giusiano, Ceccaldi, & Poncet, 1997) report a patient who was impaired at naming objects but unimpaired at naming actions and who was also able to name the objects from gesture information. Additional patients with damage to the left occipito-temporal cortex, who have the clinical presentation of optic aphasia, are able to gesture the correct action to objects they cannot recognize, even when access to associative semantic information is impaired (Hillis & Caramazza, 1995; Riddoch & Humphreys, 1987). In such cases, vision is not simply used for on-line control of prehensile actions but also to access information about the category of action that can be performed. The results indicate that responding to action-related associations to objects can be dissociated from semantic recognition processes.

In contrast, there are other patients who have an intact access to semantic knowledge of objects, showing intact recognition and naming. These patients can produce correct action in response to the object's name, but they nevertheless fail to correctly act when objects are presented visually (e.g., Pilgrim & Humphreys, 1991; Riddoch, Humphreys, & Price, 1989; DeRenzi, Faglioni, & Sorgato, 1982). In these cases, visual access to semantics is preserved but visual access to action-related responses is blocked, though they are able to access action information via the indirect semantic route (e.g., from the object's name).

Behavioral experiments with healthy participants also provide evidence for direct action-related and indirect semantic-related responses to objects (e.g., Yoon, Humphreys, & Riddoch, 2010; Yoon & Humphreys, 2005, 2007; Chainay & Humphreys, 2002). Decisions about which action to perform on an object are faster when made on pictures of objects than on object names, whereas decisions about the semantic context associated with objects are not affected by the mode of presentation (Chainay & Humphreys, 2002). Also in contrast with semantic categorical decisions, action decisions are not affected by semantic priming but are affected by the orientation of the handle to viewers (Yoon & Humphreys, 2007) and by the correct relative positions of paired objects within an observer's egocentric reference frame (Yoon et al., 2010).

The evidence indicates that access to action and semantic information about objects can dissociate. The interrelations between objects and actions, however, can affect responses based on both the action and semantic recognition routes (Yoon & Humphreys, 2005). Notably, using stimuli similar to those employed in our study, Yoon and Humphreys found that both action and semantic decisions about objects are facilitated when stimuli are presented with a congruent grip and the object is used appropriately, compared with when the grip or the action is inappropriate to the object.

We note that the behavioral differences observed for accessing action-related versus semantic contextual knowledge occur even when explicit actions are not made by the participants to the objects. It is nevertheless possible that action-related effects are contingent on the activation of motor actions to objects modulated through the so-called mirror neuron system, commonly involved in action production and recognition. There is evidence that viewing actions evokes a simulated response in motor cortex (for a recent review, Rizzolatti & Sinigaglia, 2010). The neural areas comprising the action observation network (Grafton, 2009) include the bilateral STS, the inferior parietal lobule (IPL), the inferior frontal gyrus (IFG), and the premotor cortex (PM). In addition, the SMA (Dayan et al., 2007; Hamilton & Grafton, 2007), BG, and cerebellum (Blakemore, Frith, & Wolpert, 2001; Wolpert, Miall, & Kawato, 1998) have also been implicated in action simulation. Interestingly, Humphreys et al. (2010) report increased activity in dorsal PM when participants view objects gripped in a congruent relative to an incongruent manner, suggesting that motor-based simulation may also be sensitive to the interaction between action and object information.

Access to action-related information (how/where to grasp objects) and to semantic knowledge (what) has been linked to the dorsal and ventral visual streams (Milner & Goodale, 1995; Ungerleider & Mishkin, 1982). For example, Shmuelof and Zohary (2005) reported that, when participants see objects being manipulated, then recognition of the action is associated with activity in dorsal areas, including the anterior intraparietal sulcus (aIPS); while in contrast, recognition of the object is linked to activity within ventral regions including the fusiform gyrus (e.g., Grill-Spector, 2003; Chao, Haxby, & Martin, 1999; Grill-Spector, Kushnir, Edelman, Itzchak, & Malach, 1998). Consistent with this, the aIPS shows adaptation of activity when the same grasp information is viewed repeatedly, whereas adaptation in the fusiform gyrus depends on the repetition of the identity and form of the object (Shmuelof & Zohary, 2005). The aIPS appears insensitive to the long-term familiarity of the grasp, including whether a grasp is congruent with the correct action (Valyear & Culham, 2009). In other studies, observation of object manipulation has been associated with bilateral activation of the PM along with the IPS (e.g., Tunik, Rice, Hamilton, & Grafton, 2007; Buccino et al., 2001, for reviews), whereas the IFG is bilaterally involved when participants view static images of an object being grasped by a hand irrespective of whether it is held correctly or incorrectly for object use (Johnson-Frey et al., 2003). Valyear and Culham (2009), however, argue that grasp information is also encoded by the ventral stream regions. Using an ROI approach, they report that regions sensitive to objects, body parts, and motions in the occipital–temporal cortex are sensitive to whether stimuli are correctly grasped for actions.

In addition to cases where participants see actions performed on objects, objects that are inherently associated with actions, such as tools and other manipulable objects, are reported to elicit responses in the dorsal action-associated network. Specifically, the left anterior supramarginal gyrus (SMG; the rostral part of the IPL) has been suggested to code motor programs for object use (Peeters et al., 2009). The left anterior SMG, along with the posterior SMG/angular gyrus, is associated with planning the use of familiar objects (Johnson-Frey, Newman-Norlund, & Grafton, 2005). Damage to the SMG (e.g., Sunderland, Wilkins, & Dineen, 2011; Randerath, Goldenberg, Spijkers, Li, & Hermsdoerfer, 2010), along with virtual lesions of this region (following TMS), is linked to poor use of tools. Observation of manipulable objects has also been shown to elicit activity in additional regions, including the left inferior frontal lobe, the posterior middle temporal gyrus, and the posterior parietal cortex (e.g., Beauchamp, Lee, Haxby, & Martin, 2002; Devlin et al., 2002; Grèzes & Decety, 2002). However, it is unclear how responses in these regions are affected by attention and by the congruency of action–object relations.

In the current study, we investigated the neural correlates of action and object processing, and their interaction, in response to video clips of objects being used in a congruent or incongruent manner. To selectively evoke action versus object (semantic) processing, we manipulated the attended property of the combined stimuli (similar to Yoon & Humphreys, 2005). On identical video stimuli, participants were asked to perform a 1-back task on the action or on the object. To further encourage involvement of the semantic route, and to ensure that the comparison across objects could not be based on simple visual features, the object-related 1-back task required recognition of two different exemplars of an object category (e.g., two different cups). To gain a better understanding of how action and object information is integrated, we also manipulated the congruency of the action to the object: actions could be congruent with the identity of the object (hitting with a hammer) or incongruent (twisting a hammer). We hypothesized that responses would be elicited in the dorsal action network when participants selectively attended to the action, whereas the ventral object network would be activated by attending to the objects. We further hypothesized that the involvement of these networks would be greater for congruent than for incongruent object–action relations, as congruent action–object relations lend themselves more easily to motor simulation and facilitate action and object recognition (Yoon & Humphreys, 2005). A final question of interest concerned the interaction between task and congruency: which brain areas enable information about the congruency of the action to modulate the object network, and are different regions recruited to enable the congruency of the object to modulate the action network?
Here we expected to find regions that take information about both objects and actions and respond differentially according to whether this information is congruent or incongruent. We speculated that such regions will be interconnected with the action observation and object-associated networks.

METHODS

Participants

Seventeen (11 women) right-handed healthy adults (mean age = 24.5 years, SD = 6 years) participated for course credit or cash (£15). None had a prior history of neurological or psychiatric symptoms, and all had normal or corrected-to-normal vision. All participants provided written informed consent after the procedures were explained to them. The study was approved by the local ethics committee.

Stimuli

There were 120 movie clips depicting an animated action performed with an object (Yoon & Humphreys, 2005). Sixty clips depicted an action congruent with the typical way the object is used (e.g., hitting with a hammer), and 60 depicted an incongruent action (e.g., wiping with a hammer; see Figure 1 for stills extracted from one clip). Each clip lasted about 1.425 sec. Actions are typically associated with a particular grip, and the way one grasps an object affects the way it can be used (Johnson-Frey, 2003, 2004; Johnson & Grafton, 2003); therefore, we kept the action–grip relation compatible across different objects. This meant that, in the incongruent condition, both the action and the grip were incongruent with the object on some trials. To account for this variability, the clips were coded according to whether the type of grip was the same (0) or different (1) across the congruent and incongruent displays. In addition, the actions differed in the number of hands (one or two) used, and this was also coded for each trial. This information was later used as a covariate of no interest in the fMRI analysis (see below).

Figure 1. 

Stimuli examples. Examples of stills extracted from the movie clips (∼1.42 sec) that were presented to the participants during the fMRI experiment. The clips depict pantomime actions performed with different objects. The action could be congruent with the typical way the object is used (chopping with an axe) or incongruent (wiping with an axe). In different blocks, participants performed a 1-back task either on the type of object or on the nature of the action (the way the hand moved).

Experimental Design

A full factorial within-participant design was used, with the factors Task (object, action) and Stimulus (congruent, incongruent). We used a 1-back task and manipulated the relevant dimension attended. Thus, participants performed a 1-back task focusing either on the objects (ignoring the action being performed) or on the actions (the way the hand moved; in this case, ignoring the object). Note that identical stimuli were used for the two tasks. To ensure that the object task could not be performed based on low-level visual cues alone, repetitions were of different exemplars of the same object (e.g., two different hammers). The task was manipulated across blocks, with six blocks per task, and the order of the blocks was randomized. Each block started by presenting the words “focus on hand movements” or “focus on objects” for 3 sec. In addition, an interstimulus fixation shape was used to remind participants of the relevant dimension (“^” for action and “o” for object). Stimulus congruency was manipulated event-wise. There were 8–12 events in each block, presented in random order. Each event started with a 500-msec fixation stimulus, followed by a 1425-msec clip depicting an action performed on an object. Events were separated by an ISI of 2–6 sec. There were in total 30 events for each of the four conditions. The repetition events were relatively rare (16.7%) and were modeled separately in the fMRI design. Importantly, the trials of interest did not require a response; hence, any responses in motor-associated regions cannot simply be attributed to a hand movement. The experiment was implemented using the Cogent toolbox in MATLAB (www.fil.ion.ucl.ac.uk/∼cogent) and was run in two fMRI sessions.
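For illustration, the trial timing described above (3-sec instruction screen, 500-msec fixation, 1425-msec clip, 2–6-sec jittered ISI) can be sketched as a simple onset generator. This is our own reconstruction, not the authors' Cogent code; the function name, the fixed random seed, and the uniform jitter distribution are assumptions.

```python
import random

def block_onsets(n_events, instruction=3.0, fixation=0.5,
                 clip=1.425, isi=(2.0, 6.0), seed=0):
    """Generate clip-onset times (sec) for one block: an instruction
    screen, then per event a fixation, the clip, and a jittered ISI."""
    rng = random.Random(seed)
    t = instruction            # block starts after the 3-sec instruction
    onsets = []
    for _ in range(n_events):
        t += fixation          # 500-msec fixation reminder ("^" or "o")
        onsets.append(t)       # movie clip starts here
        t += clip + rng.uniform(*isi)
    return onsets
```

With these parameters, successive clip onsets are separated by between 3.925 and 7.925 sec (fixation + clip + ISI).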

fMRI Data Acquisition

We used a Philips 3-T Achieva scanner at the Birmingham University Imaging Center to acquire BOLD contrast-weighted EPIs. Thirty-eight fronto-temporal oblique slices, 2-mm thickness with a 1.25-mm gap, were acquired in ascending order, with an in-plane resolution of 2 × 2 mm, 80° flip angle, 35-msec echo time, and 2400-msec repetition time. Images were acquired using an eight-channel phased-array coil with a SENSE factor of 2.

fMRI Data Analysis

fMRI data were analyzed with SPM8 (www.fil.ion.ucl.ac.uk/∼spm). Preprocessing included correction for head motion (Ashburner & Friston, 2003a) and for distortion-by-head-motion interactions (spatial realignment and unwarping; Andersson, Hutton, Ashburner, Turner, & Friston, 2001), temporal realignment, transformation of the data to Montreal Neurological Institute (MNI) space (Ashburner & Friston, 2003b), reslicing to 3 × 3 × 3 mm voxels, and smoothing using a Gaussian kernel with a FWHM of 9 × 9 × 9 mm to account for residual intersubject differences and to adhere to the continuity assumption of random field theory (Worsley & Friston, 1995).
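For reference, the width of a Gaussian smoothing kernel is usually specified by its full width at half maximum, which relates to the Gaussian standard deviation as σ = FWHM / (2√(2 ln 2)). A minimal sketch of this conversion (our own illustration, not part of the SPM8 pipeline):

```python
import math

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian kernel's full width at half maximum (mm)
    to the standard deviation of the corresponding Gaussian."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```

For the 9-mm kernel used here, this gives σ ≈ 3.82 mm along each axis.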

A summary-statistics approach (Penny, Holmes, & Friston, 2003) within the general linear model framework was used to test the reliability of the effects across participants (Kiebel & Holmes, 2003). We first estimated the effect of each condition at the individual level. For each participant, we generated a model that included the onsets of the events of each condition (2 × 2; Task × Congruency); in addition, we modeled as covariates the number of hands used to grip each object and the similarity of the incongruent grip to the grip given on congruent trials. The repetition events were modeled separately to ensure that our results were not affected by the requirement to make a motor response on these trials. To account for the delay in the hemodynamic response, these regressors were convolved with the canonical hemodynamic response function. Finally, the six realignment parameters, the session effect, and harmonic modeling of slow fluctuations in the signal with a cutoff of 1/128 Hz (typically associated with biological and scanner noise) were also included in each individual's model. The effect size of each condition was then estimated for each subject.
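The event regressors described here are built by placing unit impulses at the event onsets and convolving them with the canonical hemodynamic response function. The sketch below uses a common double-gamma approximation (response peaking near 5–6 sec, undershoot near 15–16 sec) rather than SPM's exact basis set; all names and the 32-sec kernel support are illustrative assumptions.

```python
import math

def hrf(t):
    """Double-gamma HRF approximation: a gamma(6) response minus a
    gamma(16)/6 undershoot (unit time scale); zero for t <= 0."""
    if t <= 0:
        return 0.0
    g = lambda x, k: x ** (k - 1) * math.exp(-x) / math.factorial(k - 1)
    return g(t, 6) - g(t, 16) / 6.0

def event_regressor(onsets, n_scans, tr=2.4, dt=0.1):
    """Convolve unit impulses at `onsets` (sec) with the HRF on a fine
    time grid, then sample the result at every TR."""
    n = int(n_scans * tr / dt)
    stick = [0.0] * n
    for o in onsets:
        stick[int(round(o / dt))] = 1.0
    kernel = [hrf(i * dt) * dt for i in range(int(32.0 / dt))]  # 32-sec support
    conv = [sum(stick[i - j] * kernel[j]
                for j in range(min(len(kernel), i + 1)))
            for i in range(n)]
    return [conv[int(round(s * tr / dt))] for s in range(n_scans)]
```

With the 2400-msec TR used here, an event at time zero produces a regressor that peaks at the scan nearest ~5 sec after onset.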

To enable generalization of the results, a random-effects analysis was conducted, treating participants as a random variable. In this second-level analysis, maps depicting the effect size per condition per participant served as the data, and dependency between conditions was assumed. We used a mixed peak-and-extent threshold approach (Poline, Worsley, Evans, & Friston, 1997), with Z > 3.6 required for a peak and at least 20 voxels showing Z > 2.69. We focused on results that survived family-wise error (FWE) correction, but for completeness, we report in the tables and text all clusters that also survived the above threshold. Anatomical labels were assigned using the Automated Anatomical Labeling toolbox (Tzourio-Mazoyer et al., 2002) and the Duvernoy Human Brain Atlas (Duvernoy, Cabanis, & Vannson, 1991). The descriptive bar plots in the figures are based on the estimated effect sizes (beta values) computed in the general linear model. These values were extracted from a 3-mm sphere centered on the group peaks.
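With 3 × 3 × 3 mm voxels, a 3-mm sphere around a group peak covers the peak voxel and its six face neighbors. A hypothetical sketch of this voxel selection (the function name and grid alignment are our assumptions; in practice the betas at these coordinates would then be averaged per subject and condition):

```python
def sphere_coords(center_mm, radius_mm=3.0, voxel_mm=3.0):
    """mm coordinates of voxel centers lying within `radius_mm` of a
    peak, assuming an isotropic grid aligned to the peak voxel."""
    r = int(radius_mm // voxel_mm) + 1
    coords = []
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                off = (dx * voxel_mm, dy * voxel_mm, dz * voxel_mm)
                if sum(o * o for o in off) <= radius_mm ** 2 + 1e-9:
                    coords.append(tuple(c + o for c, o in zip(center_mm, off)))
    return coords
```

For example, around the SMG peak reported below (−60 −40 31), this yields seven voxel centers.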

RESULTS

Behaviorally, observers found the object task easier than the action task. Responses were more accurate during the object than the action task (95.07 ± 1.93% and 90.76 ± 1.78%, respectively; F(1, 16) = 9.3, p < .01), although overall accuracy in both tasks was high. Similarly, accurate responses to repeated objects were made more rapidly than those to repeated actions (1038.8 ± 41.29 and 1360.5 ± 40 msec, respectively; F(1, 16) = 24.4, p < .001). The difference in RTs may partly reflect the nature of the stimuli that define actions and objects, because actions, but not objects, depend on a signal that emerges over time.

More interestingly, task affected the brain response, despite the tasks being performed on identical stimuli (Figure 2; Table 1). Attending to the objects (vs. the action) elicited a larger response in the superior medial frontal gyrus (SmFG) and, at a lower threshold, the object task activated the left inferior temporal gyrus (ITG, MNI: −45 −43 −8; peak-Z = 3.52; only 14 voxels with Z > 2.69). As the reliability of the ITG cluster was below our threshold, this result should be treated with caution. We nevertheless report this activation, as it fits with our original predictions, that is, that attention to objects would involve regions that are part of the ventral visual stream (see Introduction). Furthermore, the ITG was also affected by stimulus congruency (see below, Table 2), which may have attenuated the effect of the task. In contrast to the object attention task, attending to the action (rather than the object) led to an increased response in the action observation network. In the parietal cortex, this included the bilateral SMG extending to the post-central sulcus (CS), plus the bilateral IPS extending to the precuneus; in frontal regions, there was bilateral activation in the pre-CS (below threshold on the left as well) and the right IFG.

Figure 2. 

SPM blobs overlaid on a T1 single-subject template (side columns) and a lateral view of a rendered brain. SPM outputs were thresholded at p < .005, uncorrected, with at least 50 voxels. Foci showing a larger response in the object task (Obj) are depicted in red; foci showing larger responses in the action task (Act) are depicted in green. The bars show the averaged effect size extracted from a 3-mm sphere around the group maxima. Error bars depict SEMs. R = right; L = left; 1 = SmFG; 2 = pre-CS; 3 = SMG; 4 = IPS; 5 = IFG.

Table 1. 

Main Effects of Task: Action versus Objects

Anatomy          Cluster Size   Peak-Z   x y z (mm)

Main Effect of Task: Object > Action
SmFG             159*           3.53     −9 47 43
ITG              14             3.52     −45 −43 −8

Main Effect of Task: Action > Object
SMG              246*           4.73*    −60 −40 31
Post-CS/SMG      88             3.46     54 −22 34
IPS, SPG         99             4.28     12 −70 55
IPS, SPG         71             3.98     −12 −67 52
IFG              168*           3.84     57 14 4
Pre-CS           24             3.27     −21 −1 58
Pre-CS           50             3.77     30 −4 46

Cluster size reliability: #voxels > 150, FWE-corr p < .05; #voxels > 80, uncorr p < .01; #voxels > 50, uncorr p < .05.

SPG = superior parietal gyrus.

*At peak/cluster level p(FWE-corrected) < .05.

Table 2. 

Stimulus Effects: Congruent versus Incongruent

Anatomy                          Cluster Size   Peak-Z   x y z (mm)

Main Effect of Congruency: Cong > Incong
SFG, SMA, BA 6                   66             4.2      −12 29 61
IPL                              52             4.05     −51 −55 52
IPL                              49             3.87     54 −55 49
Putamen/GP                       103            3.69     24 −1 1
Putamen/GP                       39             3.18     −24 −7 1
ITG/FFG                          43             3.68     −48 −43 −17
STS                              80             3.57     63 −22 −2

Main Effect of Congruency: Incong > Cong
mMOG, lingual G, Cal, BA 17/18   98*            4.1      −15 −97 −2
Cal, lingual G, BA 17            83*            3.67     9 −91 −11

Cluster size reliability: #voxels > 80, uncorr p < .01; #voxels > 50, uncorr p < .05. SFG, superior frontal gyrus; GP, globus pallidus; FFG, fusiform gyrus; mMOG, medial middle occipital gyrus; Cal, calcarine sulcus; Cong, congruent; Incong, incongruent.

*At cluster level p(FDR-corrected) < .05.

The congruency of the action and the object also affected neural responses (Figure 3; Table 2). Congruent action–object relations affected responses in both the action observation and object-associated regions. Larger responses for congruent versus incongruent trials were observed in the bilateral IPL, the SmFG aligned with the SMA (BA 6), the bilateral putamen including the globus pallidus, and the temporal cortex (in the right STS and the left ITG). In contrast, incongruent trials (vs. congruent) were associated with an increased response in the visual sensory cortices, including the bilateral medial posterior occipital cortex, as early as BA 17 and extending to the lingual gyrus and middle occipital gyrus.

Figure 3. 

SPM blobs overlaid on a T1 single-subject template (side columns) and a lateral view of a rendered brain. The SPM outputs were thresholded at p < .005, uncorrected, with at least 40 voxels. Foci showing a larger response to incongruent stimuli (Inc) are depicted in blue; foci showing larger responses to congruent stimuli (Con) are depicted in yellow. The bars show the averaged effect size extracted from a 3-mm sphere around the group maxima. Error bars depict SEMs. R, right; L, left; 1, calcarine sulcus (Cal); 2, STS; 3, SmFG; 4, putamen; 5, IPL/angular gyrus; 6, fusiform gyrus (FFG)/ITG.

Finally, we also observed regions that were sensitive to the interaction between task and stimulus (Figure 4; Table 3). When attending to the action, larger congruency effects (Cong > Inc) were observed in the bilateral posterior ventral parts of the thalamus (weaker on the left), corresponding to the ventral posterior lateral (VPL) nucleus and the ventral part of the pulvinar. A similar response was also observed in the bilateral posterior occipital cortex. A reverse pattern was observed in the lateral dorsal (LD) part of the thalamus; in this case, there were stronger congruency effects (Cong > Inc) when participants attended to the objects. Greater congruency effects linked to the object task were also observed in the bilateral temporal poles, the right superior frontal sulcus, the cerebellum, and the pons.

Figure 4. 

SPM blobs overlaid on a T1 single-subject template (side columns). The SPM outputs were thresholded at p < .005, uncorrected, with at least 50 voxels. Foci showing larger stimulus congruency effects (Cong > Inc) in the object (Obj) compared with the action (Act) task are depicted in red; foci showing the reverse pattern are depicted in green. The bars show the averaged effect size extracted from a 3-mm sphere around the group maxima. Error bars depict SEMs. R, right; L, left; 1, lingual gyrus (Ling); 2, VPL thalamic nucleus; 3, LD thalamic nucleus; 4, TP (temporal pole); 5, SFS (superior frontal sulcus).

Table 3. 

Task-by-Stimulus Interaction

Anatomy                  Cluster Size   Peak-Z   x y z (mm)

Interaction: Act, Cong > Inc & Obj, Cong < Inc
Thalamus (VPL, Pul)      62             4.67*    −15 −19 −2
Thalamus (VPL, Pul)      29             4.22     18 −22 −8
Lingual gyrus/MOG        137**          3.93     −21 −88 −20
Calcarine (MOG)          137**          3.8      18 −97 1

Interaction: Obj, Cong > Inc & Act, Cong < Inc
Temporal pole            207*           4.68*    −39 8 −26
MTL                      117            4.26     12 −1 −23
Thalamus (LD)            102            4.12     −15 −19 19
SFS                      122            3.93     21 2 37
Cerebellum (Lobule VI)   49             3.78     6 −67 −26
Pons                     53             3.68     0 −37 −23

Cluster size reliability: #voxels > 150, FWE-corr p < .05; #voxels > 80, uncorr p < .01; #voxels > 50, uncorr p < .05. Cong, congruent; Inc, incongruent; Act, action; Obj, object; VPL, ventral posterior lateral thalamic nucleus; Pul, pulvinar; MOG, middle occipital gyrus; MTL, medial-temporal lobe, including the amygdala and hippocampal structures; LD, lateral dorsal thalamic nucleus; SFS, superior frontal sulcus.

*At peak/cluster level p(FWE-corrected) < .05.

**At cluster level p(FWE-corrected) = .08; p(FDR-corrected) = .05.

To ensure that the neuroanatomical dissociation observed in the thalamus was not a by-product of the group analysis (e.g., averaging across different brains and smoothing the data), we analyzed the data at the individual-subject level, now assessing the reliability of the effects across trials within each subject. We used unsmoothed data in this analysis. All participants showed a clear neuroanatomical dissociation within the thalamus when attending to actions versus objects that also depended on object–action congruency (see Table 4). Within the left thalamus, 12 subjects showed dissociated responses between two different nuclei, one associated with congruent stimuli when actions were attended and the other when objects were attended (compared with the corresponding incongruent conditions); in two subjects, the left thalamus responded only to congruent stimuli when objects were attended; and in three subjects, congruency effects were observed only when actions were attended. Within the right thalamus, nine participants showed congruency responses in two different nuclei that depended on the task, three showed congruency effects only when objects were attended, and three showed congruency responses only when actions were attended. This analysis demonstrates that the neuroanatomical dissociation between different thalamic nuclei, reflecting combined effects of task and stimulus congruency, was reliable at the individual-subject level as well.

Table 4. 

Thalamic Nuclei Analysis at the Subject Level

Contrast | Thalamus | Cluster Size (Std) | Voxel Z | Voxel p (unc) | Mean MNI (Std): x, y, z
Congruent object vs. action | Left | 49.4 (50.23) | 2.3 | .017 | −14.4 (6.38), −19.1 (6.6), 10.1 (5.23)
Congruent object vs. action | Right | 59.5 (107.58) | 2.31 | .016 | 11.9 (5.83), −19.9 (10.4), 5.8 (4.31)
Congruent action vs. object | Left | 35.9 (54.01) | 2.2 | .02 | −12.4 (5.67), −20.7 (7.75), 3.4 (6.95)
Congruent action vs. object | Right | 66.2 (120.81) | 2.27 | .02 | 13.5 (5.59), −20.7 (9.43), 7.6 (5.92)

The table presents the averaged results of the single-subject analyses within the thalamus: descriptive measures (cluster size, MNI coordinates) and statistical tests (Z and p values). The analyses were restricted to the thalamus using a mask generated with the WFU PickAtlas.

We next examined the cortical connections of the thalamic nuclei that were affected by both the task and stimulus congruency. This was done using the Thalamic Connectivity Atlas developed by FMRIB (www.fmrib.ox.ac.uk/connect/). We found that the VPL region (MNI: −15 −19 −2), showing a larger congruency effect in the action task, had a high probability of connection to cortical regions involved in action planning and motor control, including the primary motor cortex (.43), the sensory cortex (.42), the PM (.43), and, more weakly, the posterior parietal cortex (.09). In contrast, the LD (MNI: −15 −19 19), which showed a greater congruency effect in the object task, had a high probability of connectivity to regions associated with object processing, including the temporal cortex (.36), the ventral pFC (.29), and the posterior occipital cortex (.04), and also, more weakly, to regions associated with motor responses (the posterior parietal cortex [.14] and the PM [.04]).
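The two connectivity profiles quoted above can be compared directly. The following is a minimal sketch using the probabilities as reported; the region keys are our abbreviations, and `dominant_target` is a hypothetical helper, not an atlas function:

```python
# Connection probabilities for the two thalamic nuclei, as quoted in the
# text from the FMRIB Thalamic Connectivity Atlas (region keys abbreviated).
vpl = {"primary_motor": 0.43, "sensory": 0.42, "premotor": 0.43,
       "posterior_parietal": 0.09}
ld = {"temporal": 0.36, "ventral_pfc": 0.29, "posterior_occipital": 0.04,
      "posterior_parietal": 0.14, "premotor": 0.04}

def dominant_target(profile: dict) -> str:
    # Cortical zone with the highest connection probability
    # (ties resolved by dictionary order).
    return max(profile, key=profile.get)

print(dominant_target(vpl), dominant_target(ld))  # → primary_motor temporal
```

This makes the dissociation concrete: the VPL profile is dominated by motor-related cortex, the LD profile by ventral, object-related cortex.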

DISCUSSION

Our study investigated the processing of object–action relations by manipulating attention to displays in which objects were shown being used with a congruent or incongruent action and grip. We observed effects of task, object–action congruency, and an interaction of task and congruency within action- and object-associated regions of the brain and also in regions that may coordinate effects of task and congruency. We consider the factors of attention, congruency, and their interaction in turn.

Manipulating attention through task demands activated foci respectively associated with the dorsal and ventral streams, despite the stimuli being identical across the two tasks. Relative to the object task, the action task was associated with activity in the action observation network, overlapping the dorsal pathway. Specifically, the bilateral SMG, the bilateral IPS, the bilateral pre-CS (below threshold on the left), and the right IFG showed an increased response when participants attended to actions compared with when they attended to objects. Although many studies have shown that observing actions elicits responses in the action observation network (for reviews, Rizzolatti & Sinigaglia, 2010; Grafton, 2009; Decety, 1996), here we show that attending to the action being performed with a stimulus is important for eliciting responses in this network; when the actions were not attended, responses in this network were much reduced (Figure 2).

In contrast with the action task, the object task was associated with activity in the SmFG and, at a lower threshold, with activation of the left ITG. The (weaker) activity in the left ITG is potentially linked to object processing itself, given that this region forms part of the classic ventral visual stream subserving object recognition (Milner & Goodale, 1995). Recruitment of this region is consistent with the N-back task on objects relying in part on activation in the object recognition system. The reason for increased activity in the SmFG is less clear, and future research is needed to understand its role here—for example, is the SmFG recruited to inhibit action information to allow access for object processing? Overall, attention to the objects had weaker and less reliable effects on neural responses than attention to the actions. We can only speculate on the reasons for this. First, we note that the object-oriented task was easier than the action task, and task difficulty typically correlates positively with the extent of neural responses. Second, the ITG also responded when object–action relations were congruent, independent of the task (Tables 2 and 3; Figure 2). This may suggest that object processing, as opposed to action processing, is affected not only by task instructions but also by congruency. For example, congruent action–object relations may automatically elicit object-related processing, irrespective of the attention condition. Further research is needed to clarify this effect.

Taken together, the results accord with previously reported behavioral findings (Yoon & Humphreys, 2005) suggesting that action and object decisions involve separate networks. Action decisions rely on a direct action observation fronto-parietal network, including motor planning regions. In contrast, N-back object decisions at least partly recruit regions within the ventral temporal visual stream.

A match between the action (and grasp) and the object (on congruent trials) increased the responses of both the action observation and object-associated networks. Congruent object–action stimuli generated larger responses (relative to incongruent stimuli) bilaterally in the IPL, the SmFG (most likely SMA), the putamen (including the globus pallidus), the STS, and (at a weaker threshold) the ITG. These results are in accordance with the previous literature. Activation in the IPL has been associated with responses to an action goal during action observation (e.g., Fogassi et al., 2005), with the motor representation linked to familiar graspable tools (Vingerhoets, 2008), and with coding the postural and spatial relationships between a hand and an object (Sunderland et al., 2011). The evidence for greater activation with congruent (familiar) action–object pairings could fit with each of these proposals: the goal of an action is easier to compute in the congruent conditions, the motor response associated with the tool will be stronger, and the postural and spatial relations of the hand to the specific object will be more familiar. We note that the effects in the IPL appear to be specific to the congruency of an action and an object and not to grasp typicality, as previous research has failed to observe effects of grip congruency in this region (Valyear & Culham, 2009). The reason for this conflicting result is unclear. One difference between the two experiments is the actual action that was executed upon the objects and, as a consequence, the intentions of the actor. In the current study, the action imitated the object's functional use; in Valyear and Culham, the objects were grasped and then lifted in a usual or unusual way. It may be that the aIPS encodes viable (congruent) actions, and in Valyear and Culham, action goals and grasping for lifting were always encoded as congruent (or the same action), whether the grasping was typical or atypical. This contrasts with the current study, in which the action, and as a consequence the actor's intention, varied between the congruent and incongruent trials.

The SmFG (SMA) has been shown to respond to object–grip congruency (Bach, Peelen, & Tipper, 2010); this accords with our observation of the involvement of the SmFG (SMA) when there is a match between the action and the object. Similarly, the posterior STS region has been shown to be involved in action observation, for example, in the visual processing of biological motion (for a review, Thompson & Parasuraman, 2012). Single-cell recording in the monkey shows that some neurons in the STS respond selectively to actions executed on objects (Perrett et al., 1989); this area shows reduced activity when either the hand or the object is presented alone (Barraclough, Keith, Xiao, Oram, & Perrett, 2008). Similarly, congruent grip has been shown to modulate responses in regions in the vicinity of the areas observed here (Valyear & Culham, 2009). In addition, the STS appears to be involved in understanding the intention of actions performed by others (Pelphrey, Morris, & McCarthy, 2004). Again, intentions are likely to be easier to interpret when a congruent rather than an incongruent action is performed with an object. The action–object congruency effects observed in the left fusiform gyrus/ITG accord with congruent grip effects reported before (Valyear & Culham, 2009). The IPL, SmFG, ITG, and BG are implicated in associating motor responses with a given sensory stimulus (for reviews, Lalazar & Vaadia, 2008; Halsband & Freund, 1993). This network has also been observed when participants are asked to imagine complex daily actions, such as eating a meal (Szameitat, Shen, & Sterr, 2007). These previous findings accord with the increased involvement of the network, observed here, for well-rehearsed and familiar actions made on appropriate objects (congruent trials), as opposed to novel actions made on the same objects (incongruent trials). The novel contribution of this study is that these effects were observed independent of the task demands (attend to the action or the object).

Interestingly, incongruent trials resulted in greater activation of the middle occipital gyrus and the calcarine sulcus compared with congruent trials. This pattern of results is most easily attributed to greater visual processing being required to encode stimuli when the action and object are incongruent, with greater recruitment of early visual processes in this case. This interpretation accords with the predictive coding account of brain function (e.g., Friston, 2005), which predicts increased early sensory processing for unfamiliar and unpredicted input, as was the case when the actions were incongruent with the objects.

Along with the main effects of task and congruency, a set of brain regions was activated according to the particular combination of task and congruency. Most notably, there was a double dissociation in the selective activation of the subnuclei of the thalamus for congruent relative to incongruent stimuli in particular tasks; this was observed at the group level and confirmed by a follow-up analysis at the individual-subject level. In the VPL nucleus, when participants attended to the actions, there was greater activation for congruent relative to incongruent stimuli. In the LD nucleus, there was increased activity for congruent over incongruent trials, but only in the object task. The VPL is most strongly connected with regions of the dorsal cortex (in particular, motor and premotor regions). The LD is mostly connected with the ventral cortex (temporal, ventral pFC, and posterior occipital cortex). The results suggest that, when attention was directed to the property associated with different cortical regions, there was gating of activation to connected thalamic nuclei, so that the VPL was recruited during the action task and the LD during the object task.

These data are consistent with thalamic activity being affected by task constraints (attention to the task-relevant features of the image). Prior work has indicated that the thalamus is sensitive to attentional manipulations of specific features of a stimulus, such as the sensitivity of the pulvinar to stimulus location (Snow, Allen, Rafal, & Humphreys, 2009; O'Connor, Fukui, Pinsk, & Kastner, 2002; LaBerge & Buchsbaum, 1990). Here we show that responses in the thalamus are also sensitive to more complex attentional manipulations reflecting the goals of the task (attend to the action or the object). Furthermore, it is interesting that the task-based thalamic dissociations here matched the structural connectivity between these nuclei and the task-relevant cortical regions. Notably, the VPL nucleus was linked to attention to action and connects to motor cortex and PM, whereas the LD nucleus, linked to attention to objects, connects strongly to occipito-temporal cortices.

Our second piece of evidence was that thalamic activity was affected by the congruence of object–action relations, although the LD and VPL connect predominantly into ventral and dorsal cortex, respectively. There is mounting evidence for cortically based visual and motor responses to objects that are seen being used correctly, relative to when they are used incorrectly, which may reflect the visuomotor familiarity of the action (Roberts & Humphreys, 2010). For example, there is greater visual activity in the LOC, and greater activity putatively related to motor intention in dorsal PM, when objects are grasped correctly relative to when they are grasped incorrectly (Humphreys et al., 2010). We propose that feedback from these regions to the thalamus interacts with the task-based selection of specific thalamic nuclei, enhancing activity in the LD nucleus when the action and object are congruent in the object task and enhancing activity in the VPL when they are congruent in the action task. This interaction, then, generates activity reflecting both the congruence between objects and actions and the task-based recruitment of thalamic nuclei. In this case, the selective activation of thalamic nuclei provides a mechanism by which task-based constraints are integrated with signals reflecting stimulus congruency in the image.

This study was set up to test whether object–action association is supported by a direct route to motor regions or an indirect route via semantic knowledge. We note first that our results are based on correlational analyses; hence, any interpretation about the structures necessary to support a specific cognitive process is limited and speculative by nature. With that in mind, we believe that the congruency effects observed in both the action- and object-related networks suggest that both networks represent objects together with their corresponding actions. Because of the correlational nature of this study, we cannot comment on whether both networks are essential for accurate execution of an action on an object, but we can say that each contains relevant information for this. Thus, it is plausible that processing in the dorsal and motor-associative cortex is sufficient for executing a correct action without relying on information from the ventral visual stream, as suggested by neuropsychological dissociations (see Introduction). On the other hand, we note that the anterior temporal pole, traditionally associated with semantic knowledge, was associated with stimulus congruency only when attention was directed to the object (Table 3). This suggests that semantic knowledge of an object includes action knowledge, but that this information is accessed only when it is task relevant. Therefore, it may also be possible to guide a correct action based on knowledge represented in the ventral visual stream, but this process may depend on the involvement of attention.

In summary, the present data show that there is selective processing of action- and object-related properties of the same image in action- and object-related regions of cortex, according to the type of information weighted in the tasks, along with increased activity in different regions for congruent over incongruent action–object pairs. In addition, there is evidence that different thalamic subnuclei are recruited by the contrasting tasks, although their activity is then modulated by action–object congruence. The data suggest that the activity of thalamic nuclei is driven by the task as well as by differential stimulus signals reflecting object–action congruence.

Acknowledgments

This work was supported by grants from the BBSRC (G. W. H.), the Leverhulme Trust (P. R.), and European Union FP7 (Cogwatch, 288912).

Reprint requests should be sent to Eun Young Yoon or Pia Rotshtein, School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom, or via e-mail: kntc@daum.net, P.Rotshtein@bham.ac.uk.

REFERENCES

Andersson, J. L., Hutton, C., Ashburner, J., Turner, R., & Friston, K. J. (2001). Modeling geometric deformations in EPI time series. Neuroimage, 13, 903–919.
Ashburner, J., & Friston, K. J. (2003a). Rigid body transformation. In R. S. Frackowiak, K. J. Friston, C. Frith, R. J. Dolan, C. Price, S. Zeki, et al. (Eds.), Human brain function (pp. 635–654). Oxford: Academic Press.
Ashburner, J., & Friston, K. J. (2003b). Spatial normalization using basis functions. In R. S. Frackowiak, K. J. Friston, C. Frith, R. J. Dolan, C. Price, S. Zeki, et al. (Eds.), Human brain function (pp. 655–672). Oxford: Academic Press.
Bach, P., Peelen, M. V., & Tipper, S. P. (2010). On the role of object information in action observation: An fMRI study. Cerebral Cortex, 20, 2798–2809.
Barraclough, N. E., Keith, R. H., Xiao, D., Oram, M. W., & Perrett, D. I. (2008). Visual adaptation to goal-directed hand actions. Journal of Cognitive Neuroscience, 21, 1805–1819.
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660.
Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2002). Parallel visual motion processing streams for manipulable objects and human movements. Neuron, 34, 149–159.
Blakemore, S. J., Frith, C. D., & Wolpert, D. M. (2001). The cerebellum is involved in predicting the sensory consequences of action. NeuroReport, 12, 1879–1884.
Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. European Journal of Neuroscience, 13, 400–404.
Chainay, H., & Humphreys, G. W. (2002). Privileged access to action for objects relative to words. Psychonomic Bulletin & Review, 9, 348–355.
Chao, L. L., Haxby, J. V., & Martin, A. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience, 2, 913–919.
Dayan, E., Casile, A., Levit-Binnun, N., Giese, M. A., Hendler, T., & Flash, T. (2007). Neural representations of kinematic laws of motion: Evidence for action-perception coupling. Proceedings of the National Academy of Sciences, U.S.A., 104, 20582–20587.
Decety, J. (1996). Neural representations for action. Reviews in the Neurosciences, 7, 285–297.
DeRenzi, E., Faglioni, P., & Sorgato, P. (1982). Modality-specific and supramodal mechanisms of apraxia. Brain, 105, 301–312.
Devlin, J. T., Moore, C. J., Mummery, C. J., Gorno-Tempini, M. L., Phillips, J. A., Noppeney, U., et al. (2002). Anatomic constraints on cognitive theories of category specificity. Neuroimage, 15, 675–685.
Duvernoy, H. M., Cabanis, E. A., & Vannson, J. L. (1991). The human brain: Surface three-dimensional sectional anatomy and MRI. Wien: Springer-Verlag.
Ferreira, C. T., Giusiano, B., Ceccaldi, M., & Poncet, M. (1997). Optic aphasia: Evidence of the contribution of different neural systems to object and action naming. Cortex, 33, 499–513.
Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G. (2005). Parietal lobe: From action organization to intention understanding. Science, 308, 662–667.
Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 360, 815–836.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Grafton, S. T. (2009). Embodied cognition and the simulation of action to understand others. Annals of the New York Academy of Sciences, 1156, 97–117.
Grèzes, J., & Decety, J. (2002). Does visual perception of object afford action? Evidence from a neuroimaging study. Neuropsychologia, 40, 212–222.
Grill-Spector, K. (2003). The neural basis of object perception. Current Opinion in Neurobiology, 13, 159–166.
Grill-Spector, K., Kushnir, T., Edelman, S., Itzchak, Y., & Malach, R. (1998). Cue-invariant activation in object-related areas of the human occipital lobe. Neuron, 21, 191–202.
Halsband, U., & Freund, H. J. (1993). Motor learning. Current Opinion in Neurobiology, 3, 940–949.
Hamilton, A. F., & Grafton, S. T. (2007). Action outcomes are represented in human inferior frontoparietal cortex. Cerebral Cortex, 18, 1160–1168.
Hillis, A., & Caramazza, A. (1995). Cognitive and neural mechanisms underlying visual and semantic processing: Implication from "optic aphasia." Journal of Cognitive Neuroscience, 7, 457–478.
Humphreys, G. W., & Riddoch, M. J. (2003). From vision to action and action to vision: A convergent route approach to vision, action, and attention. Psychology of Learning and Motivation, 42, 225–264.
Humphreys, G. W., Yoon, E. Y., Kumar, S., Lestou, V., Kitadono, K., Roberts, K. L., et al. (2010). The interaction of attention and action: From seeing action to acting on perception. British Journal of Psychology, 101, 185–206.
Johnson, S. H., & Grafton, S. T. (2003). From "acting on" to "acting with": The functional anatomy of object-oriented action schemata. Progress in Brain Research, 142, 127–139.
Johnson-Frey, S. H. (2003). What's so special about human tool use? Neuron, 39, 201–204.
Johnson-Frey, S. H. (2004). The neural bases of complex tool use in humans. Trends in Cognitive Sciences, 8, 71–78.
Johnson-Frey, S. H., Maloof, F. R., Newman-Norlund, R., Farrer, C., Inati, S., & Grafton, S. T. (2003). Actions or hand-object interactions? Human inferior frontal cortex and action observation. Neuron, 39, 1053–1058.
Johnson-Frey, S. H., Newman-Norlund, R., & Grafton, S. T. (2005). A distributed left hemisphere network active during planning of everyday tool use skills. Cerebral Cortex, 15, 681–695.
Kiebel, S., & Holmes, A. (2003). The general linear model. In R. S. Frackowiak, K. J. Friston, C. Frith, R. J. Dolan, C. Price, S. Zeki, et al. (Eds.), Human brain function (pp. 725–760). Oxford: Academic Press.
LaBerge, D., & Buchsbaum, M. S. (1990). Positron emission tomographic measurements of pulvinar activity during an attention task. Journal of Neuroscience, 10, 613–619.
Lalazar, H., & Vaadia, E. (2008). Neural basis of sensorimotor learning: Modifying internal models. Current Opinion in Neurobiology, 18, 573–581.
Milner, A. D., & Goodale, M. (1995). The visual brain in action. London: Academic Press.
Ochipa, C., Rothi, L. J., & Heilman, K. M. (1992). Conceptual apraxia in Alzheimer's disease. Brain, 115, 1061–1071.
O'Connor, D. H., Fukui, M. M., Pinsk, M. A., & Kastner, S. (2002). Attention modulates responses in the human lateral geniculate nucleus. Nature Neuroscience, 5, 1203–1209.
Peeters, R., Simone, L., Nelissen, K., Fabbri-Destro, M., Vanduffel, W., Rizzolatti, G., et al. (2009). The representation of tool use in humans and monkeys: Common and uniquely human features. Journal of Neuroscience, 29, 11523–11539.
Pelphrey, K. A., Morris, J. P., & McCarthy, G. (2004). Grasping the intentions of others: The perceived intentionality of an action influences activity in the superior temporal sulcus during social perception. Journal of Cognitive Neuroscience, 16, 1706–1716.
Penny, W., Holmes, A., & Friston, K. J. (2003). Random effects analysis. In R. S. Frackowiak, K. J. Friston, C. Frith, R. J. Dolan, C. Price, S. Zeki, et al. (Eds.), Human brain function (pp. 843–850). Oxford: Academic Press.
Perrett, D. I., Harries, M. H., Bevan, R., Thomas, S., Benson, P. J., & Mistlin, A. J. (1989). Frameworks of analysis for the neural representation of animate objects and actions. Journal of Experimental Biology, 146, 87–113.
Pilgrim, E., & Humphreys, G. W. (1991). Impairment of action to visual objects in a case of ideomotor apraxia. Cognitive Neuropsychology, 8, 459–473.
Poline, J. B., Worsley, K. J., Evans, A. C., & Friston, K. J. (1997). Combining spatial extent and peak intensity to test for activations in functional imaging. Neuroimage, 5, 83–96.
Randerath, J., Goldenberg, G., Spijkers, W., Li, Y., & Hermsdoerfer, J. (2010). Different left brain regions are essential for grasping a tool compared with its subsequent use. Neuroimage, 53, 171–180.
Riddoch, M. J., & Humphreys, G. W. (1987). Visual object processing in optic aphasia: A case of semantic access agnosia. Cognitive Neuropsychology, 4, 131–185.
Riddoch, M. J., Humphreys, G. W., & Price, C. J. (1989). Routes to action: Evidence from apraxia. Cognitive Neuropsychology, 6, 437–454.
Rizzolatti, G., & Sinigaglia, C. (2010). The functional role of the parieto-frontal mirror circuit: Interpretations and misinterpretations. Nature Reviews Neuroscience, 11, 264–274.
Roberts, K. L., & Humphreys, G. W. (2010). Action relationships concatenate representations of separate objects in the ventral visual system. Neuroimage, 52, 1541–1548.
Roy, E. A., & Square, P. A. (1985). Common considerations in the study of limb, verbal and oral apraxia. In E. A. Roy (Ed.), Neuropsychological studies of apraxia and related disorders (pp. 112–162). Amsterdam: North-Holland.
Shmuelof, L., & Zohary, E. (2005). Dissociation between ventral and dorsal fMRI activation during object and action recognition. Neuron, 47, 457–470.
Snow, J. C., Allen, H. A., Rafal, R. D., & Humphreys, G. W. (2009). Impaired attentional selection following lesions to human pulvinar: Evidence for homology between human and monkey. Proceedings of the National Academy of Sciences, U.S.A., 106, 4054–4059.
Sunderland, A., Wilkins, L., & Dineen, R. (2011). Tool use and action planning in apraxia. Neuropsychologia, 49, 1275–1286.
Szameitat, A. J., Shen, S., & Sterr, A. (2007). Motor imagery of complex everyday movements: An fMRI study. Neuroimage, 34, 702–713.
Thompson, J., & Parasuraman, R. (2012). Attention, biological motion, and action recognition. Neuroimage, 59, 4–13.
Tucker, M., & Ellis, R. (1998). On the relations between seen objects and components of potential actions. Journal of Experimental Psychology: Human Perception and Performance, 24, 830–846.
Tunik, E., Rice, N. J., Hamilton, A., & Grafton, S. T. (2007). Beyond grasping: Representation of action in human anterior intraparietal sulcus. Neuroimage, 36, T77–T86.
Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., et al. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage, 15, 273–289.
Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behaviour (pp. 549–586). Cambridge, MA: MIT Press.
Valyear, K. F., & Culham, J. C. (2009). Observing learned object-specific functional grasps preferentially activates the ventral stream. Journal of Cognitive Neuroscience, 22, 970–984.
Vingerhoets, G. (2008). Knowing about tools: Neural correlates of tool familiarity and experience. Neuroimage, 40, 1380–1391.
Wolpert, D. M., Miall, R. C., & Kawato, M. (1998). Internal models in the cerebellum. Trends in Cognitive Sciences, 2, 338–347.
Worsley, K. J., & Friston, K. J. (1995). Analysis of fMRI time-series revisited—Again. Neuroimage, 2, 173–181.
Yoon, E. Y., & Humphreys, G. W. (2005). Direct and indirect effects of action on object classification. Memory & Cognition, 33, 1131–1146.
Yoon, E. Y., & Humphreys, G. W. (2007). Dissociative effects of viewpoint and semantic priming on action and semantic decisions: Evidence for dual routes to action from vision. Quarterly Journal of Experimental Psychology, 60, 601–623.
Yoon, E. Y., Humphreys, G. W., & Riddoch, M. J. (2010). The paired-object affordance effect. Journal of Experimental Psychology: Human Perception and Performance, 36, 812–824.