Abstract

Some studies have reported that understanding concrete action-related words and sentences elicits activations of motor areas in the brain. The present fMRI study goes one step further by testing whether this is also the case for comprehension of nonfactual statements. Three linguistic structures were used (factuals, counterfactuals, and negations), referring either to actions or, as a control condition, to visual events. The results showed that action sentences elicited stronger activations than visual sentences in the SMA, extending to the primary motor area, as well as in regions generally associated with the planning and understanding of actions (left superior temporal gyrus, left and right supramarginal gyri). Also, we found stronger activations for action sentences than for visual sentences in the extrastriate body area, a region involved in the visual processing of human body movements. These action-related effects occurred not only in factuals but also in negations and counterfactuals, suggesting that brain regions involved in action understanding and planning are activated by default even when the actions are described as hypothetical or as not happening. Moreover, some of these regions overlapped with those activated during the observation of action videos, indicating that the act of understanding action language and that of observing real actions share neural networks. These results support the claim that embodied representations of linguistic meaning are important even in abstract linguistic contexts.

INTRODUCTION

According to the embodiment approach to meaning, language is grounded in the sensory motor world. In other words, the same perceptual, motor, and emotional brain mechanisms used to process real-world experience are involved to some extent in the processing of linguistic meaning (Barsalou, Santos, Simmons, & Wilson, 2008; Glenberg, Sato, Cattaneo, Palumbo, & Buccino, 2008; Zwaan & Taylor, 2006). This view is opposed to symbolic approaches, which claim that even if sensory motor processes in the brain are elicited by words, linguistic meaning is fundamentally amodal, abstract, and disembodied (cf. Mahon & Caramazza, 2008). In recent years, the embodiment approach to language has received considerable support from behavioral and neuroscience studies. For example, it has been found that the processing time of action sentences is differentially modulated by the simultaneous performance of a motor movement that either matches or mismatches the described action (Glenberg & Kaschak, 2002), suggesting that the processing of action meaning and the performing of movements share brain processes. Consistent with these findings, fMRI studies have shown that action words elicit quite specific activations in the motor and premotor cortices and that these activations seem to occur automatically. For instance, verbs like “whistling” activate somatotopic brain regions partially overlapping with those involved in real mouth motion, whereas verbs like “grasping” activate somatotopic regions involved in hand motion (e.g., Buccino et al., 2005; Hauk, Johnsrude, & Pulvermüller, 2004). Similar findings have been reported for the comprehension of action sentences (Urrutia, Gennari, & de Vega, 2012; Moody & Gennari, 2010; Tettamanti et al., 2008; Aziz-Zadeh, Wilson, Rizzolatti, & Iacoboni, 2006; Rizzolatti & Craighero, 2004).

Several neuroimaging studies have linked the comprehension of action language with the functioning of the mirror neuron system (Meister & Iacoboni, 2007; Aziz-Zadeh et al., 2006). For example, Aziz-Zadeh et al. (2006) demonstrated that reading sentences describing hand actions activated the same premotor brain areas as watching videos that showed an actress performing a manual action. In the same vein, TMS studies revealed that understanding action language modulates cortico-spinal excitability when specific somatotopic areas, also involved in performing and observing actions, are stimulated (Candidi, Leone-Fernandez, Barber, Carreiras, & Aglioti, 2010; Tomasino, Fink, Sparing, Dafotakis, & Weiss, 2008; Buccino et al., 2005). This led these researchers to conclude that mirror neurons are involved in the comprehension of action language.

Other studies have not found clear evidence that action language activates somatotopic motor or premotor regions, let alone mirror neurons. Instead, they report that action language triggers activations in regions associated with high-order motor processes such as planning, controlling, and understanding actions. These are mainly the inferior parietal lobe, the SMA, and the precentral region (Tremblay & Small, 2011; Desai, Binder, Conant, & Seidenberg, 2010; Rueschemeyer, Pfeiffer, & Bekkering, 2010; Raposo, Moss, Stamatakis, & Tyler, 2009; Rueschemeyer, van Rooij, Lindemann, Willems, & Bekkering, 2009; Postle, McMahon, Ashton, Meredith, & de Zubicaray, 2008). Finally, some studies suggest that action language engages visual networks in addition to motor networks, eliciting activations in areas that are engaged during the visual processing of body movements (Desai et al., 2010; Rueschemeyer et al., 2010; Wallentin, Lund, Ostergaard, Ostergaard, & Roepstorff, 2005).

To conclude, many studies have shown that the comprehension of action-related language elicits activations in different sensory motor neural networks partially overlapping with those involved in performing and observing these actions. However, many of these studies were biased toward simple materials with concrete factual meaning, focusing on isolated action verbs or, at most, on short sentences describing factual actions. Only recently have some studies employed action-related sentences embedded in more complex narratives (e.g., Wallentin et al., 2011; Deen & McCarthy, 2010). The goal of this study was to further investigate the role of linguistic context in motor activation, employing abstract linguistic constructions that render concrete action events “unreal.” In particular, we studied action sentences with a negative or counterfactual structure, that is, sentences referring to events that either did not happen or were hypothetical. It is not known whether, during comprehension, such sentences elicit embodied representations in the same way as factual sentences do. For instance, while reading the negative sentence “Maria did not cut the bread,” it could be unnecessary to carry out a motor simulation of the action “cutting,” because the action is not being performed in the situation described. Some studies suggest that this is the case: neuroimaging experiments have shown that action-related affirmative sentences activate motor and premotor brain regions, whereas matched negative sentences do not (Tomasino, Weiss, & Fink, 2010; Tettamanti et al., 2008), and TMS studies have demonstrated that action-related affirmative sentences modulate motor cortico-spinal excitability, reflected in the size of motor-evoked potentials, whereas their negative counterparts do not (Liuzza, Candidi, & Aglioti, 2011; Schütz-Bosbach, Avenanti, Aglioti, & Haggard, 2008).
However, it is also possible that embodied representations of negated concepts are initially built and then later suppressed, as suggested by some behavioral studies (e.g., Kaup, Yaxley, Madden, Zwaan, & Lüdtke, 2007). Counterfactual sentences, such as “If I had bought that lottery ticket, I would have won a million dollars,” are another case of linguistic structures describing unreal events. Counterfactuals involve a paradoxical dual meaning: They have a realistic meaning, which is an implicit negation (“I didn't buy the ticket or win a million dollars”), but at the same time they invite the reader/listener to consider the events “as if” they had happened (“I bought the ticket and won a million dollars”). Recently, researchers have been paying attention to how counterfactuals are understood, exploring the temporal course of their dual meaning activation by means of reading times (Stewart, Haigh, & Kidd, 2009; Ferguson & Sanford, 2008; de Vega, Urrutia, & Riffo, 2007). They found that, after reading a counterfactual sentence, two alternative representations of the events momentarily coexist. Moreover, the understanding of action-related counterfactuals seems to activate motor processes in the brain, interfering with the planning of a simultaneous motor response (de Vega & Urrutia, 2011), and counterfactual sentences describing high-effort actions (moving the sofa) elicit more BOLD activation than low-effort actions (moving the picture), especially in the left inferior parietal lobe, a region responsible for planning and understanding actions (Urrutia et al., 2012).

The present research attempts to systematically contrast the neural processes underlying action language embedded in negation, counterfactual, or factual contexts by means of fMRI. The complex semantics of negations and counterfactuals makes the study of their neural specificity a potentially interesting topic in itself. For instance, we might expect that the act of understanding counterfactual sentences elicits activations in more extensive brain networks than the act of understanding factual sentences, because the two alternative representations compete with each other, and thus may engage prefrontal inhibition or control processes. Consistent with this hypothesis, it has been found that patients with Parkinson's disease or pFC lesions (often medially located) are impaired in counterfactual generation and reasoning (Gómez-Beldarrain, García-Monco, Astigarraga, González, & Grafman, 2005; McNamara, Durso, Brown, & Lynch, 2003); also, clusters of activation in the SMA and the ACC have been observed in an fMRI study of healthy participants understanding counterfactual sentences (Urrutia et al., 2012).

However, the main focus of the present research is to explore whether embodied representations are activated by negations and counterfactuals in the same or a different way as by factuals. One possibility is that they share an embodied semantics; namely, that action language activates by default the same motor neural network, regardless of the status of reality conveyed by the particular linguistic structure. Another possibility is that the semantics of negations and counterfactuals differentially modulate the activity of the motor neural network when understanding action language. For instance, negations might inhibit or suppress motor activations, whereas counterfactuals, like factuals, would trigger activation in this network, because one aspect of understanding counterfactual meaning entails representing the events, at least momentarily, as if they were real. If this is the case, our findings would support the claim that embodied representations are context dependent rather than automatically triggered by action-related language.

A second goal of this study is to test whether the comprehension of action language activates brain regions that overlap with the sensory motor network involved in action observation. We know that observing videos of another's manual actions activates motor, premotor, and parietal regions in the brain (e.g., Binkofski & Buccino, 2006; Shmuelof & Zohary, 2006) as well as extrastriate temporo-occipital regions that play a role in the visual encoding of body motions (Urgesi, Candidi, Ionta, & Aglioti, 2007; Downing, Peelen, Wiggett, & Tew, 2006; Calvo-Merino, Glaser, Grèzes, Passingham, & Haggard, 2005; Michels, Lappe, & Vaina, 2005). Moreover, some of these activations could be considered to reflect the activity of the human mirror neuron system, given that the same regions are activated both in action performance and in action observation (Aziz-Zadeh et al., 2006; Buccino et al., 2005; Rizzolatti & Craighero, 2004). Therefore, if we find overlapping activations in action language and action observation, this could be considered evidence that language comprehension partially relies on the mirror neuron system, which presumably is involved in understanding others' actions.

With these goals in mind, we asked our participants to perform two different tasks. The first was a reading comprehension task in which they were given factual, negative, or counterfactual paragraphs, each consisting of an antecedent clause describing a character in a simple scenario, and a consequent clause describing the character either doing a manual action or watching an object (see Table 1). To explore the action effects in language, we used the vision-related clauses as a contrasting condition, rather than abstract language or resting states. This is a very strict contrasting criterion, because both action and vision events take place in concrete scenarios, and involve concrete experiences referring to objects. They differ, however, in that the former involve object manipulation, whereas the latter involve passive observation.

Table 1. 

Examples of Linguistic Materials in Spanish and Their Approximate Translation into English

ACTION LANGUAGE 
Factual: Como ha sido mi cumpleaños / he desenvuelto los regalos 
Given that it was my birthday / I unwrapped the gifts 
Negation: Como no era mi cumpleaños / no desenvolví los regalos 
Given that it was not my birthday / I didn't unwrap the gifts 
Counterfactual: Si hubiera sido mi cumpleaños / habría desenvuelto los regalos 
If it had been my birthday / I would have unwrapped the gifts 
 
VISUAL LANGUAGE 
Factual: Como he estado en la Gran Avenida / me he fijado en la escultura. 
Given that I was on Main Avenue / I noticed the sculpture 
Negation: Como no estuve en la Gran Avenida / no me fijé en la escultura 
Given that I wasn't on Main Avenue / I didn't notice the sculpture 
Counterfactual: Si hubiera estado en la Gran Avenida / me habría fijado en la escultura. 
If I had been on Main Avenue / I would have noticed the sculpture 
 
NONSENSIBLE (GO) 
Como he subido a un avión / he puesto la sartén al fuego. 
Given that I've boarded the plane / I've put the pan on the stove 
Como no subí a la montaña / no toqué el violín. 
Given that I have not climbed the mountain / I didn't play the violin. 
Si hubiera estado en la selva / habría esquiado en la nieve. 
If I had been in the tropical forest / I would have skied in the snow. 
 
PSEUDOSENTENCE (GO) 
Gaza je trompre an di boreba / len borel birte. 

The manipulation of content was constrained to the second clause of each paragraph, whereas the manipulation of structure involved both clauses. Examples of nonsensible sentences and pseudosentences are also shown (GO response).

The second task consisted of watching short videos depicting manual actions that were similar, although not identical, to those described by the linguistic materials. As a contrasting condition, participants observed pictures of nonmanipulable objects. Although the action language and action observation tasks differed considerably, we expected both to activate overlapping brain regions, which would support the embodied approach to linguistic meaning. Moreover, such shared activations would support the involvement of the mirror neuron system in action language. Most importantly, the purpose of this task was to verify whether the neural commonality between action observation and action language is modulated differentially by the factual, negative, and counterfactual construals.

METHODS

Participants

Nineteen healthy right-handed Spanish native speakers with normal or corrected-to-normal vision participated in the experiment (16 women, 3 men; mean age = 24 years). All participants were attending graduate or postgraduate courses at the University of La Laguna, Spain. All gave written informed consent and were paid for their participation. The experiment was conducted in accordance with the rules established by the University Ethics Committee.

Materials and Design

Linguistic Task

A 2 (Content: action vs. visual) × 3 (Structure: factual, negation, counterfactual) × 2 (Clause: first vs. second) repeated-measures factorial design was employed. The linguistic material consisted of 180 paragraphs written in Spanish, 90 of which referred to actions (action language condition) and 90 to visual content (visual language condition). For each type of content, three versions of each paragraph were written: factual, negation, and counterfactual. Example sentences are displayed in Table 1. Each paragraph consisted of two clauses written in the first person. The first clause described the scenario in which the character was situated, starting with a causal connective in factuals (Como…/given that…) and in negations (Como no…/given that I didn't…), or with a conditional followed by a subjunctive in counterfactuals (Si hubiera…/if I had…). The second clause described the action or visual event that was the consequence of the event described in the first clause. The number of words was the same across the three versions of each paragraph, although counterfactual paragraphs were slightly longer in characters and syllables.
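As a rough illustration of the design just described, the condition cells and per-condition item counts can be enumerated as follows; the variable labels are ours, not the authors'.

```python
from itertools import product

# Sketch of the 2 x 3 x 2 repeated-measures design described above.
# Labels are illustrative; they are not the authors' variable names.
contents = ["action", "visual"]
structures = ["factual", "negation", "counterfactual"]
clauses = ["first", "second"]

cells = list(product(contents, structures, clauses))

# 90 paragraphs per content, with the three structure versions
# counterbalanced across items: 30 items per Content x Structure pair.
paragraphs_per_content = 90
items_per_cell = paragraphs_per_content // len(structures)

print(len(cells), items_per_cell)
```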

To select the materials, two normative studies were conducted. In the first, nouns denoting relatively small, man-made objects were given to participants with the instruction to judge their manipulability on a 7-point rating scale. High-manipulability objects included manual tools (spoon, bottle, screwdriver, etc.), whereas low-manipulability objects were objects to look at or listen to rather than touch (traffic light, loudspeaker, painting, etc.). The second normative study involved sentences describing actions with high-manipulability objects. The participants' task was to judge how familiar the actions were and how frequently they performed them. Although familiarity and frequency correlate, in some cases the two scores differ markedly: for instance, participants consider “sewing” quite familiar although they do not practice it frequently.

On the basis of the normative studies, we selected 90 actions employing high-manipulability objects (M = 5.84, SD = 1.98) that were both familiar and frequent in the participants' repertoire. We also selected 90 visual events involving low-manipulability objects (M = 2.42, SD = 1.2). For each action and visual content, we wrote a factual, a negative, and a counterfactual sentence. Sentences with manipulable objects described manual actions with those objects; sentences with nonmanipulable objects described visual interactions with them. Notice that the manipulation of content was constrained to the second clause, which describes either an action or a visual experience, whereas the manipulation of linguistic structure affected both the first and the second clause. The action and visual second clauses were matched in number of syllables (M = 8.6, SD = 1.1 and M = 8.9, SD = 1.5, respectively; F(1, 178) = 3.28, p = .07) and in the lexical frequency of their nouns (M = 58, SD = 92 and M = 39, SD = 72, respectively; F(1, 178) = 2.53, p = .11). The action verbs were less frequent than the visual verbs in the second clause, although the difference did not reach statistical significance (M = 81.6, SD = 153, and M = 128, SD = 204, respectively; F(1, 178) = 2.99, p = .085). One reason for this differential trend in verb frequency is that the Spanish lexicon has few purely visual verbs of relatively high frequency (ver/to see; observar/to observe; mirar/to watch; fijarse/to notice; contemplar/to contemplate), whereas there are dozens of manipulative action verbs, most of them of relatively low frequency. In addition to the 180 experimental sentences, 18 nonsensible sentences and 10 sentences composed of pseudowords were created to check participants' comprehension, as will be explained later.
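The matching statistics above are two-group one-way ANOVAs. A minimal sketch of such an F test, run here on randomly generated syllable counts (illustrative values only, not the study's data), is:

```python
import numpy as np

def one_way_f(group_a, group_b):
    """F statistic for a two-group one-way ANOVA, as in the F(1, 178) tests."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    grand = np.concatenate([a, b]).mean()
    # between-groups sum of squares (df = 1 for two groups)
    ss_between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    # within-groups sum of squares (df = n_a + n_b - 2)
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    df_within = len(a) + len(b) - 2
    return ss_between / (ss_within / df_within)

# Illustrative data: 90 syllable counts per condition, drawn from the
# reported means and SDs (this does not reproduce the actual items).
rng = np.random.default_rng(0)
action = rng.normal(8.6, 1.1, 90)
visual = rng.normal(8.9, 1.5, 90)
F = one_way_f(action, visual)
```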

Perceptual (Nonlinguistic) Task

Four sets of visual stimuli were created for this study: 30 action videos, 30 pictures of nonmanipulable objects, 30 pictures of manipulable objects, and 30 scrambled images (although only the first two conditions were relevant for the current research). Each video lasted 4 sec and showed an actress's hands performing simple, familiar actions (opening a bottle, writing on paper, hammering a nail, etc.). The stimuli for the nonmanipulable objects consisted of static pictures, also shown for 4 sec each, of the corresponding objects. Some examples are shown in Figure 1. The actions and objects in the videos were similar, although not identical, to those employed in the action language material. In the same vein, the nonmanipulable objects were similar, although not identical, to those referred to in the visual language material.

Figure 1. 

Examples of materials in the perceptual task. The left-hand images correspond to representative frames of two action videos, and the right-hand images are pictures of two nonmanipulable objects.

fMRI Data Collection and Procedure

Images were obtained with a 3T GE Signa Excite MRI scanner at the Magnetic Resonance Service for Biomedical Research of the University of La Laguna. Functional images were acquired using a gradient-echo EPI sequence (repetition time = 2000 msec, echo time = 50 msec, flip angle = 90°, matrix = 64 × 64, field of view = 24 cm) covering 32 axial slices of 3.5 mm thickness; 625 volumes were acquired for the linguistic task and 171 volumes for the perceptual task. The first six volumes of each run were discarded to allow for T1 equilibration effects. Low-frequency signal changes and baseline drifts were removed by applying a temporal high-pass filter with a cutoff of 1/128 Hz. All functional images were also resliced to match the source (anatomical T1) image voxel by voxel, producing images with a voxel size of 2 × 2 × 2 mm. Finally, images were normalized to Montreal Neurological Institute (MNI) space, and three-dimensional spatial smoothing with a Gaussian kernel of 8 mm FWHM was applied.
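A minimal sketch of a discrete-cosine high-pass filter of the kind SPM applies, assuming the TR (2 sec), volume count, and 1/128 Hz cutoff stated above (SPM's exact implementation differs in detail):

```python
import numpy as np

def dct_highpass(data, tr=2.0, cutoff=128.0):
    """Regress out a discrete cosine basis covering frequencies below 1/cutoff Hz."""
    n = len(data)
    # number of low-frequency cosine regressors below the cutoff period
    k = int(np.floor(2 * n * tr / cutoff + 1))
    t = np.arange(n)
    basis = np.column_stack(
        [np.cos(np.pi * (2 * t + 1) * j / (2 * n)) for j in range(1, k)]
    )
    beta, *_ = np.linalg.lstsq(basis, data, rcond=None)
    return data - basis @ beta

# Synthetic voxel time course: slow scanner drift plus a fast "signal".
drifting = np.linspace(0, 50, 625) + np.sin(np.arange(625) * 0.5)
filtered = dct_highpass(drifting)
```

The slow linear drift is absorbed by the low-frequency cosines and removed, while the faster oscillation (well above 1/128 Hz) passes through.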

Before the brain scanning sessions, participants were brought into a scanner simulator where they received instructions and training for the comprehension task. Participants were instructed to read the factual, negative, and counterfactual paragraphs for comprehension and perform a go/no-go task. Namely, each time they noticed a nonsensible sentence or a sentence with pseudowords, they had to press a key with their right-hand index finger (go trials), otherwise they were not to respond. This procedure aimed to encourage participants to read for comprehension. Importantly, the experimental sentences did not require any motor response (no-go trials), which otherwise could interfere with the expected “embodied” processes in sentence comprehension. After the training session, the participants were given information on safety norms and were placed in the scanner; they then donned goggles, through which the stimuli were to be presented, and earplugs. Also, they were asked to remain relaxed and motionless throughout the session. Finally, they were instructed on how to use the response box, after which the experiment started. There were three scanning phases performed consecutively in the same session:

  • Linguistic task. An event-related design was used for this task. Each participant received 180 experimental sentences: 30 action factual, 30 action negation, 30 action counterfactual, 30 visual factual, 30 visual negation, and 30 visual counterfactual. In addition, 28 implausible or pseudoword filler sentences were included. The assignment of the three structures to each content was counterbalanced across participants; that is, a given paragraph was presented as factual to some participants, as negative to others, and as counterfactual to the rest. The 208 sentences were presented in random order for each participant in two consecutive runs of 104 sentences each. Each sentence was presented according to the following sequence: first, a fixation point was displayed in the center of the screen for a jittered interval of 3–5 sec; the first and second clauses were then presented for 1500 msec each, separated by a jittered blank interval of 2–3 sec; finally, a 1000-msec blank followed to allow participants to provide their responses in the go/no-go task, as explained above.

  • Perceptual task. A block design was employed for the perceptual task. Participants received eight blocks of 15 stimuli each, which were presented continuously in random order. Only four blocks of stimuli were relevant for this study: two blocks of videos and two blocks of pictures of nonmanipulable objects. Each block lasted 60 sec. Between blocks, there were resting periods of 18 sec each.

  • Whole-brain structural images. In the final scanning phase, whole-brain structural images were obtained for each participant (1 × 1 × 1.3 mm resolution) while they remained motionless with their eyes closed. These images were acquired with the following parameters: echo time = 3.87 msec, repetition time = 9.44 msec, flip angle = 7°, and field of view = 256 × 256 × 160 mm, yielding 167 contiguous 1.3-mm-thick slices. This stage lasted about 10 min.
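The jittered trial timing of the linguistic task, and the way clause onsets are later turned into a model regressor, can be sketched as follows. The double-gamma HRF parameters below are common textbook values, not necessarily those of SPM8, and the trial generator is an illustration of the stated jitter ranges, not the authors' presentation script.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

def trial(t0):
    """One trial's clause onsets (seconds), using the jitter ranges above."""
    t1 = t0 + rng.uniform(3.0, 5.0)        # jittered fixation -> clause 1 onset
    t2 = t1 + 1.5 + rng.uniform(2.0, 3.0)  # clause 1 (1.5 s) + jittered blank
    t_end = t2 + 1.5 + 1.0                 # clause 2 (1.5 s) + 1 s response slot
    return t1, t2, t_end

def hrf(t):
    """Double-gamma hemodynamic response function (textbook parameters)."""
    return (t ** 5 * np.exp(-t) / gamma(6)
            - (1 / 6) * t ** 15 * np.exp(-t) / gamma(16))

dt = 0.1
n = 600                                    # 60 s window at 0.1 s resolution
deltas = np.zeros(n)
t, onsets = 0.0, []
while t < 40:                              # a few trials fit in the window
    t1, t2, t = trial(t)
    onsets += [t1, t2]
for o in onsets:
    deltas[int(o / dt)] = 1.0              # delta function at each clause onset
regressor = np.convolve(deltas, hrf(np.arange(0, 30, dt)))[:n]
```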

fMRI Data Analysis

Preprocessing and statistical analyses were performed using SPM tools. Motion correction employed the Art-Global and Art-Slice tools, implemented in MATLAB 7.0 (MathWorks, Inc., Natick, MA). The anatomical T1 was coregistered with the mean of the EPIs (Collignon et al., 1995). The parameters for normalization of the anatomical image were used to transform the functional scans to MNI space. Finally, all images were smoothed with a Gaussian kernel of 8 mm FWHM.
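The FWHM-to-sigma conversion behind the 8 mm smoothing can be made explicit. Since FWHM = sigma · 2·sqrt(2·ln 2), an 8 mm FWHM kernel on 2 mm isotropic voxels has sigma ≈ 1.70 voxels per axis; a sketch of the corresponding separable 1-D kernel:

```python
import numpy as np

fwhm_mm = 8.0
voxel_mm = 2.0
# FWHM = sigma * 2 * sqrt(2 * ln 2)  =>  sigma_vox ≈ 1.70 voxels
sigma_vox = fwhm_mm / voxel_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

# A separable 1-D Gaussian kernel (applied along x, y, z in turn),
# truncated at 3 sigma and normalized to unit sum.
radius = int(np.ceil(3 * sigma_vox))
x = np.arange(-radius, radius + 1)
kernel = np.exp(-x ** 2 / (2 * sigma_vox ** 2))
kernel /= kernel.sum()
```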

The statistical analyses of the functional images were performed with SPM8 (www.fil.ion.ucl.ac.uk/spm/, FIL Methods Group, University College London), implemented in MATLAB 7.0.4, following a massively univariate approach based on the general linear model. Statistical parametric maps were generated by modeling, for each stimulus type, a regressor obtained by convolving the canonical hemodynamic response function with delta functions at stimulus onsets, and by including the six motion correction parameters as additional regressors. Parameters of the general linear model were estimated with a robust regression using weighted least squares that also corrected for temporal autocorrelation in the data (Diedrichsen & Shadmehr; www.icn.ucl.ac.uk/motorcontrol/imaging/robustWLS.html). Three sets of analyses were conducted with the fMRI data:

  • a. Whole-brain analyses for the perceptual task data. We computed the contrast [action videos > nonmanipulable objects] to identify the brain regions activated during action observation while controlling for basic perceptual processes (object recognition).

  • b. Whole-brain analyses for the linguistic task data. For the language data, several contrasts were computed: (1) action language and visual language activations were computed for the second clauses, collapsing the linguistic format [action factual + action negation + action counterfactual] versus [vision factual + vision negation + vision counterfactual]; (2) counterfactual-related activations, collapsing the content and the two clauses to produce two relevant contrasts: [action counterfactual + vision counterfactual] versus [action factual + vision factual] and [action counterfactual + vision counterfactual] versus [action negation + vision negation]; (3) negation-related activations, collapsing the content and the two clauses to produce again two contrasts: [action negation + vision negation] versus [action factual + vision factual] and [action negation + vision negation] versus [action counterfactual + vision counterfactual].

Population-level inferences were tested using the SPM8 random effects model, which estimated the second-level t statistic at each voxel. Most clusters reported in the study reached the corrected threshold of p < .05 (false discovery rate), as shown in Tables 2 and 3. However, the analysis of structure shown in Table 4 did not reach the corrected threshold criterion; given the theoretical importance of these contrasts, we therefore employed an uncorrected threshold of p < .001 combined with a cluster extent greater than 100 voxels, a criterion intended to keep an appropriate balance between Type I and Type II errors (Lieberman & Cunningham, 2009). All local maxima are reported as MNI coordinates.
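The combined criterion above (voxelwise p < .001 plus cluster extent > 100 voxels) can be sketched as follows; a simple flood fill stands in for the connected-component labeling that SPM performs, and the p map is synthetic.

```python
import numpy as np

def cluster_threshold(pmap, p=0.001, extent=100):
    """Keep voxels with pmap < p that lie in 6-connected clusters larger than `extent`."""
    passing = pmap < p
    out = np.zeros_like(passing)
    seen = np.zeros_like(passing)
    for idx in zip(*np.nonzero(passing)):
        if seen[idx]:
            continue
        stack, cluster = [idx], []
        while stack:
            v = stack.pop()
            if seen[v] or not passing[v]:
                continue
            seen[v] = True
            cluster.append(v)
            x, y, z = v
            for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
                nb = (x + dx, y + dy, z + dz)
                if all(0 <= c < s for c, s in zip(nb, pmap.shape)):
                    stack.append(nb)
        if len(cluster) > extent:           # extent criterion
            for v in cluster:
                out[v] = True
    return out

pmap = np.ones((20, 20, 20))
pmap[2:9, 2:9, 2:9] = 1e-4      # 343-voxel cluster: survives
pmap[15:17, 15:17, 15:17] = 1e-4  # 8-voxel cluster: rejected by extent
mask = cluster_threshold(pmap)
```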

  • c. Overlapping regions between the language and the perceptual tasks. To examine whether the act of understanding action language shares neural networks with the act of watching action videos, we checked the overlap between the areas with significant activations in the action language task (action language > visual language in the second clause) and the areas with significant activations in the action observation task (action videos > nonmanipulable objects).
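A sketch of that overlap check on two thresholded binary maps: a voxel counts as shared if it survives both the language contrast and the video contrast. The maps below are random stand-ins for the real SPM maps, and the Dice coefficient is our added summary statistic, not one reported here.

```python
import numpy as np

# Synthetic stand-ins for the two thresholded significance maps.
rng = np.random.default_rng(2)
lang_map = rng.random((10, 10, 10)) > 0.7    # significant for action language
video_map = rng.random((10, 10, 10)) > 0.7   # significant for action videos

overlap = lang_map & video_map               # voxels shared by both contrasts
dice = 2 * overlap.sum() / (lang_map.sum() + video_map.sum())
```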

Table 2. 

Brain Regions of Activation for the Contrast Videos versus Nonmanipulable Objects (NMO) for the Whole Brain (Cluster Significant at the Corrected Threshold p < .05)

Region  BA  Cluster Size  Z Score  x y z

Video > NMO
R middle occipital gyrus 37 7159 6.34 46 −72 4
 R middle temporal gyrus 37 6.33 52 −54 6
 R middle temporal gyrus 22 6.12 60 −42 4
R precentral gyrus 268 4.21 32 −10 56
L inferior parietal lobe 2, 6 5443 5.97 −30 −46 56
 L middle occipital gyrus 37 5.80 −44 −70 12
 L middle temporal gyrus 21 5.37 −48 −54 12
L lingual gyrus 18 1558 5.11 −18 −74 −22
 L lingual gyrus 18 5.08 −8 −78 0
 L cuneus 18 4.97 −12 −90 16
L superior frontal gyrus 218 4.19 −24 −8 56
L precentral gyrus (a) 166 4.35 −58 4 28
L cerebellum 218 4.64 −26 −70 −50

NMO > Video
R fusiform gyrus 37 1043 5.70 26 −40 −16
L fusiform gyrus 37 500 5.22 −28 −40 −12
R lingual gyrus (a) 17 246 3.96 12 −52 6
L calcarine gyrus (a) 30 132 3.63 −12 −52 8

aCluster significant at the uncorrected threshold p < .001 and cluster size > 100.

Table 3. 

Brain Regions of Activation in the Contrasts Action Language versus Visual Language, Second Clause (Cluster Significant at the Corrected Threshold p < .05)

Region  BA  Cluster Size  Z Score  x  y  z
Action > Visual 
L supplementary motor area 3262 4.06 −4 −6 64 
 R supplementary motor area 4, 6 4.98 6 −26 58 
 L precentral gyrus 4.06 −40 −6 56 
L hippocampus 37 1296 3.72 −24 −28 −8 
 L middle temporal pole 38 3.47 −46 10 −28 
L superior temporal lobe 41 1000 3.86 −48 −34 20 
 L superior temporal lobe 41 3.47 −40 −36 20 
 L inferior parietal lobe 40 3.61 −48 −38 40 
L middle occipital lobe 19 227 4.01 −50 −74 2 
L middle occipital lobe 19 3.16 −42 −82 12 
R supramarginal gyrusa 48 231 3.93 56 −44 28 
R middle temporal gyrusa 37 498 3.71 52 −70 10 
 R middle temporal gyrusa 37 3.32 44 −60 4 
 R middle temporal gyrusa 21 3.26 56 −44 0 
R superior temporal gyrusa 48 138 4.37 50 −14 −8 
R middle occipital gyrusa 39 156 3.69 42 −76 30 
R superior temporal polea 38 211 3.49 44 10 −24 
 
Visual > Action 
R inferior parietal lobe 40 1589 3.80 32 −50 48 
L superior parietal lobea 751 3.64 −20 −54 48 
L cuneusa 18 252 3.92 −10 −72 28 
R cuneusa 18 3.25 6 −72 32 
R posterior cingulate cortexa 29 123 3.87 12 −38 12 
L inferior occipital lobea 19 248 3.59 −30 −82 −2 

aCluster significant at the uncorrected threshold of p < .001 and cluster size > 100.

Table 4. 

Brain Regions of Activation in the Contrasts Comparing the Linguistic Structures Using the Whole Paragraphs in the Analysis (p < .001, Uncorrected; Cluster Size > 100)

Region  BA  Cluster Size  Z Score  x  y  z
Counterfactual > Factual 
L supplementary motor area 317 3.60 −6 20 58 
L supplementary motor area 3.32 −8 6 66 
L supplementary motor area 3.11 −10 4 60 
L precentral gyrus 279 3.96 −38 −2 54 
 L precentral gyrus 3.08 −30 −14 56 
R precentral gyrus 148 3.45 44 −8 58 
 
Negation > Factual 
L middle temporal gyrus 21 529 3.88 −58 −42 −2 
L middle temporal gyrus 37 214 3.62 −48 −50 −22 
R inferior frontal gyrus 47 101 3.94 34 30 −14 
R middle temporal gyrus 21 131 3.57 54 −24 −4 
 
Counterfactual > Negation 
R precentral gyrus 197 4.27 44 −8 58 
L supplementary motor area 152 3.13 −2 8 56 
L lingual gyrus 18 206 3.20 −10 −66 −4 
R lingual gyrus 19 227 3.10 18 −58 −6 

RESULTS

Behavioral Results

Two types of behavioral data were obtained from the go/no-go task for each participant: omission errors in the go trials (failures to respond to pseudosentences and nonsensible sentences) and false alarms in the no-go trials (responses to sensible sentences). The low rate of omission errors in the go trials (about 1%), which were not analyzed further, indicates that participants correctly judged pseudosentences and nonsensible sentences. The rates of false alarms were also very low and did not differ among structures (factuals: 1.56%; counterfactuals: 0.7%; negations: 1.73%; F(2, 36) = 1.88, p = .17) or contents (visual: 1.1%; action: 1.56%; F(1, 18) = 1.35, p = .26), and there was no Structure × Content interaction (F(2, 36) = 0.81, p = .45).
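These two measures can be computed directly from the trial records. The sketch below assumes a simple (trial_type, responded) encoding of go/no-go data; the encoding and function name are our own illustration, not the scoring script actually used:

```python
def score_go_nogo(trials):
    """Return (omission %, false-alarm %) from (trial_type, responded) pairs.
    'go' trials (pseudosentences, nonsensible sentences) require a response;
    'nogo' trials (sensible sentences) require withholding it."""
    go = [responded for kind, responded in trials if kind == "go"]
    nogo = [responded for kind, responded in trials if kind == "nogo"]
    omissions = 100.0 * go.count(False) / len(go)
    false_alarms = 100.0 * nogo.count(True) / len(nogo)
    return omissions, false_alarms

# Example: 1 missed go trial out of 100, 2 false alarms out of 100.
trials = ([("go", True)] * 99 + [("go", False)]
          + [("nogo", False)] * 98 + [("nogo", True)] * 2)
```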

Perceptual Task Effects

The results of the contrast [action videos > nonmanipulable objects] are displayed in Table 2 and Figure 2. In the right hemisphere, there were large clusters of significant activations in the precentral region and the middle temporal gyrus, extending to the superior temporal and the middle occipital cortex. In the left hemisphere, there were clusters of activations in the middle temporal and the superior temporal gyri, the inferior parietal lobule including the supramarginal gyrus, the lingual gyrus, the cuneus, and the superior frontal gyrus. The opposite contrast [nonmanipulable objects > action videos] produced activations in the left and right fusiform gyri and in calcarine and lingual regions. The voxels contained in the clusters reported as significant represent 7.8% of the total number of voxels included in the analysis.
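The 7.8% figure is simply the proportion of analyzed voxels that fall inside significant clusters. A minimal sketch, with illustrative numbers rather than the actual masks:

```python
import numpy as np

def percent_significant(sig_mask, analysis_mask):
    """Percentage of analyzed voxels that lie inside significant clusters."""
    in_both = np.logical_and(sig_mask, analysis_mask).sum()
    return 100.0 * in_both / analysis_mask.sum()

# Illustration: 78 significant voxels among 1000 analyzed voxels -> 7.8%.
analysis = np.ones(1000, dtype=bool)
significant = np.zeros(1000, dtype=bool)
significant[:78] = True
pct = percent_significant(significant, analysis)
```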

Figure 2. 

Brain regions activated by action observation (videos > nonmanipulable objects), action language (action > visual), and overlapping regions are shown. Activations were thresholded at p < .001, uncorrected, in clusters of at least 100 voxels. The main overlapping regions were in the left middle occipital cortex, corresponding to the EBA; in the left superior temporal lobe, close to the TPJ; in the left precentral gyrus; and in the left and right supramarginal gyri. These action-related activations were independent of the linguistic structure (no Content × Structure interaction was obtained). Generally, visual language elicited less activation than action language, as the activation plots in these regions illustrate. Bar graphs indicate contrast estimates for each language condition relative to zero at the peak coordinates of highest activation (or deactivation) inside the overlapping area: F = factual; CF = counterfactual; N = negation; V = visual; A = action.


Linguistic Task Effects

Action Language versus Visual Language

As reported in the Data Analysis section, the higher-level whole-brain analysis was performed on the second clause, and it revealed several clusters significantly more sensitive to action language than to visual language. An extensive group of voxels was observed bilaterally in the SMA (BA 4 and BA 6), extending to the left precentral gyrus. Groups of voxels were also activated in the left hippocampus extending to the left middle temporal pole, in the left middle occipital lobe, and in the left superior temporal gyrus extending to the left inferior parietal lobe. There were also small clusters of activation in the left inferior frontal gyrus (LIFG) associated with action language, which did not reach significance in the whole-brain analysis. However, when we used LIFG anatomical ROIs (pars opercularis, pars orbitalis, and pars triangularis) as masks, we found a significant overlapping cluster (p < .05, false discovery rate corrected) in the pars orbitalis (−50 20 −6). When the statistical threshold was relaxed (p < .001 and cluster size > 100), some theoretically relevant clusters of activation were found in the right hemisphere: the supramarginal gyrus, the middle and superior temporal gyri, the temporal pole extending to the inferior frontal gyrus, and the middle occipital gyrus. The reverse comparison [visual > action language] showed significant clusters of activation in the right inferior parietal lobe, the left superior parietal lobe, the cuneus, the posterior cingulate gyrus, and the left inferior occipital lobe (see Table 3).

Overlap between Action Language and Action Observation

Several clusters of overlapping voxels were observed when the action language and action observation contrasts were overlaid, as Figure 2 illustrates. These overlapping clusters comprised the left superior temporal gyrus near the TPJ, the left and right supramarginal gyri, the left precentral gyrus, and the left middle occipital gyrus in the extrastriate body area (EBA). As can be seen, the overlapping regions showed more activation for action language than for visual language (second clause), and this trend was virtually the same in the three linguistic structures under study. In other words, the action-related effects in language were not differentially modulated by factuals, negations, and counterfactuals.

Language Structure Effects

Counterfactuals elicited stronger activations in the SMA and in the precentral gyrus than both factuals and negations did. Negations, in comparison with factuals, elicited stronger activations in the left middle temporal gyrus, the right inferior frontal gyrus, and the right middle temporal gyrus (Table 4). No significant difference emerged in the contrast [negations > counterfactuals].

DISCUSSION

The issue of embodied semantics of action language is not a new topic in neuroimaging research. However, this study analyzed for the first time how action language embedded in complex discourse is processed in the brain. In particular, it explored whether linguistic structures like negations and counterfactuals, which describe “unreal” events, modulate the activation of motor regions in a similar or a different way than factual language. In a nutshell, we obtained two remarkable results. First, we found that action language elicited sensory motor activations, and these were comparable for the three linguistic structures explored here. Second, the comprehension of action language shared numerous underlying neural processes with the observation of actions, although the two tasks markedly differed in surface characteristics.

Action Language in the Brain

In comparison with visual language, action language elicited an extensive cluster of activations in the SMA bilaterally, extending to the left precentral gyrus in the motor cortex. The SMA and pre-SMA have traditionally been associated with the planning or selection of complex motor responses (Mostofsky & Simmonds, 2008; Lee, Chang, & Roh, 1999; Gerloff, Corwell, Chen, Hallett, & Cohen, 1997), although they might also contribute to control processes in action execution (Nachev, Wydell, O'Neill, Husain, & Kennard, 2007). Some studies have also reported activations in the pre-SMA during the processing of action language (Tremblay & Small, 2011; Rueschemeyer et al., 2010; Postle et al., 2008), and these authors interpret such activations as reflecting high-order conceptual processes rather than embodied representations of actions. Notice, however, that in this study we found activation in the SMA proper rather than the pre-SMA; the SMA proper is associated with planning and executing complex motor responses (Mostofsky & Simmonds, 2008; Gerloff et al., 1997). But why was the SMA activated here during the comprehension of action language, which does not demand motor responses? A possible explanation derives from recent studies reporting that some neurons in the SMA have "antimirror" properties; namely, they subserve inhibitory processes when people observe, but do not imitate, others' actions (Keysers & Gazzola, 2010; Mukamel, Ekstrom, Kaplan, Iacoboni, & Fried, 2010). This inhibitory role of some SMA neurons fits well with the demands of the action language task: the SMA would facilitate the understanding of the actions being referred to while actively inhibiting their performance.

In this study, the LIFG played a relatively modest role in the action language task in comparison with other results reported in the literature (e.g., Raposo et al., 2009; Moody & Gennari, 2010; Tettamanti et al., 2008). This may be due in part to the fact that, unlike these studies, we employed visual language rather than abstract language as the contrasting condition. Moreover, activation of the LIFG, pars triangularis, has not always been found for action language (e.g., Tremblay & Small, 2011), or it has been associated with words referring to mouth actions rather than to hand actions (Tettamanti et al., 2008; Hauk et al., 2004). Beyond the motor brain, there were other significant clusters of activation involved in understanding action language. The hippocampus, traditionally associated with the storage and retrieval of episodic memories, was more active for action language than for visual language. The involvement of the hippocampus in action observation (Rumiati, Papeo, & Corradi-Dell'Acqua, 2010; Decety et al., 1997) and in action language processing (Moody & Gennari, 2010; Raposo et al., 2009) has been reported previously. Moreover, Mukamel et al. (2010) reported that the hippocampus and the parahippocampal gyrus were functionally associated with the SMA, exhibiting similar mirror or antimirror properties. Therefore, the neural resonance recorded in all these regions suggests an integrated mirror neuron functional network operating in action language understanding beyond the traditional frontoparietal circuitry.

Visual language, in contrast with action language, exhibited clusters of activations in the right inferior parietal lobe, the superior parietal lobe, the cuneus, and the posterior cingulate cortex, which are regions that have been associated with visual imagery, visual attention, and visuospatial cognition (Thompson, Slotnick, Burrage, & Kosslyn, 2009; Kanwisher & Wojciulik, 2000).

Action Language Shares Sensory Motor Activations with Action Observation

The strongest support for the embodied semantics approach found in this study is the fact that some regions in the motor system, which are activated during action observation, are also involved in language comprehension, despite the fact that the two tasks differ in many superficial aspects. Both observing and understanding actions elicited bilateral activation in the inferior parietal regions extending into the supramarginal gyri. A large body of evidence indicates that the inferior parietal lobe and the supramarginal gyrus are involved in planning object-directed hand actions. Thus, neuroimaging and TMS studies investigating action planning have shown that inferior parietal structures respond strongly to planning tool actions, viewing or naming tools, reading descriptions of actions, and evaluating the manipulability of objects (Culham & Valyear, 2006; Johnson-Frey, Newman-Norlund, & Grafton, 2005; Noppeney, Josephs, Kiebel, Friston, & Price, 2005; Grezes & Decety, 2001). Therefore, the current finding of activation in the supramarginal cortices indicates that object-directed motor plans and object manipulation knowledge were recruited in action observation as well as in action language comprehension, confirming similar findings elsewhere (Urrutia et al., 2012; Desai et al., 2010; Moody & Gennari, 2010; Tettamanti et al., 2008). Action observation and action language also shared overlapping activation in the left superior temporal gyrus, at the TPJ. The TPJ is usually associated with mentalizing processes in humans, but it also interacts closely with areas concerned with biological motion patterns (Thompson et al., 2009; Shetreet, Palti, Friedmann, & Hadar, 2007; Wu, Waller, & Chatterjee, 2007). 
Finally, there was a cluster of overlapping activation in the left precentral gyrus with coordinates compatible with the somatotopic motor cortex for hand/arm actions (overlapping activation peak: −24 −9 56), similar to the activations reported in other studies with hand/arm action verbs (see Kemmerer & Gonzalez-Castillo, 2010, for a review). This replication is remarkable, because in this study participants did not process isolated manual action verbs but action descriptions embedded in two-clause sentences.

Beyond the motor brain, there were also significant activations in the right and left middle occipital gyri, shared with action observation in the left hemisphere. This region corresponds to the EBA, which is responsible for the visual encoding of body parts and body actions (Urgesi et al., 2007; Downing et al., 2006; Calvo-Merino et al., 2005; Michels et al., 2005). The human brain has developed specialized regions in the visual cortex to analyze signals of high biological value: the FFA is responsible for visually encoding faces, and the EBA for visually encoding body parts and body motions. After all, humans can encode and understand others' body actions absent from their own motor repertoire (e.g., we encode professional dancers' performances even if we are far from being able to imitate them). Furthermore, it is also reasonable to believe that the EBA plays a role in encoding not only observed actions but also described actions, as has been reported recently (Rueschemeyer et al., 2010; Saygin, McCullough, Alac, & Emmorey, 2009; Wallentin et al., 2005). In the present case, the statistical effects we obtained for action language (action > visual) do not necessarily imply substantial activation of the EBA; they could instead be driven by greater suppression of activity during the processing of visual language. Inspection of the bar plot in the top part of Figure 2 indicates, in fact, that only a small BOLD response appears for action-related negations and counterfactuals, in contrast with the strong suppression in the comparable vision-related conditions.

Action Language and Language Structures

A cursory inspection of the bar diagrams in Figure 2 reveals that factuals, negations, and counterfactuals do not differentially modulate action-related activations. The three linguistic construals exhibit similar patterns of activation in the selected anatomical regions. One trivial explanation for this apparent lack of differences is that the sentences may have been processed superficially, without fleshing out the factual, negative, or counterfactual meanings, given that the task demand was to detect bizarre sentences (the nonsensible go fillers) rather than to establish fine semantic distinctions among the three construals. We can rule out this explanation, however, because negations and counterfactuals produced their own specific brain activations, independently of their visual or action content. Thus, counterfactuals, in comparison with both factuals and negations, activated the left SMA, an area involved in the selection and inhibition of complex actions but also in dealing with alternatives in decision-making (see similar results in Urrutia et al., 2012). The multiple meanings of counterfactual sentences are very likely the reason for the activations observed in medial prefrontal structures. Negations, for their part, also exhibited specific activations in comparison with factuals, in the left middle temporal gyrus (see also Carpenter, Just, Keller, Eddy, & Thulborn, 1999). In summary, the three sentence constructions were understood properly, and the shared action-related effects can be considered genuine.

These findings seem to conflict with the idea that embodied representations are context dependent. For instance, some studies have reported that these representations occur for isolated action verbs or for linguistic contexts with concrete, factual meanings, but not for idioms (Cacciari et al., 2011; Raposo et al., 2009). In the same vein, suppression of motor processes has been reported for action-related negations in comparison with their affirmative counterparts (Liuzza et al., 2011; Tomasino et al., 2010; Schütz-Bosbach et al., 2008; Tettamanti et al., 2008). Instead, we offer evidence that negation and counterfactual construals, which refer to nonoccurring or hypothetical events, can also activate embodied representations of these events. The main difference between our experiment and the others reported in the literature is that here the negations and counterfactuals were embedded in antecedent–consequent paragraphs rather than in simple sentences. It is possible that readers of these materials need to flesh out the embodied representations to check whether the second clause is an acceptable consequence of the preceding antecedent event.

Embodied or Conceptual Representations?

Most of the significant activations in this study were found in brain regions responsible for higher-order processes such as planning, selecting, controlling, and inhibiting actions. Recently, some researchers have suggested that these higher-order areas are in charge of processing conceptual or strategic aspects of actions rather than embodied representations of actions (Bedny & Caramazza, 2011; Mahon & Caramazza, 2008; Postle et al., 2008; Kable, Lease-Spellmeyer, & Chatterjee, 2002). For instance, the inferior parietal and supramarginal regions might be responsible for processing amodal spatiotemporal patterns of motion rather than specific motor planning (Bedny & Caramazza, 2011), and the pre-SMA activations obtained in some action language studies could be associated with producing a general instruction cue for action rather than with selecting and controlling motor programs (Postle et al., 2008).

However, the claim that higher-order motor regions are purely conceptual and amodal is controversial. The brain motor system is a complex network with different levels of hierarchically organized processing, including not only the primary motor and premotor areas but also prefrontal, parietal, temporal, and supplementary motor areas, with rich neural and functional connectivity among them (Grezes & Decety, 2001; Jeannerod, 1997). Action performance consists of executing goal-directed motor programs that are planned and controlled by higher-order motor regions. Action understanding, however, could rely mainly on the activity of these higher-order regions, without the need to engage the primary motor areas, which, in fact, could be inhibited (Mukamel et al., 2010). This should not be a surprise, because understanding actions, according to the embodiment approach, entails a neural simulation process rather than overt motor performance. Moreover, the manual actions presented in the videos or described by the sentences in this study were rather heterogeneous. For instance, the language task included actions such as "turning on the shower tap," "hanging up the coat," or "filling a glass with water," which differ considerably in their motor programs. Therefore, we would only expect higher-order motor activations to be shared by these actions, whereas any activations in the primary motor and premotor areas associated with specific actions (if they exist) would not accumulate sufficiently overlapping BOLD signals to reach statistical significance. In a sense, embodied semantics based on these higher-order regions is "abstract," because it involves coarse-grained simulations of actions rather than fine-grained motor programs. For instance, understanding a phrase like "filling a glass with water" could involve a gross representation of a forward movement of the arm and the grasping hand, without getting into details of other motion parameters like the angle, speed, or distance to the object. 
This level of abstraction, however, is compatible with a partial use of the circuitry involved in performing and observing actions.

Conclusions

In summary, sensory motor networks partially overlapping those involved in action observation were activated during the comprehension of action language, suggesting that the two processes share neuroanatomical motor regions. Specifically, there was a motor-related network including clusters of activation in the supplementary motor cortex extending to the precentral cortex and in the parietal and temporal regions. Furthermore, another cluster of activation corresponded to the temporo-occipital cortex, a region responsible for visual analysis of biological motion. These activations were not substantially modulated by the linguistic structure, in spite of the abstract character of negations and counterfactuals, indicating that embodied semantics also underlies these complex linguistic construals.

Acknowledgments

This research was funded by the Spanish Ministerio de Economía y Competitividad (grants SEJ2007-66916 and SEJ2011-28679), the Canary Agency for Research, Innovation, and Information Society (NEUROCOG Project), and the European Regional Development Funds to Manuel de Vega. The fMRI data were obtained in the Magnetic Resonance Service for Biomedical Research at the University of La Laguna. We would like to thank José Miguel Díaz and Elena Gámez for their help in the elaboration of the materials and Yusniel Santos and Jorge Iglesias for their valuable support in the data preprocessing and analyses.

Reprint requests should be sent to Manuel de Vega, Facultad de Psicología, Universidad de La Laguna, Campus de Guajara, La Laguna, Tenerife, Spain, 38205, or via e-mail: mdevega@ull.es.

REFERENCES

REFERENCES
Aziz-Zadeh
,
L.
,
Wilson
,
S. M.
,
Rizzolatti
,
G.
, &
Iacoboni
,
M.
(
2006
).
Congruent embodied representations for visually presented actions and linguistic phrases describing actions.
Current Biology
,
16
,
1
6
.
Barsalou
,
L.
,
Santos
,
A.
,
Simmons
,
W. K.
, &
Wilson
,
C. D.
(
2008
).
Language and simulation in conceptual processing.
In M. de Vega, A. Glenberg, & A. Graesser (Eds.)
,
Symbols, and embodiment. Debates on meaning and cognition
(pp.
245
284
).
New York
:
Oxford University Press
.
Bedny
,
M.
, &
Caramazza
,
A.
(
2011
).
Perception, action, and word meanings in the human brain: The case from action verbs.
Annals of the New York Academic of Science
,
1224
,
81
95
.
Binkofski
,
F.
, &
Buccino
,
G.
(
2006
).
The role of ventral premotor cortex inaction execution and action understanding.
Journal of Physiology (Paris)
,
99
,
396
405
.
Buccino
,
G.
,
Riggio
,
G.
,
Melli
,
F.
,
Binkofski
,
V.
,
Gallese
,
G.
, &
Rizzolatti
,
G.
(
2005
).
Listening to action sentences modulates the activity of the motor system: A combined TMS and behavioral study.
Cognitive Brain Research
,
24
,
355
363
.
Cacciari
,
C.
,
Bolognini
,
N.
,
Senna
,
I.
,
Pellicciari
,
M. C.
,
Miniussi
,
C.
, &
Papagno
,
C.
(
2011
).
Literal, fictive and metaphorical motion sentences preserve the motion component of the verb: A TMS study.
Brain and Language
,
119
,
149
157
.
Calvo-Merino
,
B.
,
Glaser
,
D. E.
,
Grèzes
,
J.
,
Passingham
,
R. E.
, &
Haggard
,
P.
(
2005
).
Action observation and acquired motor skills: An fMRI study with expert dancers.
Cerebral Cortex
,
15
,
1243
1249
.
Candidi
,
M.
,
Leone-Fernandez
,
B.
,
Barber
,
H.
,
Carreiras
,
M.
, &
Aglioti
,
S. M.
(
2010
).
Hands on the future: Facilitation of cortico-spinal hand-representation when reading the future tense of hand-related action verbs.
European Journal of Neuroscience
,
32
,
677
683
.
Carpenter
,
P. A.
,
Just
,
M. A.
,
Keller
,
T. A.
,
Eddy
,
W. F.
, &
Thulborn
,
K. R.
(
1999
).
Time course of fMRI-activation in language and spatial networks during sentence comprehension.
Neuroimage
,
10
,
216
224
.
Collignon
,
A.
,
Maes
,
F.
,
Delaere
,
D.
,
Vandermeulen
,
D.
,
Suetens
,
P.
, &
Marchal
,
G.
(
1995
).
Automated multi-modality image registration based on information theory.
In Y. Bizais, C. Barillot, & R. Di Paola (Eds.)
,
Proceedings of Information Processing in Medical Imaging Conference
(p.
263
).
Dordrecht, The Netherlands
:
Kluwer Academic Publishers
.
Culham
,
J. C.
, &
Valyear
,
T. K.
(
2006
).
Human parietal cortex in action.
Current Opinion in Neurobiology
,
16
,
205
212
.
de Vega
,
M.
, &
Urrutia
,
M.
(
2011
).
Counterfactual sentences activate embodied meaning: An action-sentence compatibility effect study.
Journal of Cognitive Psychology
,
32
,
962
973
.
de Vega
,
M.
,
Urrutia
,
M.
, &
Riffo
,
B.
(
2007
).
Canceling updating in the comprehension of counterfactuals embedded in narratives.
Memory and Cognition
,
35
,
1410
1431
.
Decety
,
J.
,
Grezes
,
J.
,
Costes
,
N.
,
Perani
,
D.
,
Jeannerod
,
M.
,
Procyk, E., et al. (1997). Brain activity during observation of actions: Influence of action content and subject's strategy. Brain, 120, 1763–1777.
Deen, B., & McCarthy, G. (2010). Reading about the actions of others: Biological motion imagery and action congruency influence brain activity. Neuropsychologia, 48, 1607–1615.
Desai, R. H., Binder, J. R., Conant, L. L., & Seidenberg, M. S. (2010). Activation of sensory-motor areas in sentence comprehension. Cerebral Cortex, 20, 468–478.
Downing, P. E., Peelen, M. V., Wiggett, A. J., & Tew, B. D. (2006). The role of the extrastriate body area in action perception. Social Neuroscience, 1, 52–62.
Ferguson, H., & Sanford, T. (2008). Anomalies in real and counterfactual worlds: An eye-movement investigation. Journal of Memory and Language, 58, 609–626.
Gerloff, C., Corwell, B., Chen, R., Hallett, M., & Cohen, L. G. (1997). Stimulation over the human supplementary motor area interferes with the organization of future elements in complex motor sequences. Brain, 120, 1587–1602.
Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558–565.
Glenberg, A. M., Sato, M., Cattaneo, L., Palumbo, D., & Buccino, G. (2008). Processing abstract language modulates motor system activity. The Quarterly Journal of Experimental Psychology, 61, 905–919.
Gómez-Beldarrain, M., García-Monco, J. C., Astigarraga, E., González, A., & Grafman, J. (2005). Only spontaneous counterfactual thinking is impaired in patients with prefrontal cortex lesions. Cognitive Brain Research, 24, 723–726.
Grezes, J., & Decety, J. (2001). Functional anatomy of execution, mental simulation, observation, and verb generation of actions: A meta-analysis. Human Brain Mapping, 12, 1–19.
Hauk, O., Johnsrude, I., & Pulvermüller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41, 301–307.
Jeannerod, M. (1997). The cognitive neuroscience of action. New York: Blackwell.
Johnson-Frey, S. H., Newman-Norlund, R., & Grafton, S. T. (2005). A distributed left hemisphere network active during planning of everyday tool use skills. Cerebral Cortex, 15, 681–695.
Kable, J. W., Lease-Spellmeyer, J., & Chatterjee, A. (2002). Neural substrates of action event knowledge. Journal of Cognitive Neuroscience, 14, 795–805.
Kanwisher, N., & Wojciulik, E. (2000). Visual attention: Insights from brain imaging. Nature Reviews Neuroscience, 1, 91–100.
Kaup, B., Yaxley, R. H., Madden, C. J., Zwaan, R. A., & Lüdtke, J. (2007). Experiential simulations of negated text information. Quarterly Journal of Experimental Psychology, 60, 976–990.
Kemmerer, D., & Gonzalez-Castillo, J. (2010). The two-level theory of verb meaning: An approach to integrating the semantics of action with the mirror neuron system. Brain and Language, 112, 54–76.
Keysers, C., & Gazzola, V. (2010). Social neuroscience: Mirror neurons recorded in humans. Current Biology, 20, 27.
Lee, K.-M., Chang, K.-H., & Roh, J.-K. (1999). Subregions within the supplementary motor area activated at different stages of movement preparation and execution. Neuroimage, 9, 117–123.
Lieberman, M. D., & Cunningham, W. A. (2009). Type I and Type II error concerns in fMRI research: Re-balancing the scale. Social Cognitive and Affective Neuroscience, 4, 423–428.
Liuzza, M. T., Candidi, M., & Aglioti, S. M. (2011). Do not resonate with actions: Sentence polarity modulates cortico-spinal excitability during action-related sentence reading. PLoS One, 6, e16855.
Mahon, B. Z., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, Paris, 102, 59–70.
McNamara, P., Durso, R., Brown, A., & Lynch, A. (2003). Counterfactual cognitive deficit in persons with Parkinson's disease. Journal of Neurology, Neurosurgery, and Psychiatry, 74, 1065–1070.
Meister, I. G., & Iacoboni, M. (2007). No language-specific activation during linguistic processing of observed actions. PLoS One, 2, e891.
Michels, L., Lappe, M., & Vaina, L. M. (2005). Visual areas involved in the perception of human movement from dynamic form analysis. NeuroReport, 16, 1037–1041.
Moody, C. L., & Gennari, S. P. (2010). Effects of implied physical effort in sensory-motor and pre-frontal cortex during language comprehension. Neuroimage, 49, 782–793.
Mostofsky, S. H., & Simmonds, D. J. (2008). Response inhibition and response selection: Two sides of the same coin. Journal of Cognitive Neuroscience, 20, 751–761.
Mukamel, R., Ekstrom, A. D., Kaplan, J., Iacoboni, M., & Fried, I. (2010). Single-neuron responses in humans during execution and observation of actions. Current Biology, 20, 1–7.
Nachev, P., Wydell, H., O'Neill, K., Husain, M., & Kennard, C. (2007). The role of the pre-supplementary motor area in the control of action. Neuroimage, 36, 155–163.
Noppeney, U., Josephs, O., Kiebel, S., Friston, K. J., & Price, C. J. (2005). Action selectivity in parietal and temporal cortex. Cognitive Brain Research, 25, 641–649.
Postle, N., McMahon, K. L., Ashton, R., Meredith, M., & de Zubicaray, G. I. (2008). Action word meaning representations in cytoarchitectonically defined primary and premotor cortices. Neuroimage, 43, 634–644.
Raposo, R., Moss, H. E., Stamatakis, E. A., & Tyler, L. K. (2009). Modulation of motor and premotor cortices by actions, action words and action sentences. Neuropsychologia, 47, 388–396.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Rueschemeyer, S. A., Pfeiffer, C., & Bekkering, H. (2010). Body schematics: On the role of the body schema in embodied lexical-semantic representations. Neuropsychologia, 48, 774–781.
Rueschemeyer, S. A., van Rooij, D., Lindemann, O., Willems, R. M., & Bekkering, H. (2009). The function of words: Distinct neural correlates for words denoting differently manipulable objects. Journal of Cognitive Neuroscience, 22, 1844–1851.
Rumiati, R. I., Papeo, L., & Corradi-Dell'Acqua, C. (2010). Higher-level motor processes. Annals of the New York Academy of Sciences, 1191, 219–241.
Saygin, A. P., McCullough, S., Alac, M., & Emmorey, K. (2009). Modulation of BOLD response in motion-sensitive lateral temporal cortex by real and fictive motion sentences. Journal of Cognitive Neuroscience, 22, 2480–2490.
Schütz-Bosbach, S., Avenanti, A., Aglioti, S. M., & Haggard, P. (2008). Don't do it! Cortical inhibition and self-attribution during action observation. Journal of Cognitive Neuroscience, 21, 1215–1227.
Shetreet, E., Palti, D., Friedmann, N., & Hadar, U. (2007). Cortical representation of verb processing in sentence comprehension: Number of complements, subcategorization, and thematic frames. Cerebral Cortex, 17, 1958–1969.
Shmuelof, L., & Zohary, E. (2006). A mirror representation of others' actions in the human anterior parietal cortex. The Journal of Neuroscience, 26, 9736–9742.
Stewart, A. J., Haigh, M., & Kidd, E. (2009). An investigation into the online processing of counterfactual and indicative conditionals. Quarterly Journal of Experimental Psychology, 25, 1–13.
Tettamanti, M., Manenti, R., Della Rosa, P., Falini, A., Perani, D., Cappa, S., et al. (2008). Negation in the brain: Modulating action representations. Neuroimage, 43, 358–367.
Thompson, W. L., Slotnick, S. D., Burrage, M. S., & Kosslyn, S. M. (2009). Two forms of spatial imagery: Neuroimaging evidence. Psychological Science, 20, 1246–1253.
Tomasino, B., Fink, G. R., Sparing, R., Dafotakis, M., & Weiss, P. H. (2008). Action verbs and the primary motor cortex: A comparative TMS study of silent reading, frequency judgments, and motor imagery. Neuropsychologia, 46, 1915–1926.
Tomasino, B., Weiss, P. H., & Fink, G. R. (2010). To move or not to move: Imperatives modulate action-related motor verb processing in the motor system. Neuroscience, 169, 246–258.
Tremblay, P., & Small, S. L. (2011). From language comprehension to action understanding and back again. Cerebral Cortex, 21, 1166–1177.
Urgesi, C., Candidi, M., Ionta, S., & Aglioti, S. M. (2007). Representation of body identity and body actions in extrastriate body area and ventral premotor cortex. Nature Neuroscience, 10, 30–31.
Urrutia, M., Gennari, S., & de Vega, M. (2012). Counterfactuals in action: An fMRI study of counterfactual sentences describing physical effort. Neuropsychologia, 50, 3663–3672.
Wallentin, M., Lund, T. E., Ostergaard, S., Ostergaard, L., & Roepstorff, A. (2005). Motion verb sentences activate left posterior middle temporal cortex despite static context. NeuroReport, 16, 649–652.
Wallentin, M., Nielsen, A. H., Vuust, P., Dohn, A., Roepstorff, A., & Lund, T. E. (2011). BOLD response to motion verbs in left posterior middle temporal gyrus during story comprehension. Brain and Language, 119, 221–225.
Wu, D. H., Waller, S., & Chatterjee, A. (2007). The functional neuroanatomy of thematic role and locative relational knowledge. Journal of Cognitive Neuroscience, 19, 1542–1555.
Zwaan, R. A., & Taylor, L. J. (2006). Seeing, acting, understanding: Motor resonance in language comprehension. Journal of Experimental Psychology: General, 135, 1–11.