Abstract

Human communicative competence is based on the ability to process a specific class of mental states, namely, communicative intention. The present fMRI study aims to analyze whether intention processing in communication is affected by the expressive means through which a communicative intention is conveyed, that is, the linguistic or extralinguistic gestural means. Combined factorial and conjunction analyses were used to test two sets of predictions: first, that a common brain network is recruited for the comprehension of communicative intentions independently of the modality through which they are conveyed; second, that additional brain areas are specifically recruited depending on the communicative modality used, reflecting distinct sensorimotor gateways. Our results clearly showed that a common neural network is engaged in communicative intention processing independently of the modality used. This network includes the precuneus, the left and right posterior STS and TPJ, and the medial pFC. Additional brain areas outside those involved in intention processing are specifically engaged by the particular communicative modality, that is, a peri-sylvian language network for the linguistic modality and a sensorimotor network for the extralinguistic modality. Thus, common representation of communicative intention may be accessed by modality-specific gateways, which are distinct for linguistic versus extralinguistic expressive means. Taken together, our results indicate that the information acquired by different communicative modalities is equivalent from a mental processing standpoint, in particular, at the point at which the actor's communicative intention has to be reconstructed.

INTRODUCTION

Human communication is a cooperative activity among social agents who together intentionally construct the meaning of their interaction. People use communication as a form of social action to affect and modify others' mental states, employing both linguistic and gestural modalities to this end. The aim of the present work was to analyze the neural underpinnings of the mental processes on which human communicative competence is based.

According to the cognitive pragmatics approach (Bara, 2010) and the mentalist view of communication (Airenti, 2010; Adenzato & Bucciarelli, 2008; Tirassa, 1999; Grice, 1975), human communicative competence is based on the ability to recognize and process a specific kind of mental state, that is, communicative intention. The philosophy of language and cognitive pragmatics define communicative intention as the intention to communicate a meaning to someone else, plus the intention that the former intention be recognized by the addressee (Bara, 2010; Airenti, Bara, & Colombetti, 1993; Grice, 1975). Take, for instance, an agent pointing to a bottle near a partner, or the agent uttering the sentence “Pass me the bottle.” The partner has to comprehend not only the semantic content of the gesture or of the sentence but also what the agent is trying to achieve by expressing that content, namely, that the agent wants the partner to hand her the bottle (Figure 1A and B).

Figure 1. 

Examples of the four experimental conditions of the story completion task. Two intentional conditions, that is, LCInt (A) and XLCInt (B), and two nonintentional conditions, that is, LPhC (C) and XLPhC (D). Each story consisted of a development phase, followed by a response phase.

Communicative intention represents the primary mental state to be dealt with in explaining other people's communicative actions. In contrast to other mental states (e.g., beliefs or desires) and other kinds of intentions (e.g., individual intentions) that can be instantiated by an actor in isolation, that is, without the need to interact with other agents, a communicative intention meets three main requirements: (a) it always occurs in the context of a social interaction with a partner; (b) it is overt, in the sense that it is intended to be recognized as such by the partner; and (c) its fulfillment consists precisely in its being recognized by the partner.

The process whereby this form of intention is understood draws a connection between human communication and a more general type of social competence, namely, theory of mind (ToM) ability (Baron-Cohen, 1995). ToM is defined as the ability to understand and predict other people's behavior by attributing independent mental states to them. Neuroimaging studies have shown the existence of a distributed neural system underlying ToM that includes, at least, the complex formed by the posterior STS (pSTS) and the adjacent TPJ areas, the precuneus, and the medial pFC (MPFC) (Carrington & Bailey, 2009; Van Overwalle & Baetens, 2009).

As highlighted by Frith and Frith (2006), although in recent years the neural correlates of ToM for individual mental states such as beliefs and desires have become an active area of research (Amodio & Frith, 2006; Becchio, Adenzato, & Bara, 2006; Brune & Brune-Cohrs, 2006; Saxe, Carey, & Kanwisher, 2004), relatively few studies have specifically investigated intention processing in communication. A study by Kampe, Frith, and Frith (2003) focused on the early stages of communicative interactions and examined the effects of the comprehension of two signals that convey an intention to communicate, namely, looking directly at someone and calling someone's name. An analysis restricted to ToM ROIs showed activations of the MPFC and of the left temporal pole for both types of communicative signals, whereas only hearing one's own name also activated the right temporal pole. It must be noted that the intention to communicate investigated by this study differs from communicative intention as defined earlier, in that here a communicative interaction between agents is only put forward by the actor but has not actually taken place. A further study by Wang, Lee, Sigman, and Dapretto (2006) used a high-level pragmatic aspect of communication, such as irony comprehension, to examine the neural circuitry involved in inferring the intention beyond the literal meaning of an ironic remark. The adults and the children participating in the study viewed cartoon drawings while listening to a short story ending with a potentially ironic remark. The authors found that in comprehending ironies, adults and children engaged similar overall brain networks, including the frontal, temporal, and occipital cortices bilaterally. Furthermore, children recruited left inferior frontal regions more strongly than adults and showed reliable activity in the MPFC, whereas adults did not. In contrast, adults activated posterior occipito-temporal regions more strongly than children.

In previous studies (Walter et al., 2009; Ciaramidaro et al., 2007; Walter et al., 2004), we proposed a model consisting of a dynamic intention processing network (IPN), in which the right and left pSTS, the right and left TPJ, the precuneus, and the MPFC are progressively recruited depending on the nature of the intention processed, from individual intention to communicative intention. In two separate fMRI experiments, using a story completion task presented in comic strip form, we demonstrated that activation of the whole IPN only emerged during communicative intention processing, when two characters were depicted in a communicative interaction mediated by extralinguistic gestures (e.g., showing a map to ask for directions). In contrast, the activation of the network was limited to the right pSTS/TPJ and precuneus when a character was shown acting with an individual intention (e.g., changing a broken light bulb to read a book) or when two characters were shown acting each with their own individual intention (e.g., one character putting some bait on a fishing hook and the other character lying on a beach chair to sunbathe).

An important limitation of our previous IPN model is that it was based on the extralinguistic gestural modality alone. Starting from our previous data, a central issue remains open, that is, whether the IPN is specifically recruited by gestural communication or is independent of the communicative modality used.

A Priori Experimental Hypotheses

The present study aims to analyze whether intention processing is affected by the modality, linguistic or gestural, through which a communicative intention is conveyed. Following a cognitive pragmatics approach (Bara, 2010), we assumed that a common communicative competence, independent of the linguistic or extralinguistic gestural means, is instantiated at the level at which a communicative intention is inferred and comprehended within a specific social context, that is, at the pragmatic level. Consequently, we hypothesized that there is no difference in brain activity between the recognition of a communicative intention conveyed by observed linguistic behavior and the recognition of one conveyed by observed gestural behavior.

The present study tested two sets of predictions by using combined factorial and conjunction analyses. First, that at the pragmatic level, a common brain network, that is, the IPN, is recruited for the comprehension of communicative intentions independently of the modality through which they are conveyed, that is, by language or gesture. Second, that additional brain areas other than those involved in the IPN at the pragmatic level are specifically recruited depending on the modality used. In particular, we expect the recruitment of a peri-sylvian language network for the linguistic modality and of a sensorimotor network for the extralinguistic gestural modality. These two predictions were addressed by means of a 2 × 2 factorial design, with factors Intention (intentional vs. nonintentional) and Modality (linguistic vs. extralinguistic), where the most crucial question of whether the communicative intent is processed differently in the linguistic and gestural modalities is reflected in the Intention × Modality interaction.

METHODS

Participants

Twenty-four healthy Italian native speakers (13 women; age range = 19–37 years, mean = 24.45 years, SD = 5.71 years) with no history of neurological or psychiatric diseases participated in the imaging study. The Ethics Committee of the San Raffaele Scientific Institute approved the study. All participants gave their written informed consent before scanning. All the participants were right-handed. Handedness (cutoff ≥ 20/24, score range = 20–24, mean = 23.08, SD = 1.06) was assessed with the Edinburgh Handedness Inventory (Oldfield, 1971).

Stimuli

The experiment consisted of an event-related fMRI study with a 2 × 2 factorial design. The factors manipulated were Intention (two levels: intentional vs. nonintentional) and Modality (two levels: linguistic vs. extralinguistic). This factorial design resulted in the following four experimental conditions:

  (1) Linguistic communicative intention (LCInt), conveyed by linguistic means (i.e., sentences presented in written form).

  (2) Extralinguistic communicative intention (XLCInt), conveyed by gestural means (i.e., communicative gestures presented in pictorial form).

  (3) Linguistic physical causality (LPhC), established among objects, where the causal link is described by a sentence (presented in written form).

  (4) Extralinguistic physical causality (XLPhC), established among objects, where the causal link is depicted in the scene.

Experimental Conditions 1 and 2 (LCInt and XLCInt) represented the intentional levels, whereas Conditions 3 and 4 (LPhC and XLPhC) represented the nonintentional levels of the Intention factor.

We used a story completion task presented in comic-strip form. During scanning, participants were asked to demonstrate their comprehension of the stories by choosing the most appropriate story endings. The experimental protocol comprised 22 comic-strip stories for each of the four experimental conditions (88 stories in total), which depicted the behaviors of the characters and the movements of the objects (Figure 1). The stories were administered in randomized order. Four training stories, one per condition, were also administered. Further examples of comic strips for each condition are available at http://www.psych.unito.it/csc/pers/enrici/pdf/com_int_protocol.pdf.

Each story consisted of three consecutive pictures (development phase), followed by a choice between two concluding pictures (response phase).

In the development phase, the first and the second pictures established a story setting and introduced the characters or the objects involved, whereas the third picture represented the communicative intention or physical causality events. It is important to note that in all of the stories, the causation (intentional or nonintentional) could not be univocally inferred before the appearance of the third picture. This picture determined the linguistic versus extralinguistic Modality factor. For instance, in the third picture of an intentional condition story, an actor says “Please sit down” (LCInt) or indicates a chair (XLCInt) to a partner located in front of him.

In the response phase, the correct picture represented a probable and congruent effect resulting from the development phase, whereas the incorrect picture represented an improbable or incongruent effect. For instance, in the intentional conditions, the correct picture represented the partner's probable and congruent response to the communicative action proposed by the actor (e.g., the partner sits on the chair) and required the correct attribution of the actor's communicative intention. The incorrect picture represented an incongruent response to the communicative action (e.g., the partner leaves the room).

In all the experimental conditions, each story was repeated twice, once in the linguistic and once in the extralinguistic modality version. Only the third picture of each story (the communicative modality) differed between the two versions, whereas the first and second pictures (the context and, in the intentional conditions, the communicative intention associated with it) remained the same. Each participant underwent four functional scanning sessions, and the second presentation of every story was confined to Sessions 3 and 4. Because repetition effects on brain activations were a potential limitation of our experimental protocol, we arranged the experimental setting with four dimensions in mind: the number of repetitions, the time between repetitions, the number of intervening stimuli, and the kind of repetition (Grill-Spector, Henson, & Martin, 2006). Specifically, there was a single repetition (all stimuli were presented only twice), a mean of approximately 16 min between the first and second presentation of a stimulus, a mean of 44 intervening stimuli of different sorts between the two presentations (plus three pauses of at least 2 min each between scanning sessions), and a nonidentical (partial) repetition (i.e., the third picture of each story differed between the two modality versions). Furthermore, to rule out spurious repetition effects that could lead to a misinterpretation of our activation patterns (such as a verbal bias in processing a story in the gestural modality when the same story had already been seen in the linguistic modality), we first analyzed the neuroimaging data of the first two sessions only (44 stories), which included no repetitions of the stimuli. No important differences in brain activation were found between this analysis and the analysis reported here, which includes all four sessions.

It is important to underline that, in contrast to previous related work, the development and response phases together formed a communicative exchange in its entirety, with a communicative action performed by the actor and the partner's ensuing reaction to that action. Furthermore, the depicted behaviors were never unusual or pretended, and we did not explicitly instruct our participants to pay attention to the actors' intentions.

Regarding the linguistic modality, we used simple and direct linguistic communication acts (Bara, 2010; Austin, 1962). The sentences used in the stimuli were controlled for number of words and for the occurrence of content words, that is, nouns and verbs, in contemporary written Italian (CoLFIS database; Laudanna, Thornton, Brown, Burani, & Marconi, 1995). Mean numbers of words and standard deviations for the linguistic experimental conditions were as follows: LCInt, mean = 4.22, SD = 0.85; LPhC, mean = 3.96, SD = 0.71 (t = 1.100, p = .283). To depict communicative intentions in the extralinguistic modality, we used conventional ideational gestures, in particular emblems, that is, gestures that carry a meaning in the absence of speech (Kendon, 2004; McNeill, 1992).

The experimental protocol was initially administered without fMRI scanning in a pilot study (n = 26) to ensure an equal level of complexity (in terms of accuracy and RTs) across the four experimental conditions. Response accuracy (number of correct answers, maximum score = 22) and RTs in milliseconds (for correct answers only), with standard deviations, were as follows for the respective conditions: LCInt, 21.12 (SD = 0.86) and 1394 msec (SD = 248 msec); XLCInt, 21 (SD = 1.17) and 1421 msec (SD = 301 msec); LPhC, 21.31 (SD = 1.12) and 1449 msec (SD = 265 msec); XLPhC, 21.31 (SD = 1.16) and 1459 msec (SD = 277 msec). RTs more than 2.5 standard deviations from each participant's condition mean were excluded from the statistical analysis. There was no significant effect of condition on response accuracy, F(3, 75) = 1.27, p = .292, or on RTs, F(3, 75) = 2.53, p = .064 (repeated measures ANOVAs).
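The 2.5 SD exclusion rule applied to the RTs can be sketched as follows. This is a minimal illustration with hypothetical data; the paper does not specify whether the mean and SD were computed before or after removing the candidate outlier, so the simplest (include-all) variant is assumed here.

```python
import numpy as np

def exclude_rt_outliers(rts, cutoff_sd=2.5):
    """Drop RTs more than `cutoff_sd` standard deviations from the
    mean of one participant's condition cell (correct answers only)."""
    rts = np.asarray(rts, dtype=float)
    mean, sd = rts.mean(), rts.std(ddof=1)
    keep = np.abs(rts - mean) <= cutoff_sd * sd
    return rts[keep]

# Hypothetical RTs (msec) for one participant in one condition:
# 19 typical responses plus one extreme value
rts = [1400] * 19 + [3000]
clean = exclude_rt_outliers(rts)
```

Note that with very small cells a 2.5 SD criterion can never be exceeded (a single point's deviation is bounded by the sample size), which is why the illustration uses a cell of 20 trials.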

Stimuli in the fMRI setting were presented in randomized order by means of Presentation 11.0 (Neurobehavioral Systems, Albany, CA) and viewed via a back-projection screen located in front of the scanner and a mirror placed on the head coil. Presentation was also used to record behavioral responses, which were collected via a fiber-optic response box.

MRI Data Acquisition

MRI scans were acquired on a 3-T Intera Philips body scanner (Philips Medical Systems, Best, NL) using an eight-channel SENSE head coil (SENSE reduction factor = 2). Whole-brain functional images were obtained with a T2*-weighted gradient-echo, echo-planar sequence, using BOLD contrast. Each functional image comprised 30 contiguous axial slices (4 mm thick), acquired in interleaved mode with a repetition time of 2000 msec (echo time = 30 msec, field of view = 240 × 240 mm, matrix size = 128 × 128). Each participant underwent four functional scanning sessions. Each session comprised 155 scans, preceded by five dummy scans that were discarded before data analysis. A field map to be used for the unwarping of echo-planar image spatial distortions was acquired for each subject before functional scanning.
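As a quick consistency check, the session timing follows directly from these acquisition parameters (simple arithmetic; the five dummy scans, which were discarded, are not counted here):

```python
TR = 2.0                 # repetition time in seconds
scans_per_session = 155  # functional volumes per session
sessions = 4

session_s = TR * scans_per_session       # 310 s, i.e., about 5.2 min per session
total_min = sessions * session_s / 60.0  # about 20.7 min of functional scanning
```

These figures are consistent with the approximately 16 min mean interval between the first and second presentation of a story reported in the Stimuli section, given that all repetitions fell in Sessions 3 and 4.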

We also acquired a high-resolution whole-brain structural T1-weighted scan (resolution = 1 × 1 × 1 mm) of each participant. The normalized structural images of all participants were then averaged in one single image for anatomical localization and visualization of brain activations.

Data Analysis

Statistical parametric mapping (SPM5; Wellcome Department of Imaging Neuroscience, London, UK) was used for image realignment and unwarping, unified segmentation and normalization to the Montreal Neurological Institute standard space, smoothing by an 8-mm FWHM Gaussian kernel, and general linear model statistical analysis. We adopted a two-stage random-effects approach to ensure generalizability of the results at the population level (Penny & Holmes, 2003).

First-level General Linear Models

At the first stage, the time series of each participant was high-pass filtered at 128 sec and prewhitened by means of an autoregressive AR(1) model. Evoked responses for all experimental conditions were modeled as a set of hemodynamic response functions, including the canonical response function and its first and second derivative functions. The first-level individual design matrices included the data of all four scanning sessions. For each session, we modeled the four experimental conditions (LCInt, XLCInt, LPhC, and XLPhC) with the onset of the hemodynamic response functions time locked to the presentation of the first picture of each story and an epoch duration covering the entire development phase and the response phase until button press. A set of Student's t test contrasts was defined for use at the second statistical level: one contrast per experimental condition per hemodynamic response function (canonical, first derivative, and second derivative) for each participant.
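The logic of this basis set can be illustrated with a minimal numpy sketch that builds one condition's regressors (canonical HRF plus its temporal derivative). The onsets, epoch length, and double-gamma parameterization below are simplified assumptions for illustration, not SPM5's exact implementation, which also includes a dispersion derivative.

```python
import math
import numpy as np

TR, n_scans = 2.0, 155  # from the acquisition parameters

def gamma_pdf(t, shape):
    """Gamma density with unit scale, used to build a double-gamma HRF."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = t[pos] ** (shape - 1) * np.exp(-t[pos]) / math.gamma(shape)
    return out

def canonical_hrf(tr, duration=32.0):
    """SPM-like canonical HRF: positive gamma minus a smaller undershoot."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0
    return hrf / np.abs(hrf).max()

hrf = canonical_hrf(TR)
d_hrf = np.gradient(hrf)  # temporal derivative regressor (finite difference)

# Boxcar for one condition; onsets and epoch length (in scans) are hypothetical
box = np.zeros(n_scans)
for onset in (10, 60, 110):
    box[onset:onset + 5] = 1.0

X = np.column_stack([
    np.convolve(box, hrf)[:n_scans],    # canonical regressor
    np.convolve(box, d_hrf)[:n_scans],  # derivative regressor
    np.ones(n_scans),                   # session constant
])
```

Including the derivative regressors lets the model absorb small shifts in response latency and width, which is why the second-level analysis must then combine inferences over all basis functions.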

Second-level General Linear Models

At the second stage of analysis, the contrast images obtained at the single-subject level were used to compute a full-factorial ANOVA assessing their significance at the group level (n = 24 participants). The two within-subject factors Intention and Modality (equal variance, levels not independent) were entered in this ANOVA, reflecting the experiment's factorial design, plus an additional within-subject “Basis set” factor (unequal variance, levels not independent) with three levels (canonical hemodynamic response function, first derivative, and second derivative). The contrasts assessed at the second level included the following: (a) main effect of Intention: [(LCInt + XLCInt) − (LPhC + XLPhC)]; (b) main effect of Modality: [(LCInt + LPhC) − (XLCInt + XLPhC)]; and (c) Intention × Modality interaction: [(LCInt − XLCInt) − (LPhC − XLPhC)]. We also computed the four simple main effects: [LCInt − LPhC]; [XLCInt − XLPhC]; [LCInt − XLCInt]; and [LPhC − XLPhC]. In addition, we also computed null hypothesis conjunction effects (Nichols, Brett, Andersson, Wager, & Poline, 2005) between (a) the two simple main effects of Intention: [(LCInt − LPhC) conj. (XLCInt − XLPhC)]; and (b) the two simple main effects of Modality: [(LCInt − XLCInt) conj. (LPhC − XLPhC)].

We assessed all these group-level effects with F contrasts, allowing combined inferences over the canonical hemodynamic response function and its first and second derivatives. All reported effects relate to voxel-level statistics (p < .05, family-wise error [FWE] correction) based on Gaussian random field theory (Worsley et al., 1996). For all the contrasts listed earlier, we inspected the corresponding beta estimates to investigate the directionality of the observed differences in activation and to exclude spurious effects driven by a difference in the hemodynamic response derivative functions.
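Expressed as weight vectors over the four condition estimates, the contrasts listed above reduce to the following scheme (a schematic in numpy; the actual F contrasts also span the basis-set factor, and the beta values here are hypothetical):

```python
import numpy as np

# Condition order: LCInt, XLCInt, LPhC, XLPhC
betas = np.array([2.0, 1.8, 0.5, 0.6])  # hypothetical condition effects

intention   = np.array([1,  1, -1, -1])  # (LCInt + XLCInt) - (LPhC + XLPhC)
modality    = np.array([1, -1,  1, -1])  # (LCInt + LPhC) - (XLCInt + XLPhC)
interaction = np.array([1, -1, -1,  1])  # (LCInt - XLCInt) - (LPhC - XLPhC)

effects = {name: float(w @ betas) for name, w in
           [("intention", intention), ("modality", modality),
            ("interaction", interaction)]}
```

The interaction weights make explicit why a null interaction, together with the conjunction of the two simple main effects of Intention, supports modality-independent intention processing: the Intention effect is large while the modality difference between intentional conditions cancels against that between nonintentional conditions.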

Commonalities between the Present Study and the Study of Walter et al. (2004) (Small Volume Correction Procedure)

To directly compare the results of the present study with those of our previous study (Walter et al., 2004), we used a small volume correction procedure (p < .05, FWE corrected) that restricted the voxel-wise analysis to a functional mask comprising the voxels showing significant activation for communicative intention in the corresponding contrast of that study (XLCInt > XLPhC, thresholded at p < .05, FWE corrected).
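In essence, a small volume correction restricts inference to a binary mask of previously significant voxels, so the multiple-comparison burden shrinks to the mask volume. Schematically (arrays, grid size, and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
f_map = rng.random((4, 4, 4))          # hypothetical F statistics, current study
prev_mask = np.zeros((4, 4, 4), bool)  # voxels significant in the prior contrast
prev_mask[1:3, 1:3, 1:3] = True        # (XLCInt > XLPhC in Walter et al., 2004)

# Only in-mask voxels enter the FWE-corrected test, so the search
# volume (and hence the correction) shrinks to prev_mask.sum() voxels
candidates = f_map[prev_mask]
n_tests = int(prev_mask.sum())
```

Because the correction is computed over the reduced search volume, effects that would not survive whole-brain FWE correction can still be assessed with full sensitivity inside the a priori mask.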

RESULTS

Behavioral Results

Response accuracy (number of correct answers, maximum score = 22) and RTs in milliseconds (for correct answers only), with standard deviations, were as follows for the respective conditions: LCInt, 21.50 (SD = 0.83) and 1458 msec (SD = 248 msec); XLCInt, 21.20 (SD = 0.77) and 1521 msec (SD = 301 msec); LPhC, 21.54 (SD = 0.65) and 1460 msec (SD = 265 msec); and XLPhC, 21.62 (SD = 0.57) and 1461 msec (SD = 277 msec). RTs more than 2.5 standard deviations from each participant's condition mean were excluded from the statistical analysis (Ratcliff, 1993). There was no significant effect of condition on response accuracy, F(3, 69) = 1.99, p = .124, or on RTs, F(3, 69) = 2.72, p = .051 (repeated measures ANOVAs). Therefore, we considered the four tasks to be equally difficult.

Neuroimaging Results

Intention Factor: Intentional versus Nonintentional

The results of the main effect of Intention, that is, the intentional conditions (LCInt + XLCInt) against the nonintentional conditions (LPhC + XLPhC), are shown in Table 1A. Two distinct patterns of activation clearly emerged for communicative intention and for physical causality. The main effect of Intention yielded specific activations for the intentional conditions in the superior medial frontal gyrus (MPFC); in the left posterior middle temporal gyrus, closely corresponding (CC) to the TPJ region; in the left posterior superior temporal gyrus, CC to the pSTS region; in the right posterior superior temporal gyrus, CC to the TPJ; in the right middle temporal gyrus, CC to the pSTS; and in the precuneus. Several other fronto-lateral, temporal, and occipital areas were also activated, including the pars triangularis of the inferior frontal gyrus, bilaterally (Brodmann's area [BA] 45).

Table 1. 

Activations at p < .05, FWE Corrected for Multiple Comparisons

Region | Hem | (LCInt + XLCInt) > (LPhC + XLPhC): k, x, y, z, Z | (LPhC + XLPhC) > (LCInt + XLCInt): k, x, y, z, Z
A. Main Effect of Intention 
Superior medial frontal gyrus/MPFC L/R 816 −6 54 32 7.72      
Middle temporal gyrus/TPJ assigned to IPC (PGp), probability: 40% 12,302 −52 −64 20 >8      
Superior temporal gyrus/TPJ – 56 −46 20 >8      
Superior temporal gyrus/pSTS – −58 −50 16 >8      
Middle temporal gyrus/pSTS – 54 −64 12 >8      
Precuneus L/R 2026 −56 40 >8      
Superior frontal gyrus 77 20 46 44 6.29      
Middle frontal gyrus 367 44 40 12 >8      
Inferior frontal gyrus (pars orbitalis) 184 −44 28 −12 >8      
Inferior frontal gyrus (pars triangularis) assigned to Area 45, probability: 60% – −54 24 6.8      
Inferior frontal gyrus (pars triangularis) assigned to Area 45, probability: 60% 112 56 28 >8      
Supplementary motor area assigned to Area 6, probability: 70% L/R 55 −4 68 6.18      
Middle temporal gyrus 12,302 −58 −16 −20 >8      
– 60 −8 −24 >8      
Inferior temporal gyrus – 50 −2 −44 >8      
– 46 −46 −24 >8      
Medial temporal pole – 52 −32 >8      
Cuneus assigned to Area 18, probability: 50% 100 14 −100 12 >8      
Superior occipital gyrus assigned to Area 17, probability: 80% 71 −8 −102 7.74      
Frontal orbital gyrus      72 −24 32 −20 6.53 
     110 20 34 −20 >8 
Superior frontal gyrus      225 −20 56 >8 
     41 26 48 5.43 
Middle frontal gyrus      387 −42 36 20 >8 
     367 44 40 12 >8 
Inferior frontal gyrus (pars opercularis) assigned to Area 44, probability: 60%      639 −50 24 >8 
Inferior frontal gyrus (pars opercularis) assigned to Area 44, probability: 40%      126 52 24 6.84 
Insula      121 42 6.95 
Supramarginal gyrus assigned to IPC (PFt), probability: 60%      3311 −60 −26 32 >8 
Supramarginal gyrus assigned to IPC (PFt), probability: 30%      12,302 64 −20 36 >8 
Superior parietal lobule assigned to SPL (7A), probability: 50%      3311 −20 −64 52 >8 
Superior parietal lobule assigned to SPL (7A), probability: 90%      12,302 20 −58 64 >8 
Inferior temporal gyrus      – −54 −58 −8 >8 
     – 56 −52 −12 >8 
Fusiform gyrus      – −30 −56 −12 >8 
Fusiform gyrus assigned to area hOC4v (V4), probability: 60%      – −22 −82 −12 >8 
Fusiform gyrus      – 32 −48 −12 >8 
Lingual gyrus assigned to area hOC3v (V3v), probability: 80%      – 22 −78 −8 >8 
Lingual gyrus assigned to Area 17, probability: 100%      – −86 −8 >8 
Superior/middle occipital gyrus      3311 −24 −70 36 >8 
     12,302 26 −68 44 >8 
Middle cingulate cortex assigned to SPL (5Ci), probability: 50%      3311 −14 −28 40 >8 
Middle cingulate cortex assigned to SPL (5Ci), probability: 50%      44 16 −28 40 6.22 
Middle cingulate cortex L/R      158 −2 32 >8 
 
B. Comparison between Main Effect of Intention and Walter et al. (2004) 
Superior medial frontal gyrus/MPFC L/R 57 −6 56 28 6.68      
Middle temporal gyrus/pSTS 127 −54 −50 16 >8      
56 −42 >8 
Superior temporal gyrus/TPJ 127 −58 −46 20 >8      
Middle temporal gyrus/TPJ assigned to IPC (PGa), probability: 60% 12 60 −60 20 >8      
Precuneus R/L 90 −56 40 >8      
Inferior frontal gyrus (pars orbitalis) 13 52 32 −8 5.54      
Middle temporal gyrus 21 −54 −4 −24 >8      
52 −28 >8      
Inferior temporal gyrus 56 −18 −20 >8      
Medial temporal pole −50 −32 7.05      

For each activated brain region, we report whether the left (L) or right (R) hemisphere was activated (Hem), cluster size (k), Montreal Neurological Institute space coordinates in millimeters (x,y,z), and Z score (Z). LCInt = linguistic communicative intention; XLCInt = extralinguistic communicative intention; LPhC = linguistic physical causality; XLPhC = extralinguistic physical causality. Where available, we report the names of the anatomical regions together with their cytoarchitectonic labels and probabilities according to the SPM Anatomy Toolbox (www.fz-juelich.de/ime/spm_anatomy_toolbox).

In turn, activations specific to the nonintentional conditions were found in a widespread network of areas, notably including the supramarginal gyrus, the superior parietal lobule, the fusiform gyrus, the middle cingulate cortex, and the pars opercularis of the inferior frontal gyrus (BA 44), all bilaterally.

To identify the commonalities between the main effect of Intention and the results of our previous study (Walter et al., 2004), we used a small volume correction procedure that restricted the analysis to a mask comprising the voxels significantly activated for communicative intention in Walter et al. (2004). Overlapping activations with our previous data for the intentional versus nonintentional conditions were found in the superior medial frontal gyrus, CC to the MPFC, in the posterior middle temporal gyri, both on the left (CC to the pSTS) and right (CC to the pSTS and TPJ) sides, in the left posterior superior temporal gyrus (CC to the TPJ), and in the precuneus. Additional common effects were found in the right inferior frontal gyrus (pars orbitalis), in the anterior right inferior and bilateral middle temporal gyri, and in the left medial-temporal pole (Table 1B).
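A small volume correction of this kind amounts to restricting voxel-wise inference to a binary mask of the earlier study's significant voxels. The sketch below illustrates only that masking logic; the function name, threshold, and array values are made up for illustration and are not the study's data:

```python
import numpy as np

def small_volume_overlap(stat_map, prior_mask, threshold):
    """Keep suprathreshold voxels only inside the a priori mask,
    i.e., voxels that were also significant in the earlier study."""
    return (stat_map > threshold) & prior_mask

# Illustrative Z values for the main effect of Intention at four voxels
stat_map = np.array([6.7, 8.0, 3.2, 7.1])
# Hypothetical mask of voxels significant in the earlier study
prior_mask = np.array([True, False, True, True])

overlap = small_volume_overlap(stat_map, prior_mask, threshold=5.0)
```

Only the first and last voxels survive here: they exceed the threshold and fall inside the a priori mask.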

To identify the specific activations for the Intention factor (intentional vs. nonintentional), independently of the communicative modality used (linguistic vs. extralinguistic), we also computed a conjunction between the two simple main effects of Intention, one for each Modality level. In contrast to the main effect of Intention, the conjunction analysis should only reveal activations that were elicited by both simple main effects of Intention. The results were entirely overlapping with the main effect of Intention (Figure 2A).
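A null conjunction of this kind requires each voxel to be significant in both simple main effects, which is commonly expressed as a minimum-statistic test. A minimal sketch under that assumption (the arrays and threshold are illustrative, not taken from the study):

```python
import numpy as np

def null_conjunction(z_a, z_b, z_threshold):
    """A voxel survives the null conjunction only if BOTH
    contrasts exceed the threshold (minimum-statistic test)."""
    return np.minimum(z_a, z_b) > z_threshold

# Illustrative Z maps for (LCInt - LPhC) and (XLCInt - XLPhC)
z_linguistic = np.array([5.2, 1.1, 6.0, 4.9])
z_extralinguistic = np.array([4.8, 5.5, 0.7, 5.1])

conj_mask = null_conjunction(z_linguistic, z_extralinguistic, z_threshold=4.5)
```

Voxels significant in only one of the two simple effects (the second and third here) are excluded, which is what distinguishes the conjunction from the main effect.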

Figure 2. 

Activations for Intention, Modality, and Intention × Modality. Significant effects (p < .05, FWE corrected) are displayed on cortical renderings and on sagittal (x coordinate level in millimeters) and axial (z coordinate levels in millimeters) slices of the anatomical image of one of the participants. Color codings for the activations reflect the levels of the 2 × 2 factorial design, as indicated in the rectangle inset (bottom right). (A) Null conjunction of the two simple main effects of Intention: (LCInt − LPhC) conj. (XLCInt − XLPhC): CInt > PhC (red color scale); (LPhC − LCInt) conj. (XLPhC − XLCInt): PhC > CInt (blue color scale). (B) Main effect of Modality: L > XL (green color scale); XL > L (yellow color scale). (C) Interaction Intention × Modality: significant effects in the left superior occipital gyrus and in the right cuneus were specifically elicited in the context of the linguistic modality (as symbolized by the green frame) by the intentional task only (red color scale). The mean percent signal change for the four experimental conditions in the left superior occipital gyrus is shown on the right of the axial slices, with the color of the effect size bars indicating the Intention (CInt vs. PhC) and the color of the 90% confidence intervals indicating the modality (L vs. XL).

Modality Factor: Linguistic versus Extralinguistic

The result of the main effect of Modality, that is, the linguistic conditions (LCInt + LPhC) against the extralinguistic conditions (XLCInt + XLPhC), is shown in Table 2A and in Figure 2B. The main effect of Modality was associated with widespread activations specific to the linguistic conditions, notably in the left inferior frontal gyrus (covering BA 44, see Figure 3, BA 45, and BA 47), in the left precentral gyrus and in the SMA, in the anterior left inferior and bilateral middle temporal gyri, in the left temporal pole and hippocampus, in the bilateral occipital cortex, and in the left cerebellum. Specific effects for the extralinguistic conditions were in turn found in the pars opercularis of the right inferior frontal gyrus (BA 44, see Figure 3), the left and right precentral, postcentral, and supramarginal gyri, in the middle cingulate cortex, and in the left and right caudate nuclei.
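In a 2 × 2 factorial design, the main effect of Modality corresponds to a contrast that weights the two linguistic conditions against the two extralinguistic ones. A minimal sketch of how such a contrast combines per-condition estimates at a single voxel; the condition ordering and beta values below are assumptions for illustration only:

```python
import numpy as np

# Assumed condition order: [LCInt, LPhC, XLCInt, XLPhC]
contrast_L_gt_XL = np.array([1, 1, -1, -1])  # (LCInt + LPhC) - (XLCInt + XLPhC)
contrast_XL_gt_L = -contrast_L_gt_XL         # the opposite direction

# Illustrative beta estimates at one voxel
betas = np.array([2.0, 1.8, 0.5, 0.4])

effect_L = contrast_L_gt_XL @ betas  # positive -> stronger for linguistic
```

Note that the intentional and nonintentional conditions receive equal weights within each modality, so the contrast is orthogonal to the Intention factor.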

Table 2. 

Activations at p < .05, FWE Corrected for Multiple Comparisons

A. Main Effect of Modality
Region
Hem
(LCInt + LPhC) > (XLCInt + XLPhC)
(XLCInt + XLPhC) > (LCInt + LPhC)
k
x
y
z
Z
k
x
y
z
Z
Inferior frontal gyrus (pars triangularis) assigned to Area 45, probability: 60% 348 −50 26 12 5.66      
Inferior frontal gyrus (pars triangularis) assigned to Area 44, probability: 40% – −42 12 24 >8      
Inferior frontal gyrus (pars orbitalis) – −46 30 −12 6.69      
Precentral gyrus assigned to Area 6, probability: 40% 138 −50 −8 44 >8      
Supplementary motor area assigned to Area 6, probability: 70% L/R 11 −4 68 5.41      
Middle temporal gyrus 1672 −60 −4 −12 >8      
456 62 −4 −16 >8      
Inferior temporal gyrus −50 −54 −16 5.1      
Temporal pole 1672 −50 16 −24 >8      
Hippocampus assigned to hippocampus (CA), probability: 80% 12 −34 −18 −16 5.47      
Superior occipital gyrus assigned to Area 17, probability: 70% 1338 −6 −100 >8      
Lingual gyrus assigned to Area hOC3v (V3v), probability: 50% – −14 −90 −12 >8      
Lingual gyrus assigned to Area hOC3v (V3v), probability: 60% – 14 −76 −8 6.13      
Calcarine gyrus assigned to Area 18, probability: 50% – 10 −96 12 >8      
Cerebellum 125 −36 −40 −28 >8      
Inferior frontal gyrus (pars opercularis) assigned to Area 44, probability: 60%      31 50 20 5.22 
Precentral gyrus assigned to Area 6, probability: 50%      38 −28 −14 60 5.96 
Precentral gyrus assigned to Area 6, probability: 70%      24 −12 64 4.95 
Postcentral gyrus assigned to Area 2, probability: 50%      900 −36 −40 56 >8 
Postcentral gyrus assigned to Area 2, probability: 50%      1595 58 −22 44 >8 
Supramarginal gyrus assigned to IPC (PFt), probability: 60%      900 −54 −26 36 >8 
Supramarginal gyrus assigned to IPC (PFcm), probability: 50%      1595 54 −30 28 >8 
Superior temporal gyrus      77 −52 −14 5.96 
Middle temporal gyrus assigned to hOC5 (V5), probability: 50%      541 48 −66 >8 
Inferior occipital gyrus assigned to hOC5 (V5), probability: 10%      252 −48 −76 −4 >8 
Middle cingulate cortex assigned to SPL (5Ci), probability: 30%      24 −30 40 5.28 
Caudate nucleus      28 −12 18 −4 5.65 
     55 16 22 −4 6.03 
 
B. Conjunction of the Two Simple Main Effects of Modality 
Region Hem (LCInt − XLCInt) conj. (LPhC − XLPhC): L > XL (XLCInt-LCInt) conj. (XLPhC − LPhC): XL > L 
k x y z Z k x y z Z 
Inferior frontal gyrus (pars opercularis) assigned to Area 44, probability: 30% 18 −44 12 24 5.57      
Precentral gyrus assigned to Area 6, probability: 40% 11 −50 −8 44 5.34      
Middle temporal gyrus 500 −60 −4 −12 >8      
36 66 −8 −16 6.11      
Temporal pole 500 −46 18 −28 5.97      
Superior occipital gyrus assigned to Area 17, probability: 20% −16 −94 12 4.81      
Lingual gyrus assigned to Area hOC3v (V3v), probability: 50% 21 −14 −90 −12 6.35      
Cerebellum 26 −36 −40 −28 6.02      
Postcentral gyrus assigned to Area 2, probability: 50%      66 −28 −44 56 5.7 
Postcentral gyrus assigned to Area 2, Probability: 90%      123 36 −40 52 6.72 
Supramarginal gyrus assigned to IPC (PF), probability: 60%      58 −62 −28 32 5.78 
Supramarginal gyrus assigned to IPC (PFcm), probability: 50%      179 54 −30 28 6.68 
Superior parietal lobule assigned to SPL (7PC), probability: 50%      28 −50 60 4.93 
Inferior temporal gyrus assigned to hOC5 (V5), probability: 20%      137 50 −68 −4 6.69 
Inferior occipital gyrus assigned to hOC5 (V5), probability: 20%      83 −48 −74 −4 7.04 
Caudate nucleus      18 20 −4 5.05 

For each activated brain region, we report whether the left (L) or right (R) hemisphere was activated (Hem), cluster size (k), Montreal Neurological Institute space coordinates in millimeters (x,y,z), and Z score (Z). LCInt = linguistic communicative intention; XLCInt = extralinguistic communicative intention; LPhC = linguistic physical causality; XLPhC = extralinguistic physical causality. Where available, we report the names of the anatomical regions together with their cytoarchitectonic labels and probabilities according to the SPM Anatomy Toolbox (www.fz-juelich.de/ime/spm_anatomy_toolbox).

Figure 3. 

Cytoarchitectonic probability of activations in BA 44. Areas of activation (p < .05, FWE corrected) in the pars opercularis of the inferior frontal gyri specific for the linguistic modality (green color scale) and for the extralinguistic modality (yellow color scale) are displayed on axial slices of the anatomical image of one of the participants (z coordinate levels in mm). The areas of activation are superimposed on cytoarchitectonic probability maps (dark blue–light cyan color, according to % probability) for BA 44 (Amunts et al., 1999).

Following the same line of reasoning as for the main effect of Intention, we also computed a conjunction between the two simple main effects of Modality. Although spatially more restricted, the activation patterns for the two levels of the Modality factor (linguistic vs. extralinguistic) independently of the type of task (intentional vs. nonintentional) were comparable with those of the main effect of Modality (Table 2B).

Interaction Effect of Factors: Intention × Modality

The Intention × Modality interaction effect, that is, (LCInt − LPhC) − (XLCInt − XLPhC), yielded significant activations in the left superior occipital gyrus (BA 17) and in the right cuneus (BA 18). By inspecting the directionality of the estimates for each of the four experimental conditions, we could conclude that the interaction effects in both brain regions were driven by a selective signal increase for the LCInt condition (Table 3 and Figure 2C). In other words, these brain regions were selectively modulated by the processing of the intentional task, but only within a linguistic and not within an extralinguistic context. No other interaction effects were found. Thus, it can be concluded that there were no specific effects for the XLCInt, LPhC, and XLPhC conditions over and above those shared with the other experimental conditions, that is, when these were grouped either by intentionality (main effect of Intention) or by modality (main effect of Modality).
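The interaction contrast is the difference of the two simple Intention effects, and its direction can be read off the per-condition estimates. A sketch with illustrative betas mimicking a selective LCInt increase; the condition order and values are assumptions, not the study's data:

```python
import numpy as np

# Assumed condition order: [LCInt, LPhC, XLCInt, XLPhC]
interaction = np.array([1, -1, -1, 1])  # (LCInt - LPhC) - (XLCInt - XLPhC)

# Illustrative betas: selective signal increase for LCInt only
betas = np.array([3.0, 1.0, 1.1, 1.0])

effect = interaction @ betas        # positive when LCInt drives the interaction
driver = int(np.argmax(betas))      # index 0 -> LCInt has the largest estimate
```

A positive interaction value by itself does not say which condition drives it; that is why the directionality of the four condition estimates has to be inspected, as done above for the occipital clusters.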

Table 3. 

Activations at p < .05, FWE Corrected for Multiple Comparisons

Interaction Intention × Modality
Hem
(LCInt − XLCInt) > (LPhC − XLPhC)
Direction of Effect
Region
k
x
y
z
Z
Superior occipital gyrus assigned to Area 17, probability: 60% 108 −8 −100 12 7.51 LCInt > (XLCInt, LPhC, XLPhC) 
Cuneus assigned to Area 18, probability: 60% 62 10 −96 16 7.63 LCInt > (XLCInt, LPhC, XLPhC) 

For each activated brain region, we report whether the left (L) or the right (R) hemisphere was activated (Hem), cluster size (k), Montreal Neurological Institute space coordinates in millimeters (x,y,z), Z score (Z), and the direction of the effects for the different interaction terms. LCInt = linguistic communicative intention; XLCInt = extralinguistic communicative intention; LPhC = linguistic physical causality; XLPhC = extralinguistic physical causality. Where available, we report the names of the anatomical regions together with their cytoarchitectonic labels and probabilities according to the SPM Anatomy Toolbox (www.fz-juelich.de/ime/spm_anatomy_toolbox).

DISCUSSION

The present study aimed to analyze the brain responses involved in communicative intention processing. Although the neural bases of human communication have been widely investigated with respect to different aspects of the linguistic encoding process, including the syntactic and semantic levels (Bookheimer, 2002; Kaan & Swaab, 2002), few studies have focused primarily on the pragmatic level, that is, the level at which a communicative intention is inferred and comprehended within a specific social context. Furthermore, no studies have until now directly compared the comprehension of the same communicative intention generated, in the same context, through linguistic versus extralinguistic gestural means (for a different and interesting approach to the relationship between language and communication, see Willems et al., 2010).

Our results clearly showed that a common neural network is engaged in communicative intention processing independently of the modality used. Moreover, we found two additional brain networks that were specifically recruited depending on the modality used, that is, a peri-sylvian language network for the linguistic modality and a sensorimotor network for the extralinguistic modality.

Communicative Intention Processing

The first issue we wished to address was whether the brain areas involved in the IPN, that is, the right and left pSTS, the right and left TPJ, the precuneus, and the MPFC, are commonly recruited regardless of the modality used to convey a communicative intention.

The main effect of Intention in the present study (intentional vs. nonintentional conditions) showed that communicative intention stimuli elicited higher activation in the brain areas belonging to the IPN compared with the nonintentional stimuli. A conjunction between the two simple main effects of Intention showed that these areas are recruited by both the LCInt and the XLCInt conditions. Thus, IPN activation may be considered modality independent. Furthermore, we also demonstrated, on the basis of a specific a priori experimental hypothesis, that the anatomical locations of the right and left pSTS, right and left TPJ, precuneus, and MPFC activations found in the present study closely overlap with the locations of the IPN activations found in Walter et al. (2004). These results clearly demonstrate that intention processing within a communicative context is independent of the means through which the communicative intention is conveyed and shared.

In contrast to other kinds of intentions, that is, individual intentions, communicative intention possesses an intrinsically recursive nature: Sharing a communicative intention with a partner means sharing the intention to communicate a meaning to someone else plus intending that the former intention be recognized by the addressee. In our previous studies (Walter et al., 2009; Ciaramidaro et al., 2007; Walter et al., 2004), we found that whereas the processing of individual intentions recruits only the precuneus and the right TPJ, the processing of communicative intentions also recruits the left TPJ and the MPFC. Recent meta-analyses have provided convergent evidence for the role of these brain areas in intention processing (Van Overwalle, 2009; Van Overwalle & Baetens, 2009). These works described the precuneus as crucial for the elaboration of contextual information and for the identification of the situational structure, and the TPJ as involved in the identification of the end state of behaviors. In particular, the TPJ, along with the precuneus and the MPFC, takes part in a larger process of goal identification in a social context. As regards the MPFC, Van Overwalle (2009) emphasized the impressive amount and consistency of the empirical evidence concerning the engagement of this area in social inferences. Interestingly, the MPFC seems to play a crucial role in understanding social scripts that do not concern a single actor but that describe adequate social actions for all the actors involved in a particular social context.

The kind of communicative intention processing herein explored falls within the general domain of intention processing in social cognition, that is, what Frith and Frith (2006) called the “what happened next” in a social context. Van Overwalle and Baetens (2009) discussed which brain areas are responsible for understanding the actions of others and their underlying goals. The authors proposed a model that ordered actions and goals hierarchically according to their level of abstractness, discriminating between immediate goals that reflect the understanding of basic actions and long-term intentions that reflect the “why” of an action in a social context. The results of their meta-analysis provided additional evidence consistent with the role of the IPN investigated here: Although the understanding of basic actions requires the mirror neuron system, the understanding of social actions requires the concurrent activation of the pSTS and TPJ areas as well as the precuneus and the MPFC.

Several studies have investigated the reciprocal influences and the cross-modal interactions between speech and emblem gestures (Barbieri, Buonocore, Volta, & Gentilucci, 2009; Kircher et al., 2009; Xu, Gannon, Emmorey, Smith, & Braun, 2009; Bernardis & Gentilucci, 2006; Gunter & Bach, 2004; Nakamura et al., 2004). These studies have shown that the integration between meaningful gestures and speech occurs at a high semantic level and that the combination of both expressive means leads to effects that are not simply a linear combination of perceiving either emblems or speech alone (Barbieri et al., 2009; Bernardis & Gentilucci, 2006; Gentilucci, Bernardis, Crisi, & Dalla Volta, 2006). On the whole, the prevailing view in the literature is that speech and gesture share a unified decoding process at the semantic level. To the best of our knowledge, no previous studies have addressed the question of common processes for speech and gestures at the pragmatic level, such as in communicative intention processing. In this respect, we propose that linguistic and gestural modalities could share a common communicative competence also at the level at which an actor's communicative intention has to be reconstructed by a partner.

Communicative Modalities

The second issue we wished to address was whether any additional brain areas other than those involved at the pragmatic level, that is, the IPN areas, are specifically recruited depending on the modality used.

An open question in the literature concerns the role of expressive modalities in communication (e.g., Willems & Hagoort, 2007; Bates & Dick, 2002; Goldin-Meadow, 1999). As highlighted by recent works that analyzed the interaction of linguistic and gestural communicative modalities (see previous section), two competing views are debated regarding the relationship between gesture and speech. One view postulates that gesture and speech are controlled by two different communication systems (Krauss & Hadar, 1999; Hadar, Wenkert-Olenik, & Soroker, 1998; Petitto, 1987; Levelt, Richardson, & La Heij, 1985), whereas the other view (Bara, 2010; Tomasello, 2008; Fadiga & Craighero, 2006; Kendon, 2004; McNeill, 1992) postulates that gesture and speech form a single system of communication because they are linked to the same mental processes despite differing in expression modality.

These competing views seem to be at least partially reconciled by recent neuroimaging evidence. In an fMRI study, Xu et al. (2009) showed their participants video clips featuring a single person performing symbolic gestures, such as emblems, as well as the corresponding spoken glosses conveying the same semantic content. They found that communicative symbolic gestures and the corresponding spoken glosses, although activating distinct modality-specific areas in the inferior and superior temporal cortices, respectively, both engaged to the same extent the left inferior frontal cortex and the posterior temporal cortex bilaterally. These results speak for a shared semantic representation, which may be accessed by distinct modality-specific sensory gateways, according to the linguistic or gestural format of the input.

On the basis of a different task showing communicative interactions featuring two persons, our results may also be viewed as a possible reconciliation, at the pragmatic level, of the two previously mentioned opposite views. Thus, although Xu et al. (2009) suggested that the anterior and the posterior peri-sylvian areas may function as a modality-independent semantic network linking meaning with symbols (both words and gestures), our data suggest that the right and left pSTS, right and left TPJ, precuneus, and MPFC may function as a modality-independent intentionality network, linking meaning with the communicative intention the actor is pursuing with that meaning.

This common representation of communicative intention may then be accessed by modality-specific gateways, which are distinct for linguistic versus extralinguistic expressive means. Here, the two networks constituting the modality-specific gateways do not just reflect the type of sensory input (written vs. iconic), but very much also the higher order cognitive versus sensorimotor neural processes for the integration of such sensory inputs. As a matter of fact, discerning the meaning of a communicative action requires more than just comprehending the meaning of the words or gestures: It also requires the capacity to represent other people's mental states underlying a social action. Hence, the main effect of modality demonstrated that compared with stimuli in the extralinguistic modality, stimuli in the linguistic modality elicited higher activations in classical language areas, including the left peri-sylvian fronto-temporal cortex, extending into the contralateral right anterior middle temporal cortex. Vice versa, stimuli in the extralinguistic modality specifically elicited higher activations in sensorimotor and premotor areas, bilaterally, and also bilaterally in area V5, coding for biological movement (Malikovic et al., 2007).

A debated question concerning classical language areas is the role of Broca's area in linguistic and extralinguistic dimensions. Our data showed an interesting modality-specific activation pattern in the pars opercularis of the inferior frontal cortex, with activation in the left hemisphere for linguistic stimuli and in the right hemisphere for extralinguistic stimuli (Figure 3). This symmetrical activation pattern in BA 44 is reminiscent of evidence provided by previous neuroimaging studies that contrasted the processing of linguistic versus extralinguistic (although not specifically gestural) stimuli. In particular, a study by Tettamanti et al. (2002) showed that the acquisition of novel syntactic rules reflecting universal linguistic principles selectively activates several left peri-sylvian regions, including the pars opercularis of Broca's area (BA 44), compared with syntactic rules following nonlinguistic principles (i.e., violating linguistic universals). In turn, the right homolog of Broca's area was activated by both linguistic and nonlinguistic rules when compared with a cognitive baseline, and more by nonlinguistic than by linguistic rules in the direct comparison. These original findings have been confirmed by other neuroimaging studies, with variations of both the grammatical rules of reference (Musso et al., 2003) and the cognitive domain (Tettamanti et al., 2009). However, it must be noted that whereas the preferential activation of the left BA 44 for linguistic versus nonlinguistic syntactic rules has been consistently reported in a number of further studies (e.g., Bahlmann, Schubotz, Mueller, Koester, & Friederici, 2009; Bahlmann, Schubotz, & Friederici, 2008; Friederici, Bahlmann, Heim, Schubotz, & Anwander, 2006; Hoen, Pachot-Clouard, Segebarth, & Dominey, 2006), the preferential activation of the right BA 44 for nonlinguistic versus linguistic rules has either not been found or not been investigated.

If the involvement of the right homologous region of Broca's area pars opercularis is to be seen as part of the cognitive processing network for extralinguistically structured sequences of symbols and, with respect to the present work, more specifically also for symbolic communicative gestures, we should expect the right BA 44 to be consistently involved in hand action and gesture processing. This is indeed the case for the observation of hand actions (e.g., Buccino et al., 2001), especially in relation to their outcome (Hamilton & Grafton, 2008; Schubotz & von Cramon, 2004). Generally, the left ventral premotor BA 6/44 complex is also involved in these tasks, but the lack of a formal comparison with corresponding linguistic conditions, as we have performed here, does not allow the question of hemispheric dominance for the two modalities to be tackled. Interestingly, recent studies (Dick, Goldin-Meadow, Hasson, Skipper, & Small, 2009; Green et al., 2009; Straube, Green, Weis, Chatterjee, & Kircher, 2009) found that the right inferior frontal gyrus displayed more activity when hand movements were semantically unrelated than when they were related to the accompanying speech. Altogether, these findings may be compatible with the results of our study, in which the participants were required to choose a compatible outcome for the intention revealed by the extralinguistic communicative gesture and discard the noncompatible outcome.

Interaction between Communicative Intention Processing and Communicative Modalities

Finally, a crucial question is whether communicative intentions are processed differently in the linguistic and the extralinguistic modalities. An answer to this question was provided by our analysis of the Intention × Modality interaction, which showed that the interaction effects were limited to two occipital activations, one in each hemisphere (left BA 17 and right BA 18), that is, outside both the core peri-sylvian language network activated by linguistic stimuli and the sensorimotor network activated by extralinguistic stimuli. Interestingly, these interactions in the occipital cortex were mainly driven by a selective signal increase for the LCInt condition.

As we had no a priori hypothesis about the interaction effects, the interpretation of this finding can only be speculative. The activations involved associative and early visual cortices. One hypothesis is that the linguistic condition required an additional engagement of visual imagery processes (Kosslyn, Ganis, & Thompson, 2001) to represent the visual scene required for the understanding of communicative intentions. In addition, it is noteworthy that the occipital cortex is typically involved in ToM tasks. A systematic quantitative meta-analysis of ToM studies has recently been reported by Spreng, Mar, and Kim (2009). Using the activation likelihood estimation approach to analyze 30 ToM neuroimaging studies, the authors found the occipital cortices to be part of the ToM core network. The activation of regions early in the visual stream during ToM tasks has been related to the analysis of visual-kinetic information about the intention to act, a crucial component of the perception of animacy (Castelli, Happé, Frith, & Frith, 2000). The directionality of the interaction effects in the bilateral occipital cortex [LCInt > (XLCInt, LPhC, XLPhC)] may thus be the result of the additional mental imagery processes required by the LCInt condition. In LCInt, a sentence in written form replaced the depiction of a communicative interaction between the two agents, and this may have induced the experimental subjects to represent through mental imagery the modifications of the visual scene implied by the message. This process may also imply an increase in attentional focus on the details of the picture, which is not required by the extralinguistic task, resulting in an enhancement of occipital activation (Kastner, Pinsk, De Weerd, Desimone, & Ungerleider, 1999).

Physical Causality Processing

Unlike other species, humans comprehend external events in terms of mediating intentional or causal forces: observed human behavior is explained by intentional relations between agents, and observed physical events by causal relations between objects. Humans also use these relations to predict outcomes (Tomasello, 1999; Humphrey, 1976).

The present work was not primarily concerned with clarifying the neural correlates underlying the processing of the PhC conditions, which, following the same approach as our previous study (Walter et al., 2004), were treated as matched control conditions within a cognitive subtraction logic. Accordingly, no specific a priori hypotheses on the neural correlates of the PhC conditions were specified. Here we shall only briefly point out that the activation pattern emerging from the main effect of Intention (PhC > CInt) appears compatible with the view that the PhC task engaged attentional and predictive neural processes necessary for understanding causal relations between physical objects. In particular, the task presumably engaged visual, spatial, and attentional processing, as indicated by bilateral activations in the posterior extrastriate cortex, the fusiform gyri, the superior parietal lobules, the supramarginal gyri, the superior frontal gyri (corresponding to the anatomical location of the FEFs), and the lateral pFC (see, e.g., Egner et al., 2008). Additional activations were found in the inferior frontal gyri bilaterally, corresponding to BA 44, which may be crucial, as discussed earlier (Hamilton & Grafton, 2008; Schubotz & von Cramon, 2004), for predicting the outcome of the physical event, whether described in written form or depicted.

Conclusions

Human communicative competence is based on the ability to process a specific class of mental states, namely, communicative intention. Communicative intention represents the primary mental state to be dealt with in explaining other people's communicative actions. Although we may speak of a single agent when we refer to actions in general, when we enter the domain of communication we must always refer to at least one actor and a partner to whom the act is directed. For this reason, the present study examined a full communicative exchange, that is, a communicative action proposed by an actor and the concurrent reaction to that action by a partner.

Our findings demonstrate that a common brain network is recruited for the comprehension of communicative intention, independent of the modality through which it is conveyed. This common brain network, the IPN, includes the precuneus, the left and right pSTS and TPJ, and the MPFC. Additional brain areas outside those involved in intention processing are specifically engaged by the particular communicative modality, most likely reflecting distinct modality-specific sensory gateways to the IPN. Peri-sylvian language areas are recruited by the linguistic modality, whereas sensorimotor and premotor areas are recruited by the extralinguistic modality.
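The claim that a common network is recruited across modalities rests on conjunction inference, for which the minimum statistic (Nichols, Brett, Andersson, Wager, & Poline, 2005, cited in the references) is the standard valid test. A minimal, illustrative Python sketch, with hypothetical t-maps and threshold, not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical voxelwise t-maps for the two intention contrasts:
# linguistic CInt > PhC and extralinguistic CInt > PhC.
n_voxels = 1000
t_linguistic = rng.normal(2.0, 1.0, size=n_voxels)
t_extralinguistic = rng.normal(2.0, 1.0, size=n_voxels)

# Minimum-statistic conjunction: a voxel counts as common to both
# contrasts only if the SMALLER of its two t-values is suprathreshold,
# i.e., both effects are individually significant there.
t_threshold = 3.0
conjunction_map = np.minimum(t_linguistic, t_extralinguistic) > t_threshold

print(f"{conjunction_map.sum()} of {n_voxels} voxels in the conjunction")
```

By construction, the conjunction map is the intersection of the two thresholded maps, which is why it supports the "common network" reading: every surviving voxel is active in both modalities.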

Taken together, our results indicate that the information acquired by different communicative modalities is equivalent from a mental processing standpoint, in particular, at the point at which the actor's communicative intention has to be reconstructed. Thus, communicative intention processing may build on such different sources of information as language and gestures to infer the underlying intended meaning.

Acknowledgments

The authors are grateful to Amanda Miller Amberber (Macquarie Centre for Cognitive Science, Macquarie University, Sydney), Valentina Bambini (Scuola Normale Superiore, Pisa), Cristina Becchio, Gabriella Airenti (Center for Cognitive Science, University of Turin), and the reviewers for their helpful comments. This work was supported by the University of Turin (Ricerca scientifica finanziata dall'Università 2008 “Correlati cognitivi e neurali della cognizione sociale”) and by Regione Piemonte (Project IIINBEMA: Institutions, Behaviour and Markets in Local and Global Settings).

Reprint requests should be sent to Mauro Adenzato, Department of Psychology, Center for Cognitive Science, University of Turin, via Po, 14-10123 Turin, Italy, or via e-mail: mauro.adenzato@unito.it.

REFERENCES

Adenzato, M., & Bucciarelli, M. (2008). Recognition of mistakes and deceits in communicative interactions. Journal of Pragmatics, 40, 608–629.
Airenti, G. (2010). Is a naturalistic theory of communication possible? Cognitive Systems Research, 11, 165–180.
Airenti, G., Bara, B. G., & Colombetti, M. (1993). Conversation and behavior games in the pragmatics of dialogue. Cognitive Science, 17, 197–256.
Amodio, D. M., & Frith, C. D. (2006). Meeting of minds: The medial frontal cortex and social cognition. Nature Reviews Neuroscience, 7, 268–277.
Amunts, K., Schleicher, A., Bürgel, U., Mohlberg, H., Uylings, H. B. M., & Zilles, K. (1999). Broca's region revisited: Cytoarchitecture and intersubject variability. Journal of Comparative Neurology, 412, 319–341.
Austin, J. L. (1962). How to do things with words. Oxford: Oxford University Press.
Bahlmann, J., Schubotz, R. I., & Friederici, A. D. (2008). Hierarchical artificial grammar processing engages Broca's area. Neuroimage, 42, 525–534.
Bahlmann, J., Schubotz, R. I., Mueller, J. L., Koester, D., & Friederici, A. D. (2009). Neural circuits of hierarchical visuo-spatial sequence processing. Brain Research, 1298, 161–170.
Bara, B. G. (2010). Cognitive pragmatics. Cambridge, MA: MIT Press.
Barbieri, F., Buonocore, A., Volta, R. D., & Gentilucci, M. (2009). How symbolic gestures and words interact with each other. Brain and Language, 110, 1–11.
Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge, MA: MIT Press.
Bates, E., & Dick, F. (2002). Language, gesture, and the developing brain. Developmental Psychobiology, 40, 293–310.
Becchio, C., Adenzato, M., & Bara, B. G. (2006). How the brain understands intention: Different neural circuits identify the componential features of motor and prior intentions. Consciousness and Cognition, 15, 64–74.
Bernardis, P., & Gentilucci, M. (2006). Speech and gesture share the same communication system. Neuropsychologia, 44, 178–190.
Bookheimer, S. (2002). Functional MRI of language: New approaches to understanding the cortical organization of semantic processing. Annual Review of Neuroscience, 25, 151–188.
Brüne, M., & Brüne-Cohrs, U. (2006). Theory of mind—Evolution, ontogeny, brain mechanisms and psychopathology. Neuroscience and Biobehavioral Reviews, 30, 437–455.
Buccino, G., Binkofski, F., Fink, G., Fadiga, L., Fogassi, L., Gallese, V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. European Journal of Neuroscience, 13, 400–404.
Carrington, S. J., & Bailey, A. J. (2009). Are there theory of mind regions in the brain? A review of the neuroimaging literature. Human Brain Mapping, 30, 2313–2335.
Castelli, F., Happé, F., Frith, U., & Frith, C. D. (2000). Movement and mind: A functional imaging study of perception and interpretation of complex intentional movement patterns. Neuroimage, 12, 314–325.
Ciaramidaro, A., Adenzato, M., Enrici, I., Erk, S., Pia, L., Bara, B. G., et al. (2007). The intentional network: How the brain reads varieties of intentions. Neuropsychologia, 45, 3105–3113.
Dick, A. S., Goldin-Meadow, S., Hasson, U., Skipper, J. I., & Small, S. L. (2009). Co-speech gestures influence neural activity in brain regions associated with processing semantic information. Human Brain Mapping, 30, 3509–3526.
Egner, T., Monti, J. M. P., Trittschuh, E. H., Wieneke, C. A., Hirsch, J., & Mesulam, M. (2008). Neural integration of top–down spatial and feature-based information in visual search. Journal of Neuroscience, 28, 6141–6151.
Fadiga, L., & Craighero, L. (2006). Hand actions and speech representation in Broca's area. Cortex, 42, 486–490.
Friederici, A. D., Bahlmann, J., Heim, S., Schubotz, R. I., & Anwander, A. (2006). The brain differentiates human and non-human grammars: Functional localization and structural connectivity. Proceedings of the National Academy of Sciences, U.S.A., 103, 2458–2463.
Frith, C. D., & Frith, U. (2006). How we predict what other people are going to do. Brain Research, 1079, 36–46.
Gentilucci, M., Bernardis, P., Crisi, G., & Dalla Volta, R. (2006). Repetitive transcranial magnetic stimulation of Broca's area affects verbal responses to gesture observation. Journal of Cognitive Neuroscience, 18, 1059–1074.
Goldin-Meadow, S. (1999). The role of gesture in communication and thinking. Trends in Cognitive Sciences, 3, 419–429.
Green, A., Straube, B., Weis, S., Jansen, A., Willmes, K., Konrad, K., et al. (2009). Neural integration of iconic and unrelated coverbal gestures: A functional MRI study. Human Brain Mapping, 30, 3309–3324.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics. Speech acts (pp. 41–58). New York: Academic Press.
Grill-Spector, K., Henson, R., & Martin, A. (2006). Repetition and the brain: Neural models of stimulus-specific effects. Trends in Cognitive Sciences, 10, 14–23.
Gunter, T. C., & Bach, P. (2004). Communicating hands: ERPs elicited by meaningful symbolic hand postures. Neuroscience Letters, 372, 52–56.
Hadar, U. D., Wenkert-Olenik, R. K., & Soroker, N. (1998). Gesture and the processing of speech: Neuropsychological evidence. Brain and Language, 62, 107–126.
Hamilton, A. F. D. C., & Grafton, S. T. (2008). Action outcomes are represented in human inferior frontoparietal cortex. Cerebral Cortex, 18, 1160–1168.
Hoen, M., Pachot-Clouard, M., Segebarth, C., & Dominey, P. F. (2006). When Broca experiences the Janus syndrome: An ER-fMRI study comparing sentence comprehension and cognitive sequence processing. Cortex, 42, 605–623.
Humphrey, N. K. (1976). The social function of intellect. In P. P. G. Bateson & R. A. Hinde (Eds.), Growing points in ethology (pp. 303–317). Cambridge, UK: Cambridge University Press.
Kaan, E., & Swaab, T. Y. (2002). The brain circuitry of syntactic comprehension. Trends in Cognitive Sciences, 6, 350–356.
Kampe, K. K. W., Frith, C. D., & Frith, U. (2003). "Hey John": Signals conveying communicative intention toward the self activate brain regions associated with "mentalizing," regardless of modality. Journal of Neuroscience, 23, 5258–5263.
Kastner, S., Pinsk, M. A., De Weerd, P., Desimone, R., & Ungerleider, L. G. (1999). Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron, 22, 751–761.
Kendon, A. (2004). Gesture: Visible action as utterance. Cambridge: Cambridge University Press.
Kircher, T., Straube, B., Leube, D., Weis, S., Sachs, O., Willmes, K., et al. (2009). Neural interaction of speech and gesture: Differential activations of metaphoric co-verbal gestures. Neuropsychologia, 47, 169–179.
Kosslyn, S. M., Ganis, G., & Thompson, W. L. (2001). Neural foundations of imagery. Nature Reviews Neuroscience, 2, 635–642.
Krauss, R. M., & Hadar, U. (1999). The role of speech-related arm/hand gestures in word retrieval. In R. Campbell & L. Messing (Eds.), Gesture, speech, and sign (pp. 93–116). Oxford: Oxford University Press.
Laudanna, A., Thornton, A. M., Brown, G., Burani, C., & Marconi, L. (1995). Un corpus dell'italiano scritto contemporaneo dalla parte del ricevente. In S. Bolasco, L. Lebart, & A. Salem (Eds.), III Giornate internazionali di analisi statistica dei dati testuali (pp. 103–109). Roma: Cisu.
Levelt, W. J., Richardson, G., & La Heij, W. (1985). Pointing and voicing in deictic expressions. Journal of Memory and Language, 24, 133–164.
Malikovic, A., Amunts, K., Schleicher, A., Mohlberg, H., Eickhoff, S. B., Wilms, M., et al. (2007). Cytoarchitectonic analysis of the human extrastriate cortex in the region of V5/MT+: A probabilistic, stereotaxic map of area hOc5. Cerebral Cortex, 17, 562–574.
McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.
Musso, M., Moro, A., Glauche, V., Rijntjes, M., Reichenbach, J., Büchel, C., et al. (2003). Broca's area and the language instinct. Nature Neuroscience, 6, 774–781.
Nakamura, A., Maess, B., Knösche, T. R., Gunter, T. C., Bach, P., & Friederici, A. D. (2004). Cooperation of different neuronal systems during hand sign recognition. Neuroimage, 23, 25–34.
Nichols, T., Brett, M., Andersson, J., Wager, T., & Poline, J. B. (2005). Valid conjunction inference with the minimum statistic. Neuroimage, 25, 653–660.
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113.
Penny, W. D., & Holmes, A. P. (2003). Random effects analysis. In R. S. J. Frackowiak, K. J. Friston, C. D. Frith, R. Dolan, C. J. Price, S. Zeki, et al. (Eds.), Human brain function (pp. 843–850). San Diego, CA: Academic Press.
Petitto, L. A. (1987). On the autonomy of language and gesture: Evidence from the acquisition of personal pronouns in American Sign Language. Cognition, 27, 1–52.
Ratcliff, R. (1993). Methods for dealing with reaction time outliers. Psychological Bulletin, 114, 510–532.
Saxe, R., Carey, S., & Kanwisher, N. (2004). Understanding other minds: Linking developmental psychology and functional neuroimaging. Annual Review of Psychology, 55, 87–124.
Schubotz, R. I., & von Cramon, D. Y. (2004). Sequences of abstract nonbiological stimuli share ventral premotor cortex with action observation and imagery. Journal of Neuroscience, 24, 5467–5474.
Spreng, R. N., Mar, R. A., & Kim, A. S. N. (2009). The common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: A quantitative meta-analysis. Journal of Cognitive Neuroscience, 21, 489–510.
Straube, B., Green, A., Weis, S., Chatterjee, A., & Kircher, T. (2009). Memory effects of speech and gesture binding: Cortical and hippocampal activation in relation to subsequent memory performance. Journal of Cognitive Neuroscience, 21, 821–836.
Tettamanti, M., Alkadhi, H., Moro, A., Perani, D., Kollias, S., & Weniger, D. (2002). Neural correlates for the acquisition of natural language syntax. Neuroimage, 17, 700–709.
Tettamanti, M., Rotondi, I., Perani, D., Scotti, G., Fazio, F., Cappa, S. F., et al. (2009). Syntax without language: Neurobiological evidence for cross-domain syntactic computations. Cortex, 45, 825–838.
Tirassa, M. (1999). Communicative competence and the architecture of the mind/brain. Brain and Language, 68, 419–441.
Tomasello, M. (1999). The cultural origins of human cognition. Cambridge, MA: Harvard University Press.
Tomasello, M. (2008). Origins of human communication. Cambridge, MA: MIT Press.
Van Overwalle, F. (2009). Social cognition and the brain: A meta-analysis. Human Brain Mapping, 30, 829–858.
Van Overwalle, F., & Baetens, K. (2009). Understanding others' actions and goals by mirror and mentalizing systems: A meta-analysis. Neuroimage, 48, 564–584.
Walter, H., Adenzato, M., Ciaramidaro, A., Enrici, I., Pia, L., & Bara, B. G. (2004). Understanding intentions in social interaction: The role of the anterior paracingulate cortex. Journal of Cognitive Neuroscience, 16, 1854–1863.
Walter, H., Ciaramidaro, A., Adenzato, M., Vasic, N., Ardito, R. B., Erk, S., et al. (2009). Dysfunction of the social brain in schizophrenia is modulated by intention type: An fMRI study. Social Cognitive and Affective Neuroscience, 4, 166–176.
Wang, A. T., Lee, S. S., Sigman, M., & Dapretto, M. (2006). Developmental changes in the neural basis of interpreting communicative intent. Social Cognitive and Affective Neuroscience, 1, 107–121.
Willems, R. M., de Boer, M., de Ruiter, J. P., Noordzij, M. L., Hagoort, P., & Toni, I. (2010). A dissociation between linguistic and communicative abilities in the human brain. Psychological Science, 21, 8–14.
Willems, R. M., & Hagoort, P. (2007). Neural evidence for the interplay between language, gesture, and action: A review. Brain and Language, 101, 278–289.
Worsley, K., Marrett, S., Neelin, P., Vandal, A., Friston, K., & Evans, A. (1996). A unified statistical approach for determining significant signals in images of cerebral activation. Human Brain Mapping, 4, 58–73.
Xu, J., Gannon, P. J., Emmorey, K., Smith, J. F., & Braun, A. R. (2009). Symbolic gestures and spoken language are processed by a common neural system. Proceedings of the National Academy of Sciences, U.S.A., 106, 20664–20669.