The ability to recognize oneself in voluntary action is called the sense of agency and refers to the experience of causing one's own actions and their sensory consequences. This form of self-awareness is important not only for motor control but also for social interactions and the ascription of causal responsibility. Here, we examined the sense of agency at early and prereflective stages of action perception using ERPs. Subjects performed a visual forced-choice response task in which action effects were either caused by the subject or by the computer. In addition, to modulate the conscious experience of agency, action effects were subliminally primed by the presentation of congruent, incongruent, or neutral effect stimuli before the action. First, we observed sensorimotor attenuation in the visual ERP selectively for self-generated action effects. That is, the N1 component, a negative deflection around 100 msec after a visual stimulus, was smaller in amplitude for visual effects caused by the subject as compared with effects caused by the computer. Second, congruent effect priming enhanced the explicit judgment of agency and further reduced the N1 amplitude for self-generated effects, although effect primes were not consciously processed. Taken together, these results provide evidence of a top–down modulation of sensory processing of action effects by prior effect information and support the neurophysiological mechanism of sensorimotor attenuation as underlying self-registration in action. Our findings suggest that both efferent information and prior thoughts about the action consequence provide important cues for a prereflective form of the experience of being an agent.
How do we come to experience that we are causing our thoughts, actions, and even external events? The perceptual experience of voluntary actions comprises a sense of self in action, that is, a sense of causing and controlling the action and its perceptual consequences. If we think about turning on the light and flip the switch, we will automatically and indubitably feel that we ourselves caused the light to come on and not somebody or something else. This form of self-awareness is called the sense of agency, and it mostly remains prereflective, that is, in most actions we are not explicitly conscious of it (Gallagher, 2000). It can be disturbed in psychiatric patients, most typically in the case of schizophrenia, with delusions of control and symptoms of thought insertion. These patients interpret their own thoughts or actions as being controlled or influenced by someone else (Blakemore, Wolpert, & Frith, 2002). Despite an increasing body of research on the sense of agency (David, Newen, & Vogeley, 2008), the underlying neurocognitive mechanisms are not well understood, in part because studies have used different measures and targeted different levels of the sense of agency, meaning that findings cannot be related directly to each other.
A recent conceptual framework (Synofzik, Vosgerau, & Newen, 2008) distinguishes at least two important representational levels of the sense of agency: the feeling of agency (i.e., a sense of coherence in action processing) and the judgment of agency (i.e., a reflective attribution of authorship), with different cues entering each level. The judgment of agency is thought to result from a higher-order reflective inference made on the basis of giving weight to cognitive indicators such as contextual cues and belief states. In contrast, the feeling of agency, being part of the “minimal” or “embodied” self (Jeannerod, 2007; Gallagher, 2000), is not based on conscious reflection but is assumed to depend on automatic processing of central and peripheral signals generated by the action itself.
Importantly, most studies so far have used explicit measures requiring reflective authorship attribution and, therefore, only captured the level of judgment of agency, for example, by asking participants to indicate whether they or the computer caused a visual effect (Aarts, Custers, & Wegner, 2005). These studies neglected an important aspect of agency, namely the nonconceptual immediate feeling of one's action, which can only be assessed by implicit measures. In fact, in our everyday lives, a nonreflective phenomenal experience is more common than an explicit representation of selfhood. Our sense of self persists even when we are not engaged in explicit reflection. Imagine the situation where you intend to cross a road. You will focus your attention on the coming cars but not on yourself, although you are conscious of yourself, albeit in a nonreflective manner. An implicit measure that has been proposed to capture this background experience of one's action is the attenuation of self-produced sensations (Blakemore, Wolpert, & Frith, 2000), and this was the focus of our experiment.
Sensory attenuation has been widely investigated by psychophysical studies exploring, for instance, the basis of why individuals cannot tickle themselves (Blakemore, Frith, & Wolpert, 1999; Weiskrantz, Elliott, & Darlington, 1971). This mechanism is considered to optimize motor control but also to facilitate the ability to differentiate sensations caused by oneself from those caused by other agents or external stimuli. Recent neuroscientific research has started to specify the neural processes underlying this effect. Functional neuroimaging studies, for example, report less activation of the primary somatosensory cortex for self-generated as compared with externally generated tactile stimuli (Helmchen, Mohr, Erdmann, Binkofski, & Buchel, 2006; Blakemore, Wolpert, & Frith, 1998). Furthermore, research using EEG has shown reduced amplitudes of auditory ERPs following self-produced acoustic sensory input (Martikainen, Kaneko, & Hari, 2005; Houde, Nagarajan, Sekihara, & Merzenich, 2002; Curio, Neuloh, Numminen, Jousmaki, & Hari, 2000).
Because attenuation shows up in different sensory systems, it seems to rely on a general modality-independent mechanism. It has been suggested that sensory attenuation reflects a reduction in the perceptual prediction error depending on forward model predictions based on efference copy (Blakemore et al., 2000; Sperry, 1950; von Holst & Mittelstaedt, 1950). The cerebellum and parietal cortex have been shown to play a central role in the processing of prediction error (Blakemore & Sirigu, 2003; Wolpert, Miall, & Kawato, 1998; Miall, Weir, Wolpert, & Stein, 1993). Precise spatio-temporal predictions were originally assumed to derive solely from motoric execution signals; however, recent studies revealed that anticipation based on motor preparation alone (Voss, Ingram, Wolpert, & Haggard, 2008; Voss, Ingram, Haggard, & Wolpert, 2006) also contributes to this self-specific suppression effect.
Until now, models of the sense of agency have focused exclusively either on its sensorimotor underpinnings (Frith, Blakemore, & Wolpert, 2000; Blakemore et al., 1999) or, in contrast, on higher-level inferences that are unrelated to internal movement-related information (Wegner, 2002). Neither type of model, however, has yet looked at their interrelation. For example, the theory of apparent mental causation (Wegner, 2002, 2003) assigns a central role to cues that are independent of action execution, such as thoughts and beliefs before the action or contextual information. It is assumed that these cues are used by a mental inference mechanism to generate a sense of agency. This view suggests that people think they have caused a light to turn on (even if they actually did not act at all) if they were thinking about it just before it happened and if there seems to be no alternative cause. Evidence supporting this theory has been provided by studies that used priming to manipulate thoughts about an action effect before the action was actually performed (Moore, Wegner, & Haggard, 2009; Sato, 2009; Linser & Goschke, 2007; Aarts et al., 2005; Wegner & Wheatley, 1999). The typical finding is that consistency between a prime and a subsequent action effect enhances the reported experience of agency even if the effect has in fact not been caused by the subject's action and is, therefore, independent of the execution commands of the motor system.
In contrast, the motor prediction model of agency, derived from theories on motor control (Wolpert, Ghahramani, & Jordan, 1995), claims that the sense of agency depends on predictions of an internal forward model, which are compared with input from sensory systems (Frith et al., 2000). In particular, the forward model receives a copy of the motor command (von Holst & Mittelstaedt, 1950) that is transformed into the expected sensory consequences resulting from the particular action. It is further assumed that a comparator mechanism then matches the predicted and actual sensory outcome: Congruency induces a sense of agency, whereas incongruence leads to the experience of external causation (Synofzik et al., 2008), that is, according to this view, the experience of having caused a light to turn on depends on the action of flipping the switch and the predictions based on the learning history of this action–effect coupling. Evidence for the comparator model has been provided by a number of behavioral and neurophysiological studies as well as patient studies (Voss et al., 2006; Sato & Yasuda, 2005; Ford & Mathalon, 2004; Shergill, Bays, Frith, & Wolpert, 2003; Feinberg & Guazzelli, 1999; Blakemore et al., 1998).
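The comparator logic described above can be illustrated with a deliberately minimal sketch. This is a hypothetical toy function, not the authors' model or any published implementation; the names `comparator`, `predicted_effect`, and `actual_effect` are assumptions introduced here for illustration only.

```python
def comparator(predicted_effect, actual_effect):
    """Toy comparator-model sketch (hypothetical illustration).

    A forward model transforms the efference copy into a predicted
    sensory outcome; the comparator matches prediction against the
    actual outcome. Congruence yields self-attribution, mismatch
    yields attribution to an external cause.
    """
    if predicted_effect is None:
        # No efference copy available (e.g., passive observation):
        # nothing to match, so the effect is attributed externally.
        return "external"
    return "self" if predicted_effect == actual_effect else "external"
```

For example, flipping a switch with the learned prediction "light on" followed by the light actually coming on would yield `"self"`, whereas the same sensory event without a preceding motor command would yield `"external"`.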
However, despite the evidence that different cues (e.g., motor prediction and prior thoughts) can contribute to the sense of agency, their specific contribution to the different levels of agency and their integration has not been studied yet. The present study sought to address this issue by focusing on sensory attenuation as a possible neural proxy (i.e., as a nonverbal measure of the feeling of agency) and to compare it with an explicit, verbal measure capturing the judgment of agency.
The specific aim of the present study was twofold. First, we sought to verify whether sensory attenuation can be identified in the visual ERP, not only in the auditory ERP, and can thus be considered a possible implicit neural indicator of the sense of agency (cf. Gallagher, 2000). Second, and more importantly, we aimed to find out whether prior thoughts about the consequence of an action can not only influence the reflective experience of control (i.e., the judgment of agency) but also modulate early sensory processing in terms of enhancement or attenuation (i.e., the feeling of agency). For example, if one accidentally flips a light switch by leaning against a wall, the sudden illumination would probably be unexpected, and one might not immediately consider oneself as being the cause. In contrast, if an individual anticipates the appearance of the light, for example, because somebody else warned him or her, the sudden light would certainly be less unexpected and less attention-grabbing and would be accompanied by an immediate feeling of causation. Such a modulation of attention might be mediated by the process of sensory attenuation.
To investigate these issues, we recorded the EEG activity while participants viewed either self-generated or externally generated visual effects. Critically, we used masked priming to establish a subliminal thought about the action effect before the action. It has been shown that the illusion of conscious control over an action effect even occurs under conditions in which the prime is presented below the level of conscious awareness (Linser & Goschke, 2007; Aarts et al., 2005). In particular, we were interested in how prior thoughts can affect sensory attenuation of self-generated visual effects in early ERP components such as N1. Furthermore, participants gave estimates of the causal relation between their action and the effect as a measure of the judgment of agency. On the basis of theories of motor control, we expected a reduced magnitude in the N1 component as a reflection of sensory attenuation, specifically linked to self-generated effects. According to the inferential account, priming should affect the conscious judgment of agency (i.e., enhance causality estimates in cases in which prime and effect are congruent). Moreover, because the effect anticipation can be matched to the actual sensory input and thereby reduce the sensory prediction error, we predicted that a congruency between prime and subsequent effect should lead to further attenuation of the early visual ERP. In other words, we predicted that a cognitive agency indicator such as prior thoughts about a subsequent effect is integrated not only at the level of conscious judgments but already at the level of primary perceptual processing (i.e., the level of feeling of agency). Furthermore, according to the inferential account, the influence of prior thoughts is independent of efferent motor information, and therefore, the impact of priming should be present even if effects are not actively produced by a person.
Twenty-four right-handed subjects (12 women; mean age = 24 years, range = 19–31 years) with normal or corrected-to-normal vision participated in the experiment after providing written informed consent. The study was performed in accordance with the Declaration of Helsinki.
Task and Procedure
In line with previous ERP research on sensory attenuation (Bäss, Jacobsen, & Schröger, 2008; Martikainen et al., 2005), the experiment consisted of three different tasks (see Figure 1). In the motor–effect (ME) task, subjects self-generated visual action effects, and in the effect-only (E) task, the same visual effects were externally generated. Differences in the effect-related ERP between the ME and E task would indicate self-specific processing of visual action consequences. A motor-only (M) task served as a control task to rule out motor activity as a possible confounding factor in the comparison between the ME and E task.
The ME task was a modified version of the agency paradigm described by Linser and Goschke (2007), in which subjects gave forced-choice left and right keypress responses, which triggered the appearance of one of two alleged effect stimuli. Participants were asked to pay attention to the relation between the choice of the response key and the type of effect stimulus. Stimuli were displayed in black on a gray background using Presentation software (Neurobehavioral Systems, Inc., Albany, CA). Each trial began with a 150-msec presentation of a forward mask, which was followed by a prime (40 msec) and a backward mask (20 msec).
The prime stimuli consisted of a set of three arrows pointing either upward or downward (subtending a visual angle of 0.3° × 1.6°). The mask was composed of upward and downward pointing arrows superimposed on each other, subtending an angle of 0.7° × 2.3°. Following the backward mask, one of two target stimuli (a circle or square) was randomly selected, presented for 50 msec (with a visual angle of 0.5° in width and height), and replaced by a blank screen, which remained until participants pressed the left or right key. The target–response mapping was counterbalanced across participants. After a 20-msec delay, responses were followed by either upward or downward pointing arrows (of the size of the prime stimuli), which were presented for 750 msec and followed by an ISI of 1500 msec. To create a context of agency ambiguity, the contingency between action and effect was lowered to 75%, a degree at which the influence of effect priming has been demonstrated in previous studies (Sato, 2009; Linser & Goschke, 2007), that is, the mapping of the target stimulus, which determined the action choice, and the effect stimulus was not consistent across all trials: In 75% of the trials, one particular target stimulus was related to one particular effect stimulus, whereas in the remaining 25% of trials, the opposite mapping appeared. This target–effect mapping was counterbalanced across participants.
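The 75:25 target–effect contingency described above can be sketched in code. This is an illustrative helper, not the original Presentation script; the function name `make_block` and the dictionary-based mapping are assumptions introduced here.

```python
import random

def make_block(n_trials=48, p_consistent=0.75, seed=0):
    """Sketch of one block's trial list (hypothetical helper).

    Each trial pairs a target (circle or square) with an effect
    (upward or downward arrows). In 75% of trials the standard
    target->effect mapping holds; in the remaining 25% the mapping
    is reversed, creating a context of agency ambiguity.
    """
    rng = random.Random(seed)
    standard = {"circle": "up", "square": "down"}   # assumed standard mapping
    reversed_map = {"circle": "down", "square": "up"}
    trials = []
    for _ in range(n_trials):
        target = rng.choice(["circle", "square"])
        use_standard = rng.random() < p_consistent
        effect = (standard if use_standard else reversed_map)[target]
        trials.append({"target": target, "effect": effect,
                       "consistent": use_standard})
    return trials
```

In the actual experiment the target–effect mapping was additionally counterbalanced across participants, which a full script would handle by swapping the two dictionaries between subjects.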
After each block of 48 trials, the experience of control was assessed using a 10-cm visual analog scale. Subjects had to judge the degree to which they thought that their keypress (left or right) determined the pointing direction of the arrows on a scale ranging from 0% (no control) to 100% (full control). The critical factor was the relationship between prime and effect stimuli, which was either congruent (arrows pointed in the same direction), incongruent (arrows pointed in the opposite direction), or neutral (primes consisted of superimposed upward and downward arrows). The prime–effect relation was varied blockwise, and participants performed three blocks of 48 trials for each prime–effect condition.
The E task was an observation task in which stimuli were externally generated by the computer with identical timing as in the ME task. That is, subjects just passively viewed the same visual scenario. In place of the response window and the button press, a blank screen appeared for a duration of 470 msec, which was selected to mirror the mean RT in the ME task. The subsequent effect stimuli were presented with the same target–effect mapping (75:25) as in the ME task. Participants were again asked to pay attention to the causal relation between the target stimulus and the subsequent effect stimulus and had to judge the degree to which the target determined the pointing direction of the arrows on a visual analog scale ranging from 0 to 100 as in the ME task. The prime–effect relation was varied blockwise, as in the ME task, and participants performed three blocks of 48 trials for each condition.
In the M task, subjects had to respond to the target stimuli in the same manner as in the ME task, but no visual effect stimuli were presented. That is, responses were followed by a blank screen for 2250 msec until the next trial started. The order of the tasks was fixed across all participants: Six alternating blocks of ME and E tasks were followed by one block of the M task consisting of 48 trials and a 2-min break (ME–E–ME–E–ME–E–M break). This sequence of tasks was repeated three times.
Prime Awareness and Prime Processing
After the main experiment, participants performed two additional control tasks, which aimed to test whether prime stimuli were perceived although subjects were unaware of them. First, subjects performed a response priming task, which was used to measure prime perception. Subjects pressed the left or right key as quickly and as accurately as possible in response to one of two target stimuli. The stimulus material was the same as in the ME task except that the former effect stimuli served as targets (up or downward pointing arrows), and responses were no longer followed by an effect. The ISI was 1500 msec. The relation between prime and target stimulus was either congruent, incongruent, or neutral. Differences in RTs between any of these conditions would indicate that prime stimuli were perceived. The target–response mapping was counterbalanced across participants. Twelve practice trials were followed by a random sequence of 180 trials with 60 trials per condition.
Furthermore, at the end of the experiment, prime awareness was assessed using self-report (in a structured interview) as a subjective measure and a prime discrimination task as an objective measure. The latter task consisted of the same stimulus material as the ME task, except that targets were now replaced by a question mark (50-msec duration, 0.5° × 0.5°). Subjects were instructed to attend to the masked prime stimuli and to try to discriminate between them by pressing the left button for upward pointing arrows and the right button for downward pointing arrows and to guess if they did not recognize the pointing direction. The prime response mapping was the same as in the target discrimination task. Primes appeared in random order, and 160 trials were presented.
EEG Recording and Analysis
EEG was recorded from 64 scalp Ag–AgCl electrodes embedded in a fabric cap according to the international 10–20 system (BioSemi Active II System, BioSemi, Amsterdam, the Netherlands). EOG was recorded from electrodes placed external to the outer canthus of each eye and below and above the right eye. All channels were referenced to the left mastoid, and signals were amplified and digitized at 512 Hz.
Analysis of EEG data was performed using Brain Vision software (Brain Products GmbH, Gilching, Germany). EEG recordings were low-pass filtered at 30 Hz, high-pass filtered at 0.75 Hz, and rereferenced to average reference. Nonstereotyped muscular artifacts such as swallowing or temporary electrode artifacts were identified by visual inspection and rejected from further analysis. Repeatedly occurring, stereotyped artifacts such as eye movements or heartbeat were identified and removed using independent component analysis (Jung et al., 2000). This procedure led to a rejection of 1.4% of epochs, on average (mean = 13.97, range = 2–38). Two participants were excluded from the ERP analysis because of excessive movement-related artifacts. Subsequently, stimulus-locked data epochs were computed (−200 to 400 msec) and baseline-corrected using a 100-msec window before the response. Separate ERP average waveforms were then computed for each of the three tasks (ME, E, and M tasks) and for each of the three prime–effect conditions (congruent, incongruent, and neutral, separately for the ME and E tasks). Finally, grand mean ERPs were calculated by averaging each condition across participants. Only trials with correct responses (mean percentage = 99%) were included in grand mean average ERPs and statistical analyses.
Sensory attenuation effects in the auditory modality have been found to be most prominent around 100 msec following the effect stimulus over fronto-central brain regions (Bäss et al., 2008; Martikainen et al., 2005). The visual N1 has distinct subcomponents (Vogel & Luck, 2000), an early anterior subcomponent peaking around 100 msec after stimulus onset over fronto-central sites and a late posterior subcomponent peaking around 160 msec after stimulus onset over infero-posterior sites. In our analysis, we focused on the anterior N1 component and quantified its mean amplitude in the average ERP waveform of each experimental condition for each participant at nine electrode sites in the fronto-centro-parietal region (FC3, FCz, FC4, C3, Cz, C4, CP3, CPz, and CP4). Mean voltages were calculated in the 80- to 130-msec time interval after the onset of the effect stimuli. The measurement window comprises the latency of the anterior visual N1 component usually reported in the literature (Vogel & Luck, 2000). The impact of a possible oddball effect in the N1 time range was examined and excluded. Because no difference in N1 amplitude was observed for standard and deviant effect stimuli, both standards and deviants were included in the average ERP to increase the signal-to-noise ratio.
Mean ERP voltages were submitted to a four-way repeated measure ANOVA with the factors task (two levels: ME and E), prime–effect condition (three levels: congruent, incongruent, and neutral), anterior–posterior electrode location (three levels: fronto-central, central, and centro-parietal), and lateral scalp location (three levels: 3 = left; z = midline; 4 = right). If data violated the sphericity assumption, we applied the Greenhouse–Geisser correction. We used Tukey's HSD test as a multiple comparison post hoc procedure for further examination of differences and interactions between task, priming condition, and electrode location. All measured amplitude values were tested for normal distribution with the Kolmogorov–Smirnov test.
In the prime discrimination task, subjects' ability to discriminate prime stimuli was at chance level (one-sample t test with the chance level of performance set at 50%, t(19) = 1.14, p = .27), which indicates that subjects were not aware of the prime stimuli. Two participants who reported conscious prime detection were excluded, and further analyses were conducted with data from the remaining 20 subjects.
In the response priming task, we observed significant differences in RTs, F(2, 38) = 53.01, p < .001, and error rates, F(2, 38) = 9.04, p < .001, depending on prime–effect congruency. This indicates that participants processed the prime stimuli even though they were unaware of them. RTs were faster in congruent trials (M = 403 msec, SEM = 49.72) as compared with incongruent trials (M = 452 msec, SEM = 44.39), p < .001, or neutral trials (M = 420 msec, SEM = 49.31), p < .01, and the error rate was lower for congruent trials (M = 4.4%, SEM = 3.19) as compared with incongruent trials (M = 10.6%, SEM = 7.02), p < .001.
Figure 2 displays mean explicit agency judgments for different prime–effect conditions and separately for the ME and E tasks.
The ANOVA with mean rating scores given in the ME task as the dependent variable yielded a significant main effect of Prime–Effect Congruency, F(2, 38) = 4.32, p < .05. A Tukey's HSD post hoc test revealed that participants reported a significantly stronger experience of control over the effect stimuli after blocks with congruent (M = 53.5, SEM = 2.48) as compared with incongruent prime–effect pairs (M = 43.2, SEM = 3.26), p < .05. No significant difference in control judgments was found between congruent and neutral trials (M = 48.9, SEM = 3.49, p > .15) or between incongruent and neutral trials (p > .15). The prime–effect conditions were not associated with differences in mean RTs (p > .15) or error rates (p > .15); hence, response differences cannot account for the effect on perceived control.
In the E task, participants also provided causality judgments concerning the relation between target stimuli and effect stimuli. This judgment task was included to ensure comparable levels of involvement and attention between both the ME and E tasks. In contrast to the results obtained in the ME task, the ANOVA for the E task showed no significant difference in causality judgments between conditions of congruent (M = 46.3, SEM = 3.31), incongruent (M = 44.8, SEM = 3.67), and neutral priming (M = 49.6, SEM = 2.80, p > .15).
Anterior N1 Scalp Distribution
Figure 3A displays the grand average ERP waveforms recorded at fronto-central, central, and centro-parietal electrodes obtained during the E and ME tasks, separately for the three experimental prime–effect conditions. In Figure 3B, the scalp map of the difference wave between the E task and ME task is shown. An anterior N1 component is evident in the ERP waveform as a negative deflection peaking around 100 msec after stimulus onset, with a midcentral scalp distribution.
The ANOVA across both tasks revealed a main effect of Anterior–Posterior Electrode Location, F(2, 38) = 15.52, p < .001, which indicated that N1 was larger over fronto-central and central compared with centro-parietal brain regions. A significant interaction between Laterality and Anterior–Posterior Electrode Location was present, F(2, 38) = 4.74, p < .01. This interaction effect showed that at central and centro-parietal sites, the N1 amplitude was larger at midline than at lateral electrodes, all ps < .05, whereas at frontal sites, no difference between midline and lateral electrodes was present. Comparisons between left versus right hemisphere electrode positions did not reveal significant differences in N1 amplitudes. No differences in scalp distribution of the ERP were observed between the ME and E tasks.
Anterior N1 and Sensory Attenuation
From Figure 3A, it is clear that the N1 wave is larger for effect stimuli elicited in the E task than in the ME task. However, before computing and comparing N1 amplitudes, we first had to correct for component overlap in the ME task. In our task design, stimulus and response occurred simultaneously in the ME task, such that brain responses elicited by stimulus processing and motor activity were likely to overlap and distort the computed stimulus-locked ERP waveforms. The corrected ERPs in the ME task were obtained by subtraction of activity elicited by the M task, in which only motor responses and no effect stimuli occurred. Figure 3C demonstrates that motor-related activity did not modulate the N1 amplitude. By using the corrected ERPs in all subsequent analyses, the influence of motor activity as a confounding factor in the observed differences between the tasks could be ruled out.
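The overlap correction described above amounts to a pointwise subtraction of the M-task ERP from the ME-task ERP. As an illustrative sketch (the actual correction was performed in Brain Vision software; the helper name `correct_overlap` and the channels × time array layout are assumptions):

```python
import numpy as np

def correct_overlap(erp_me, erp_m):
    """Sketch of the motor-overlap correction (hypothetical helper).

    Subtracts the ERP of the motor-only (M) task, which contains
    response-related but no effect-stimulus activity, from the
    ME-task ERP, leaving activity attributable to processing of
    the visual action effect.
    """
    return np.asarray(erp_me, dtype=float) - np.asarray(erp_m, dtype=float)
```

This works because stimulus and response coincide in the ME task, so the motor contribution is time-locked in the same way in both waveforms.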
The ANOVA across all prime–effect conditions yielded a significant main effect of Task, F(1, 19) = 6.05, p < .05, indicating smaller amplitudes in the ME task. That is, we observed a distinct attenuation of the visual N1 to self-generated relative to externally generated effects. Post hoc comparisons using the Tukey's procedure indicated a significant self-specific N1 attenuation in conditions of congruent effect priming, p < .01, as well as incongruent effect priming, p < .01, but not in conditions of neutral effect priming, p = .17. No interaction effects between Task and Electrode Position, laterality or anterior–posteriority, were observed. That is, the N1 attenuation effect did not differ in size between fronto-central, central, and centro-parietal sites nor did it differ between hemispheres.
Anterior N1 and Effect Priming
Results of an ANOVA across Tasks and Electrode Sites showed a main effect of Prime–Effect Congruency, F(2, 38) = 5.02, p < .05, indicating smaller amplitudes for congruent prime–effect conditions as compared with incongruent and neutral priming. Priming and Laterality interacted significantly, F(4, 76) = 4.03, p < .01, because of a larger effect of priming at midline compared with lateral electrodes. The interaction between Prime–Effect Congruency and Task did not reach significance, F(2, 38) = 2.34, p = .10.
To further investigate the differential effects of priming on N1 amplitudes, we computed separate ANOVAs for the ME and E tasks, including the factors Prime–Effect Condition, Anterior–Posterior Electrode Location, and Laterality. Figure 4 shows grand average ERP waveforms for the ME task and the E task at electrode Cz as a function of priming.
In the ME task, there was a significant main effect of Prime–Effect Congruency on N1 amplitude, F(2, 38) = 6.87, p < .01, indicating smaller amplitudes for congruent effect priming as compared with incongruent and neutral effect priming. Focusing on electrode site Cz, we conducted Tukey's HSD post hoc comparisons of amplitudes between priming conditions. These analyses demonstrated that N1 amplitudes were significantly smaller for the congruent priming condition as compared with incongruent or neutral effect priming, all ps < .05, without any significant difference between the latter two conditions (p > .15). Moreover, we observed a significant interaction between priming and laterality, F(4, 76) = 2.86, p < .05, indicating that the effect of priming was largest at midline electrodes compared with lateral sites.
For the E task, the ANOVA yielded no significant effect of Prime–Effect Congruency on N1 amplitude and no interaction effects between Priming and Electrode Position (ps > .15).
Thus far, our results demonstrate attenuation of ERP responses specifically to self-generated visual effects in the ME task. Further analyses showed that priming modulated the ERP response in the ME but not in the E task, and most strongly at central electrode locations. To test whether priming had an impact on the self-specific attenuation effect, we computed an ANOVA with amplitudes of the difference waves between the ME and E tasks. The main effect of Priming did not, however, reach significance (p = .10), possibly because of low statistical power. Furthermore, no interaction effects between Prime–Effect Condition and Anterior–Posterior Electrode Position or Laterality were observed (ps > .15).
Self-specific Attenuation of Visual Effects: A Sensorimotor Mechanism Underlying the Sense of Agency
The present study aimed to measure sensory attenuation in the visual ERP as a sensorimotor and prereflective marker capturing the feeling of agency. The experiment focused on the N1 wave of the visual ERP. We found the N1 component to be smaller in response to visual effects that were caused by the subjects' actions as compared with the same effects that were externally caused and passively observed by the subjects. Thus, our results indicate specific sensory attenuation in the processing of self-generated visual events. In humans, sensory attenuation has mainly been shown in the auditory but also in the tactile modality, both at the subjective perceptual level (Sato, 2009; Blakemore et al., 1999) and at the neurophysiological level (Bäss et al., 2008; Martikainen et al., 2005; Houde et al., 2002; Curio et al., 2000). This is the first ERP study characterizing the time course of sensory attenuation in the visual modality, taking advantage of the high temporal resolution of EEG.
It is important to note that the N1 component cannot be compared directly across modalities, because its amplitude, latency, and topography differ as a function of stimulus modality. In the auditory domain, for example, stimuli usually elicit larger N1 amplitudes with shorter latencies than in the visual domain, in which the N1 component can be further subdivided into at least two distinct subcomponents (Vogel & Luck, 2000). Hence, task-dependent (self, other) modulation of the visual N1 amplitude is likely to be smaller than in studies using auditory stimuli. Indeed, in our experiment, the effect on the visual N1 amplitude was less pronounced than the suppression effects reported in the literature measuring the auditory N1 (Bäss et al., 2008; Heinks-Maldonado, Mathalon, Gray, & Ford, 2005; Ford & Mathalon, 2004). Future studies that directly compare attenuation effects across modalities are needed.
According to theories of motor control, the availability of efferent information in the case of self-generated effects allows a forward model to make precise predictions about the upcoming action effect (Wolpert et al., 1995): A match between the predicted and actual effect leads to a cancellation of the afferent information (Blakemore et al., 1999). In contrast, no efferent information is available when the effect is externally generated; prediction of the effect is therefore less precise and cannot be used for cancellation. These predictive processes enable the brain to differentiate, already at an early stage of sensory processing, between external effects that the organism causes and those it does not. Our study suggests that sensory attenuation is not only a mechanism to optimize motor control but that it also contributes to action perception, specifically to the attribution of action and thus the experience of agency.
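The cancellation logic can be illustrated with a toy comparator. This is a didactic sketch of the match/mismatch idea only, not the forward model of Wolpert et al. (1995); all values and the residual measure are hypothetical.

```python
# Toy comparator sketch: when an efference-copy-based prediction matches the
# actual sensory effect, the residual signal (a stand-in for the attenuated
# sensory response) is small; without efferent information, the prediction
# is uninformative and the residual stays large. Purely illustrative.
import numpy as np

def comparator_residual(predicted, actual):
    """Residual sensory signal after subtracting the forward-model prediction."""
    return float(np.abs(np.asarray(actual) - np.asarray(predicted)).sum())

actual_effect = [1.0, 0.0]  # a coded visual action effect (hypothetical)

# Self-generated: efference copy allows an accurate prediction -> cancellation.
self_resp = comparator_residual(predicted=[1.0, 0.0], actual=actual_effect)

# Externally generated: no efference copy, so the prediction is flat/uninformative.
ext_resp = comparator_residual(predicted=[0.5, 0.5], actual=actual_effect)

print(self_resp, ext_resp)  # the self-generated residual is the smaller of the two
```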
It has been shown that the magnitude of the visual N1 component increases when attention is directed to the location of a stimulus, which suggests that spatial attention leads to selective amplification of sensory information flow in visual pathways (Hillyard, Vogel, & Luck, 1998; Mangun, 1995). In our study, we compared brain responses to identical visual events that were either passively observed or actively produced. It may be argued that differences in attentional processes between experimental conditions, in terms of a higher level of general attention in the active response task (ME task), caused the modulation of N1 amplitudes. However, if this were the case, one would expect an N1 effect in the opposite direction, that is, an increased N1 component in the response task as compared with the observation condition (E task). To keep the degree of task and attentional involvement at a similar level, subjects were required to perform causality judgments in both tasks. Thus, they directed their attention to the effect stimuli in either case by judging the causal relation between the effect stimulus and the preceding target or response, respectively. Because responses were always determined by the target stimulus, similar cognitive operations were involved in both judgment tasks. Furthermore, in a postexperiment questionnaire, subjects indicated that they did not perceive a difference in difficulty between the two tasks.
Nevertheless, although task involvement was comparable, it could be argued that the mere presence of an action influenced the amount of attention available for the subsequent visual event, thereby leading to an N1 reduction independent of any predictive mechanism. We do not believe, however, that our finding of self-specific N1 attenuation simply reflects a nonspecific reduction in attention, because our second experimental manipulation, priming, demonstrates that attenuation can arise from top–down expectations not associated with motor preparation and execution, as indicated by the significant main effect of Prime–Effect Congruency. Taken together, this early perceptual discrimination of the self from the nonself, as reflected in N1 attenuation to self-generated effects, evidently informs a basic action representation, which provides the basis for the attribution of events to our own actions (i.e., the experience of agency).
Prior Effect Representations Contribute to the Judgment of Agency
The judgment of agency has been assumed to depend on effect anticipations and their consistency with the actual effect (cf. Wegner, 2002). In support of this, we found that agency judgments were enhanced when a prime stimulus presented before the action was congruent with the action effect, as compared with incongruent or unrelated primes, which is consistent with previous studies (Sato, 2009; Linser & Goschke, 2007; Aarts et al., 2005). We further showed that this modulation of the sense of agency occurred even though prior effect information remained at an unconscious level and the subject's action was predetermined. These findings suggest that processes underlying the experience of agency do not necessarily become conscious. Environmental cues can be used to establish a sense of agency without entering conscious awareness.
It may seem counterintuitive that subjects experienced a sense of agency in a situation in which they could not freely choose the action. Importantly, we here investigated the agency experience for external events (and not for actions per se), which should not depend on the degree of freedom of actions and choice. For example, if a sound occurs after one is forced to press the light switch, the fact that the action was not chosen by oneself has no informative value concerning the causal relation between the button press and the sound. On the other hand, it has been argued that the formation of action–effect associations (i.e., ideomotor learning) can occur only in the case of voluntary actions (Herwig, Prinz, & Waszak, 2007), although findings diverge (Elsner & Hommel, 2004). In line with the ideomotor principle, in the response priming task, which served as our control task, we showed that the former action effects, which served as target stimuli, influenced both speed and accuracy of action selection despite the fact that actions were previously exogenously driven. Moreover, as Wegner and colleagues demonstrated, priming of action effects induces a sense of agency even in the absence of an action or in situations in which the event is completely uncontrollable (Wegner, Sparrow, & Winerman, 2004; Wegner & Wheatley, 1999). That is, action effect primes can mimic voluntary actions in that they activate a representation of the action consequence before the action, which can then be used to infer agency.
Our results further show that priming did not influence the perception of causality between two external sensory events that were unrelated to any action in the passive observation task. Interestingly, the sensory input and the actual causal relation did not differ between the active response (ME) task and the observation (E) task: In both tasks, the effect stimuli were determined to the same degree (75%) by the preceding target stimuli. The two tasks differed only in terms of whether the appearance of the effect stimulus depended on an action at all or was externally controlled by the computer. Indeed, some subjects even focused directly on the relationship between the effect and the preceding target in the response task, as they were aware of the fact that responses could not be chosen freely but were determined by the target. Despite these similarities between the two tasks, our findings indicate that unconscious representations of upcoming external effects influence causality judgments only when the effects are produced by an action. That is, although it has been argued that an individual's prior thoughts can also induce feelings of agency for effects generated by others even when the individual was not active (Wegner et al., 2004), this does not seem to transfer to the perception of causality for events in which no actor is involved at all.
The perception of causality in general relies upon observation of spatio-temporal correlations between at least two events to ascribe cause and effect. Agency, as a special case of causality, implies there is an actor, namely, the perceiving subject him or herself, as the cause of an external event. To ascribe cause and effect in the case of agency, the subject's action itself is the focus of evaluation and has to be related to the observed effect in terms of spatio-temporal correlations. Prior representations of upcoming action effects are essential for action selection, on the one hand, but also for action monitoring, on the other, by enabling a comparison between the goal of the action and the actual action effect. It is thought that priming influences these prior representations of action consequences (Aarts et al., 2005) and, thereby, modulates the outcome of the action–effect evaluation. In contrast, the content of prior effect representations has not been assigned a central role in the context of perceived causality between external events.
Prior Effect Representations Modulate the Feeling of Agency
Prior effect representations are anticipatory or intentional states that serve as cognitive cues to the judgment of agency. However, their impact on the sensorimotor level, that is, on the feeling of agency, is still unknown. We expected that, analogous to efference-based predictions, prime-induced effect anticipations would also influence the N1 amplitude. Indeed, our results showed that the N1 amplitude following self-generated action effects was modulated by the congruency between prime and effect: N1 amplitudes were reduced when the relation between prime and effect was congruent as compared with an incongruent or neutral prime stimulus. These findings are generally in line with a previous neuroimaging study investigating the effect of unconscious semantic priming on brain responses to subsequent visible target words (Dehaene et al., 2001), which reported reduced brain activation to target stimuli in extrastriate, fusiform, and precentral regions depending on the congruency between unconscious prime stimuli and target stimuli.
Furthermore, it has been demonstrated that this stimulus-specific repetition suppression phenomenon is a consequence of top–down expectations rather than of automatic bottom–up perceptual repetition effects (Summerfield, Trittschuh, Monti, Mesulam, & Egner, 2008). Thus, our results indicate that effect anticipation, an important cognitive agency indicator at the level of explicit agency judgments, also serves as a cue for the sensorimotor representation of agency (i.e., the feeling of agency). It is important to note, however, that there was no significant impact of priming on the N1 amplitude difference between self- and externally generated action effects, although priming affected only the N1 following self-generated effects and not the N1 following externally generated effects. The lack of a statistically significant influence probably reflects low statistical power, which increases the possibility of a Type II error. Alternatively, prior information (induced by priming) might also have a general effect on the processing of sensory events regardless of the source of the event; this effect, however, might have been too weak to manifest itself statistically in the present experimental paradigm.
Further studies are needed that extend the present paradigm to other sensory modalities. Studies that investigated sensory attenuation in the auditory modality, for example, have reported larger and apparently more robust effects (Bäss et al., 2008; Heinks-Maldonado et al., 2005; Martikainen et al., 2005; Ford & Mathalon, 2004). On the other hand, effect anticipation induced by priming is only one of many agency cues that are combined to form a robust agency representation. Hence, despite its evident impact on measures of agency, its influence is limited and complemented by other factors such as proprioceptive and motor signals as well as contextual cues and the self-concept. According to recent views (Synofzik, Vosgerau, & Lindner, 2009; Synofzik et al., 2008), the sense of agency depends on an optimal integration and combination of a wide variety of internal and external cues at different representational levels.
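The optimal-integration idea can be illustrated by inverse-variance (reliability-weighted) cue combination, a standard scheme in cue-integration models. The cue values and variances below are purely hypothetical, and this sketch is not the specific model proposed by Synofzik and colleagues.

```python
# Illustrative sketch of reliability-weighted cue combination: each agency
# cue contributes in proportion to its precision (inverse variance), and the
# combined estimate is more reliable than any single cue. Values are
# hypothetical placeholders.
import numpy as np

def combine_cues(means, variances):
    """Inverse-variance-weighted combination of independent cues."""
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = precisions / precisions.sum()
    combined_mean = float(np.dot(weights, means))
    combined_var = float(1.0 / precisions.sum())
    return combined_mean, combined_var

# Hypothetical example: an efference-based cue and a prime-based cue, each an
# agency estimate (0 = external cause, 1 = self cause) with its own reliability.
estimate, combined_var = combine_cues(means=[0.9, 0.6], variances=[0.04, 0.16])
print(estimate, combined_var)  # the estimate is pulled toward the more reliable cue
```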
Our study supports this perspective and demonstrates a new approach to the sense of agency by targeting different levels of agency processing within one experimental paradigm using implicit and explicit measures at the same time. However, the interaction of the mechanisms underlying these implicit and explicit measures (i.e., the relation between prereflective and reflective aspects of the sense of agency) still needs to be explored. On the basis of observed dissociations between action awareness and automatic action control, it has, for instance, even been argued that the conscious experience of action cannot depend on endogenous signals used for motor control because they are poorly accessible to consciousness (Fourneret & Jeannerod, 1998; Georgieff & Jeannerod, 1998). However, in the present study, we demonstrated that subliminal information about an action outcome can have an impact not only on the conscious experience of action but also on automatic, unconscious processes of motor control and sensory gating. To do justice to the complex phenomenology of agency, future studies are needed to further explore the relative weighting by which different conscious and unconscious cognitive, sensory, and motor-related signals are integrated at different perceptual stages.
In conclusion, we here show that both subliminal effect anticipation induced by priming and efference-based prediction seem to have similar effects at early stages of sensory processing of an action consequence. Whereas the comparator and inference models emphasize distinct agency indicators, with these being motor-related information and prior thoughts, respectively, both seem to influence the sense of agency not only at the level of conscious judgments but also at the level of nonconceptual feeling. That is, the comparator mechanism which matches expected and actual action consequences can be fed by different agency cues derived from internal or external sources, and the outcome of this comparison is already reflected at the level of immediate perceptual processing. The resulting attenuation of brain responses to sensory input in the case of agreement between expectation and actual state is accompanied by a feeling of action completion and control and, thereby, contributes to the conscious experience of being the agent.
This work was supported by the Max Planck Society and the Berlin School of Mind and Brain. We would like to thank Jan Bergmann and Jeanine Auerswald for technical assistance and help with data collection and Rosie Wallis for editing this article.
Reprint requests should be sent to Antje Gentsch, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, D-04103 Leipzig, Germany, or via e-mail: email@example.com.