One of the functions of the brain is to predict the sensory consequences of our own actions. In auditory processing, self-initiated sounds evoke a smaller brain response than passive exposure to the same sound sequence. Previous work suggests that this response attenuation reflects a predictive mechanism that differentiates the sensory consequences of one's own actions from other sensory input, which seems to form the basis for the sense of agency (recognizing oneself as the agent of the movement). This study addresses the question of whether attenuation of brain responses to self-initiated sounds can be explained by brain activity involved in movement planning rather than movement execution. We recorded ERPs in response to sounds initiated by button presses. In one condition, participants moved a finger to press the button voluntarily, whereas in another condition, we initiated a similar, but involuntary, finger movement by stimulating the corresponding region of the primary motor cortex with TMS. For involuntary movements, no movement intention (and no feeling of agency) could be formed; thus, no motor plans were available to the forward model. A portion of the brain response evoked by the sounds, the N1-P2 complex, was reduced in amplitude following voluntary, self-initiated movements, but not following movements initiated by motor cortex stimulation. Our findings demonstrate that movement intention and the corresponding feeling of agency determine sensory attenuation of brain responses to self-initiated sounds. The present results support the assumptions of a predictive internal forward model operating before primary motor cortex activation.
Stimuli caused by our own actions receive special treatment in the brain. This claim is supported by the finding that self-generated stimuli are perceived to be less intense than externally generated stimuli (Blakemore, Wolpert, & Frith, 1998). Models of motor control suggest that these sensory attenuation effects indicate the successful prediction of the sensory consequences of our motor acts (Wolpert, Ghahramani, & Jordan, 1995). Specifically, those models assume that, whenever an action is performed, copies of our motor commands are routed as corollary discharges (CD) to sensory structures, and the sensory consequences resulting from the action are predicted via forward modeling. The comparator model proposes that predicted and received sensory feedback are then compared, leading to sensory attenuation in case of a match (Tsakiris & Haggard, 2005). This comparison has also been proposed as the basis for the sense of agency (Blakemore, Wolpert, & Frith, 2002; Frith, Blakemore, & Wolpert, 2000), because it enables differentiating the sensory consequences of one's own actions from other sensory input.
However, the precise neural implementation of the comparison process is unknown. Animal neurophysiology studies have established that CD circuits originate at all levels of the motor pathway and can influence the sensory processing stream at different levels in various sensory systems (Crapse & Sommer, 2008). In humans, research has mostly focused on the somatosensory modality, that is, on the processing of voluntary movements and their direct proprioceptive and tactile consequences. These studies provide converging evidence that CD signals originate upstream from the execution of the motor command in primary motor cortex (Christensen et al., 2007; Voss, Bays, Rothwell, & Wolpert, 2007; Haggard & Whitford, 2004). Thus, when body movements are involuntary, no sensory attenuation occurs (Haggard & Whitford, 2004; Chronicle & Glover, 2003). A similar picture emerges for the sense of agency, which seems to be driven by a match between experienced motor intentions, formed in premotor and parietal cortex, and the achieved goals (Haggard, 2005). Thus, studies focusing on voluntary movements and their proprioceptive feedback indicate that the CD signals necessary to recognize oneself as the agent of the movement, and for the movement's feedback to be processed as self-generated, are issued during movement planning rather than upon movement execution.
If a universal predictive mechanism underlies the sensory processing of voluntary movements (Wolpert et al., 1995), the same CD circuits might also be involved in the processing of self-generated auditory stimuli. Several studies have shown that auditory stimuli self-generated via instrumental action (i.e., sounds that are self-initiated via button press) elicit an attenuated N1-P2 complex in the auditory ERP compared with passive sound exposure (Timm, SanMiguel, Saupe, & Schröger, 2013; Knolle, Schröger, Baess, & Kotz, 2012; Baess, Horváth, Jacobsen, & Schröger, 2011; Aliu, Houde, & Nagarajan, 2009; Bäss, Jacobsen, & Schröger, 2008; Martikainen, Kaneko, & Hari, 2005; Schafer & Marcus, 1973). The attenuation of the N1-P2 complex might reflect a match in the comparator and is also used as an indicator of agency disruptions (Ford, Gray, Faustman, Roach, & Mathalon, 2007). However, the presumption that the N1-P2 attenuation reflects predictive processing is still controversial (SanMiguel, Todd, & Schröger, 2013; Synofzik, Vosgerau, & Newen, 2008; Tsakiris & Haggard, 2005). For example, recent findings show that auditory input seems to be attenuated for a short period after the motor act, even if there is no contingency between button press and sound (Horváth, 2013a, 2013b; Horváth, Maess, Baess, & Tóth, 2012). Moreover, little is known about the specific relationship between the N1-P2 attenuation to self-initiated sounds and the sense of agency (Gentsch & Schütz-Bosbach, 2011; Kühn et al., 2011). Thus, this study aims to shed further light on the processes underlying the processing of self-initiated sounds and the N1-P2 attenuation.
To this end, we use EEG to record ERPs from the human scalp in response to a sound initiated by a button press. Participants either move a finger to press the button or a similar finger movement is initiated by stimulating primary motor cortex with TMS. Thus, both voluntary and involuntary finger movements are the result of activity in the participant's motor cortex. However, TMS-evoked finger movements cannot be planned by the participant, that is, the intention to move and the corresponding feeling of agency are missing. Assuming that CD signals are sent during movement planning rather than movement execution (Haggard & Whitford, 2004; Chronicle & Glover, 2003), no CD should be available to the predictive forward model for the TMS-evoked finger movements. We expect to find an attenuated N1-P2 complex only in response to the voluntary finger movements, but not in response to the TMS-evoked movements. Thus, our study can answer the question of whether the forward model account of the N1-P2 attenuation to self-initiated sounds is appropriate.
Twenty-four healthy right-handed volunteers were recruited for the experiment. Seven participants were excluded for technical reasons (six because the TMS artifact could not be corrected and one because of a low signal-to-noise ratio of the EEG recording). The mean age of the remaining 17 participants was 24.06 years (range = 18–31 years). All participants reported normal hearing and normal or corrected-to-normal vision, had no history of hearing disorder or neurological disease, and took no medication affecting the CNS. The experimental procedures conformed to the World Medical Association's Declaration of Helsinki and were approved by the local ethics committee. All participants provided informed consent and were compensated for their participation.
During EEG recordings, participants were seated comfortably and were instructed to move as little as possible during the experiment. They were also instructed to fixate their gaze on a gray cross displayed on a black computer screen to reduce eye movements. Stimulus generation and acquisition of behavioral responses were controlled by a computer using MATLAB (The MathWorks, www.mathworks.com) and the Cogent 2000 toolbox (www.vislab.ucl.ac.uk/cogent_2000.php). Auditory stimuli were sine tones with a frequency of 1 kHz and a duration of 50 msec (including 10 msec square-cosine onset and offset ramps). Sounds were presented through ER1 insert earphones (Etymotic Research, www.etymotic.com). The intensity of the sounds was adjusted to a comfortable loudness by the participant before the experiment.
The experiment consisted of two main conditions (“voluntary” and “involuntary”) and four control conditions. All conditions involved EEG recording, and some conditions involved TMS (see respective sections below). In the voluntary condition, participants were instructed to press a piezoresistive force sensor (“button”), connected to an Arduino microcontroller board (www.arduino.cc), with their right index and middle fingers in a self-paced interval of 2.5–4.5 sec (mean = 3.5 sec). Each press initiated sound presentation after a 100-msec delay, inserted to avoid overlap between the TMS artifact and the sound-evoked responses in the EEG recordings (see detailed explanation below). In the involuntary condition, we applied a single TMS pulse (see below) to the left primary motor cortex that elicited an involuntary finger movement of the participants, leading to a button press every 2.5–4.5 sec (mean = 3.5 sec), which in turn elicited a sound 100 msec later. The TMS-induced movements were similar but of course not identical to the voluntary movements. The main difference between voluntary and involuntary movements is that voluntary movements involve brain activity in premotor areas engaged in movement planning. In contrast, involuntary movements should not involve premotor activity because TMS induces a short-lasting excitation in primary motor cortex, which in turn activates corticospinal neurons and leads to a muscle contraction (Hallett, 2007). Thus, predictive sensorimotor signals of the sensory consequences of movements should be available only for voluntary, but not for involuntary movements. Consequently, sensory attenuation effects should be observable only for voluntary movements. However, in both voluntary and involuntary conditions, sounds were always initiated by the press of the participants' finger on the button. In both conditions, the experimenter was present in the laboratory.
In the involuntary conditions, the experimenter adjusted the position of the TMS coil. In the voluntary conditions, the experimenter silently supervised the experiment in the background.
It is well known that each TMS pulse induces an ERP, which mainly reflects local cortical activity in the stimulated primary motor cortex (Siebner & Ziemann, 2007). Moreover, the abrupt electromagnetic forces in the stimulating coil produce a short click every time a single TMS pulse is delivered (Counter & Borg, 1992), which evokes auditory responses in the EEG. It has been shown that the TMS coil click can affect processing of simultaneously presented auditory stimuli (Tiitinen et al., 1999). We controlled for this possible confound by introducing an artificial temporal delay of 100 msec between button presses and sound presentation. In both voluntary and involuntary conditions, the temporal delay between button press and onset of the self-initiated sound was identical. In this study, TMS pulses to primary motor cortex elicited finger movements with a latency of 60–110 msec (mean latency = 85.7 msec, SD = 24.38 msec); thus, after inserting the additional 100 msec, the temporal delay between TMS pulses and the onset of self-initiated sounds, elicited through the button press of the participants' finger, was around 185 msec.
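The timing arithmetic above can be made explicit in a brief sketch (shown in Python purely for illustration; the study itself used MATLAB and Cogent 2000, and the variable names here are ours):

```python
# Trial timing in the involuntary condition, using the latencies reported
# in the text: a TMS pulse evokes the finger movement after 60-110 msec
# (mean 85.7 msec), and the button press triggers the sound after a
# fixed, artificially inserted 100-msec delay.
tms_to_press_ms = 85.7      # mean TMS-to-movement latency (msec)
press_to_sound_ms = 100.0   # inserted press-to-sound delay (msec)
tms_to_sound_ms = tms_to_press_ms + press_to_sound_ms
print(f"mean TMS-to-sound delay: {tms_to_sound_ms:.1f} msec")  # ~185 msec
```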
To quantify attenuation of brain responses to sounds elicited by button presses relative to passive exposure to the same sounds, we added an “auditory-only” control to both the voluntary and involuntary conditions, in which we measured EEG responses to the sounds alone, without preceding finger movements. This was achieved by playing back the auditory stimuli of the active conditions to the passively listening participants. In the involuntary auditory-only condition, the exact sequence of TMS pulses and sounds was replayed, but we tilted the TMS coil by 90°, which does not result in motor cortex stimulation. Sensory attenuation effects were estimated by comparing self-initiated sounds to passively heard sounds. Hence, the presence of TMS pulses in the involuntary auditory-only condition allowed us to control for any possible influences of TMS clicks on the processing of the subsequent auditory stimuli in this comparison. Sensory attenuation effects for self-initiated sounds were identified only within conditions, that is, differences between sound-evoked responses to self-initiated sounds and sounds that were played back passively were analyzed separately for the voluntary and involuntary conditions (see below). Consequently, inserting TMS pulses in the voluntary condition was not necessary to control for this issue.
To account for motor activity in the EEG recordings, we added a further “motor-only” control to both the voluntary and involuntary conditions. In the motor-only voluntary condition, participants pressed the button in the same self-paced interval as in the voluntary condition, but no sounds were played. In the motor-only involuntary condition, TMS pulses were applied to elicit button presses every 2.5–4.5 sec (mean = 3.5 sec), but again, no sounds were played.
Each of the six conditions was presented in four blocks of 45 trials (180 trials per condition). With 1080 trials (6 conditions × 180 trials) at an average duration of 3.5 sec, the experiment took approximately 1 hr, excluding participant preparation and breaks. Blocks for voluntary and involuntary conditions were always followed by the respective auditory-only and motor-only blocks. Apart from that constraint, the order of the voluntary and involuntary conditions was counterbalanced across participants.
Before the main experiment, participants performed a short training session of the voluntary condition and the motor-only voluntary condition to get accustomed to the procedures and to improve their ability to produce button presses within intervals of 2.5–4.5 sec. After each press during training, participants were shown the time elapsed since the previous button press. At the end of each training block (20 trials), participants were shown the number of produced intervals that were above and below the required range. Furthermore, participants were familiarized with the involuntary condition and the TMS procedure. While a single TMS pulse was applied to the left primary motor cortex to elicit an involuntary finger movement, participants were instructed to relax their right hand and to fixate the cross on the screen (Figure 1).
TMS was applied with a Rapid2 system with a hand-held 70-mm figure-eight coil (Magstim, www.magstim.com). A Brainsight 2 neuronavigation system (Rogue Research, www.rogue-research.com) was used to aid in localizing and verifying the TMS target position. We registered an MRI of a template head to the head of each participant. The neuronavigation system tracked the relative positions of the TMS coil and the participant's head during the experiment and displayed anatomical locations on the template brain corresponding to the current coil position. The approximate location of the left primary motor cortex was identified on the template brain. The position of the coil was then adjusted so that a TMS pulse produced a motor potential in the right first dorsal interosseus muscle. This muscle flexes the index finger and is involved in the voluntary finger movement that participants executed when pressing the button. Muscle activity was measured with an EMG system integrated with the TMS apparatus. The intensity of the TMS stimulation during the experiment was set to 110% of the smallest intensity that produced a motor potential and a visible finger movement. A trigger was generated whenever the force measured by the pad deviated by a set amount from the reference value, which was defined as the weight of the relaxed finger on the pad and was constant across conditions. Movements large enough to register as button presses were elicited in 81% (SD = 14.27%) of involuntary trials. The remaining 19% of trials failed for one of two reasons: either the coil missed the target spot in primary motor cortex, so that no finger movement was elicited, or the elicited movement was not large enough. Participants were instructed to keep their hand relaxed during TMS stimulation to avoid possible corrections of button presses that were too soft.
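The threshold-based trigger logic described above can be sketched as follows (a hypothetical Python illustration; the function and parameter names are ours, and the actual implementation ran on the Arduino-connected force sensor together with MATLAB):

```python
def detect_presses(samples, baseline, threshold):
    """Return sample indices at which a button-press trigger would fire.

    A trigger is generated when the measured force deviates from the
    reference value (the weight of the relaxed finger) by at least
    `threshold`. A simple hysteresis re-arms detection only after the
    force returns near baseline, so one press yields one trigger.
    Hypothetical sketch, not the study's actual firmware.
    """
    triggers = []
    armed = True
    for i, force in enumerate(samples):
        deviation = abs(force - baseline)
        if armed and deviation >= threshold:
            triggers.append(i)   # trigger fires; sound follows 100 msec later
            armed = False
        elif not armed and deviation < threshold / 2:
            armed = True         # force back near baseline: re-arm
    return triggers
```

The hysteresis (re-arming only below half the threshold) is one common way to avoid multiple triggers from a single, slightly wavering press.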
EEG activity was recorded continuously throughout the experiment with a SynAmps2 amplifier (Neuroscan, www.neuroscan.com) and TMS-compatible sintered Ag/AgCl electrodes from 64 positions on the scalp, including the left and right mastoid (M1, M2). In addition, a ground electrode was placed on the head, and a reference electrode was placed on the tip of the nose. Eye movements were monitored with bipolar recordings from electrodes placed above and below the left eye (vertical EOG) and lateral to the outer canthi of both eyes (horizontal EOG). EEG and EOG signals were sampled at 2000 Hz.
Epochs of 3-sec duration, starting 1.5 sec before the onset of the sound stimuli, were extracted from the raw EEG data. A linear trend was removed from each epoch, and power line noise was removed by rejecting the 60 Hz bin from the epoch's spectrum using a discrete Fourier transform. Electrical artifacts caused by the TMS pulses were removed from the EEG data using spline interpolation as described by Thut and colleagues (2011). Epochs were resampled at 512 Hz. We applied a second-order two-way 1-Hz Butterworth high-pass filter and a 16th-order two-way 25-Hz Butterworth low-pass filter to the epochs. The data were visually inspected, and epochs with excessive EOG, movement, or other artifacts were removed. Epochs containing button presses outside the required interval range (see above) were also removed. Epochs were then shortened to 600 msec duration, starting 300 msec before the onset of the sound stimulus. Epochs were averaged separately for each experimental condition and participant.
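The per-epoch pipeline described above can be sketched as follows (an illustrative Python/SciPy version under assumed conventions; the original analysis was performed in MATLAB, and the TMS-artifact spline interpolation step is omitted here):

```python
import numpy as np
from scipy.signal import detrend, butter, sosfiltfilt, resample_poly

def preprocess_epoch(epoch, fs=2000, fs_out=512, line_hz=60):
    """Sketch of the per-epoch preprocessing described in the text.

    Steps: linear detrend, rejection of the line-noise bin from the DFT
    spectrum, resampling to 512 Hz, then zero-phase ("two-way") Butterworth
    high-pass (1 Hz, 2nd order) and low-pass (25 Hz, 16th order) filters.
    (TMS artifact interpolation, done via splines in the study, is omitted.)
    """
    x = detrend(np.asarray(epoch, dtype=float))      # remove linear trend
    # Reject the line-noise bin from the epoch's spectrum
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[np.argmin(np.abs(freqs - line_hz))] = 0.0
    x = np.fft.irfft(spec, n=len(x))
    # Resample 2000 Hz -> 512 Hz (ratio 32/125)
    x = resample_poly(x, 32, 125)
    # Zero-phase Butterworth filtering (filtfilt-style, forward + backward)
    hp = butter(2, 1.0, btype="highpass", fs=fs_out, output="sos")
    lp = butter(16, 25.0, btype="lowpass", fs=fs_out, output="sos")
    return sosfiltfilt(lp, sosfiltfilt(hp, x))
```

Note that "two-way" filtering doubles the effective filter order; the sketch takes the stated orders as the design orders before the forward-backward pass.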
To isolate sound-evoked brain activity from motor activity associated with the finger movements, we subtracted the respective motor-only conditions from the voluntary and involuntary conditions. The resulting responses were then compared with responses in the respective auditory-only conditions. In this comparison, we focused on the amplitudes of the N1 and P2 components of the evoked response. We defined the amplitude of the N1 component as the minimum of the response waveform in a latency window of 70–140 msec after sound onset and the amplitude of the P2 component as the maximum of the response waveform in a latency window of 135–265 msec after sound onset. We computed the difference between the P2 and N1 amplitudes (“peak-to-peak amplitude”) and performed a repeated-measures ANOVA with the factors Agency (voluntary vs. involuntary) and Task (active vs. passive) on the mean peak-to-peak amplitude of the frontocentral electrodes F3, Fz, F4, FCz, FC3, FC4, Cz, C3, and C4. Post hoc tests were conducted to clarify the origin of significant interactions. Greenhouse–Geisser correction was applied when appropriate. We used the peak-to-peak analysis to minimize potential influences of the TMS artifact and to increase the signal-to-noise ratio compared with a single-component analysis. The downside of this procedure is that attenuation effects on the N1 and P2 components cannot be dissociated. Although some studies have found differential attenuation effects on these two components (e.g., Knolle et al., 2012; Sowman, Kuusik, & Johnson, 2012), effects on the N1 and P2 in common attenuation paradigms mostly go together (e.g., Horváth et al., 2012; Schafer & Marcus, 1973).
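As a concrete illustration, the peak-to-peak measure can be computed as follows (a hypothetical Python helper; the latency windows are the ones reported above, and the epoch is assumed to start 300 msec before sound onset, as in the shortened epochs):

```python
import numpy as np

def n1_p2_peak_to_peak(erp, fs=512, t0=-0.3):
    """Peak-to-peak N1-P2 amplitude for one averaged waveform.

    erp : 1-D averaged ERP waveform (single electrode or electrode mean).
    fs  : sampling rate in Hz; t0: epoch start relative to sound onset (s).
    N1 = minimum in 70-140 msec; P2 = maximum in 135-265 msec;
    peak-to-peak = P2 - N1. Illustrative helper, not the study's code.
    """
    t = t0 + np.arange(len(erp)) / fs                 # time axis (s)
    n1 = erp[(t >= 0.070) & (t <= 0.140)].min()       # N1 amplitude
    p2 = erp[(t >= 0.135) & (t <= 0.265)].max()       # P2 amplitude
    return p2 - n1
```

In practice, this value would be averaged over the nine frontocentral electrodes per participant and condition before entering the Agency × Task ANOVA.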
In Figure 2, original grand-averaged ERP waveforms at electrode Cz elicited by passive sound exposure (auditory-only) and self-initiated sounds (motor-auditory) as well as motor activity (motor-only) are shown separately for the voluntary (Figure 2A) and involuntary conditions (Figure 2B). The ERP waveform in response to the self-initiated sounds shows a negative deflection in the typical N1 latency range and a positive deflection in the typical P2 latency range.
For further analysis, evoked responses to passive sound exposure were compared with evoked responses to motor-corrected self-initiated sounds within each condition (see Figure 3A and B). The analysis revealed a significant difference between the sound-evoked responses in the voluntary condition (in which participants initiated a finger movement to press a button) and the involuntary condition (in which the movement was initiated by TMS; significant main effect of Agency on peak-to-peak amplitude of the N1-P2 complex, F(1, 16) = 21.90, p < .001). Furthermore, no differences in the sound-evoked responses were observed between the active condition (in which the sound was initiated by the participants' button press) and the passive condition (in which sounds were played back passively; no significant main effect of Task on peak-to-peak amplitude of the N1-P2 complex, F(1, 16) = 0.52, ns). However, a significant interaction of Agency and Task was found (F(1, 16) = 7.53, p = .014). Post hoc tests revealed stronger response attenuation, that is, smaller peak-to-peak amplitudes for self-initiated sounds than for passive sound exposure, in the voluntary condition (see Figure 3A, top) than in the involuntary condition (see Figure 3B, top; t(16) = 2.28, p = .037). This is also apparent in the topographical scalp distributions of the separate N1 and P2 components of each condition (see Figure 3A and B, bottom). For passive sound exposure in both the voluntary and involuntary conditions, the N1 component shows a typical negative-going, frontocentral scalp distribution, and the P2 component shows a typical positive-going, somewhat more central distribution. However, in the voluntary condition, a clear modulation of the N1 and P2 components is observable for self-initiated sounds. This self-initiation effect is reflected in the difference wave (passive minus active; see Figure 3A, bottom).
In contrast, in the involuntary condition, the N1 and P2 components do not show a modulation for self-initiated sounds. The absence of the self-initiation effect in the involuntary condition is also supported by the difference wave (see Figure 3B, bottom).
This study aimed to determine whether attenuation of brain responses to self-initiated sounds can be explained by brain activity involved in movement planning rather than movement execution. We recorded ERPs in response to a sound initiated by a button press. Sounds were initiated either by voluntary finger movements made by the participants or by similar, but involuntary, movements induced by stimulating primary motor cortex with TMS. We hypothesized that CD signals involved in the processing of self-initiated sounds are sent during movement planning, rather than movement execution. Thus, an attenuation of the sound-evoked N1-P2 complex was expected only for voluntary movements, but not for involuntary movements, because no CD signals should be available to the predictive forward model during involuntary movements.
As expected, our results revealed an attenuated auditory N1-P2 complex to self-initiated sounds following voluntary finger movements. This finding strengthens previous electrophysiological research investigating self-initiation effects in the auditory modality (Timm et al., 2013; Knolle et al., 2012; Baess et al., 2011; Aliu et al., 2009; Bäss et al., 2008; Martikainen et al., 2005; Schafer & Marcus, 1973). Furthermore, our results are in line with behavioral findings showing sensory attenuation to self-initiated sounds (Desantis, Weiss, Schütz-Bosbach, & Waszak, 2012; Weiss, Herwig, & Schütz-Bosbach, 2011a, 2011b; Sato, 2008, 2009). Our main experimental manipulation showed that, if the finger movement that initiated the sound was caused by motor cortex stimulation, no attenuation of the N1-P2 complex to self-initiated sounds was detectable. That is, the auditory self-initiation effect was abolished when the movement was not planned by the participants. These results demonstrate that the intention to move determines sensory attenuation of self-initiated sounds and that activity in primary motor cortex is insufficient to drive the attenuation. Thus, we provide direct evidence that the CD circuits that are engaged in the processing of self-initiated sounds do not involve the primary motor cortex where the motor command is executed. Our results are in agreement with previous studies in the somatosensory modality (Christensen et al., 2007; Voss et al., 2007; Haggard & Whitford, 2004) that found no sensory attenuation for involuntary body movements, irrespective of whether movements were artificially induced via peripheral (muscle) or central (single pulse TMS to motor cortex) stimulation. Moreover, it has been shown that self-generation effects such as sensory attenuation are disrupted when repetitive TMS is applied over areas before motor cortex (Moore, Ruge, Wenke, Rothwell, & Haggard, 2010; Haggard & Whitford, 2004; Haggard & Magno, 1999). 
Conversely, there is some evidence that motor planning (Voss, Ingram, Haggard, & Wolpert, 2006) and anticipated movement (Voss, Ingram, Wolpert, & Haggard, 2008), without actual movement execution, may lead to sensory attenuation effects. Our findings show that the same mechanism seems to hold in the auditory modality and thus support the notion of a universal predictive mechanism for sensory processing of voluntary movements that operates before the activation of the primary motor cortex (Crapse & Sommer, 2008; Wolpert et al., 1995). However, because TMS-induced movements were not fully identical to voluntary movements in this study, it cannot be ruled out entirely that the observed effects were influenced by differences between the two types of movement. For example, the observed effects might arise because TMS-induced movements are less specific and more transient than voluntary movements, and therefore no sensory attenuation is observed for sounds initiated by TMS-induced involuntary movements.
Converging evidence suggests that the experience of conscious motor intention and the associated sense of agency arise mainly from motor preparation in premotor and parietal cortex (Haggard, 2005). This hypothesis is supported by findings showing that cortical electrical stimulation of parietal brain regions can generate feelings of intending to move and even the conviction of having executed the movement (Desmurget et al., 2009). In line with this, Desmurget and Sirigu (2009) proposed a parietal-premotor network for movement intention, suggesting that CD signals are emitted through forward modeling within the parietal cortex and that these signals are the basis of motor awareness. In agreement with this proposal, our findings provide evidence for a direct relationship between the N1-P2 attenuation effect for self-initiated sounds and the sense of agency. We observed an attenuated N1-P2 complex only for intended movements, that is, when participants experienced agency. Thus, the N1-P2 attenuation effect seems to reflect a sense of self in action, which allows us to recognize whether an external event was linked to our own movement. Our results support previous studies interpreting a lack of N1-P2 attenuation as an indicator of agency disruptions (Gentsch & Schütz-Bosbach, 2011; Kühn et al., 2011).
Importantly, our results contradict previous nonpredictive accounts of attenuation of self-generated sensory events (Horváth, 2013a, 2013b; Horváth et al., 2012; Synofzik et al., 2008; Tsakiris & Haggard, 2005). Those models propose that at least part of the sensory attenuation effect may be the basis for the initial formation of contingent associations between motor and sensory events. Thus, sensory attenuation effects would be rather unspecific: Any sound in the temporal vicinity of the motor act would receive attenuated processing, not indicating a specific motor-sensory prediction. Motor-sensory prediction would be formed only in a later step, once contingency can be extrapolated from repeated pairing. For example, Horváth and colleagues (2012) previously suggested that sensory attenuation for self-initiated sounds reflects coincidence detection between button press and sound. However, the present data argue against this hypothesis. That is, although button press and sound were coincident in both voluntary and involuntary movements, no attenuation of the N1-P2 complex for self-initiated sounds was observed for involuntary motor acts. It has also been suggested that attenuation effects may be due to attentional differences between active and passive conditions. In particular, performing an action may briefly draw attention away from auditory processing, resulting in attenuated auditory responses for sounds close to a button press (Horváth et al., 2012; Makeig, Müller, & Rockstroh, 1996). Our results demonstrate that the execution of the motor act per se is not sufficient to cause these effects. Nevertheless, it cannot be ruled out that the planning of the action draws attention away from the sounds, but the involuntary execution of the movement does not. Furthermore, it is also possible that attentional differences within the involuntary condition caused the observed absence of sensory attenuation effects.
That is, it cannot be excluded that, for TMS-induced involuntary movements, the TMS pulse and the associated click became a cue for the subsequent unfamiliar sensation of such an involuntary movement, and for this reason participants might have strongly attended to the clicks. In this case, attention would have been directed to the auditory channel when the actual sound arrived, just after the click. The response to this presumably attended sound was then compared with the equivalent response during passive listening. During passive listening, participants may not have attended to the TMS clicks as strongly given that, in this case, the TMS click was not followed by an unfamiliar involuntary movement. It is well known that attended sounds elicit higher-amplitude auditory ERPs (Hillyard, 1981; Hillyard, Hink, Schwent, & Picton, 1973). Thus, within the involuntary condition, the auditory ERP elicited by the sounds during passive listening would be smaller than the auditory ERP elicited by self-initiated sounds. This hypothetical difference in the auditory response could counteract the attenuation effect for self-initiated sounds and even eliminate it. This problem, however, would not be present in the voluntary condition, in which no TMS pulses were applied. Thus, although in our view the present results favor the explanation that sensory attenuation effects depend on intentional, voluntary actions, an attentional explanation cannot be completely ruled out.
In summary, our findings demonstrate that the sensory attenuation of brain responses to self-initiated sounds originates before primary motor cortex activation. The intention to move and the corresponding feeling of agency, rather than mere movement execution, seem to play an essential role in the attenuation of the auditory N1-P2 complex. The present results favor a predictive internal forward model account.
This study was supported by the Erasmus Mundus program of the European Union (2009-5259/003-001-EMA2), the Reinhart-Koselleck grant of the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG, Project 375/20-1), and the Natural Sciences and Engineering Research Council of Canada. It was realized using Cogent 2000 developed by the Cogent 2000 team at the FIL and the ICN and Cogent Graphics developed by John Romaya at the LON at the Wellcome Department of Imaging Neuroscience.
Reprint requests should be sent to Jana Timm, Kognitive einschließlich Biologische Psychologie, Institut für Psychologie, Universität Leipzig, Neumarkt 9-19, 04109 Leipzig, Germany, or via e-mail: email@example.com.