Abstract
The reduction of N1/P2 amplitudes in the EEG for self-generated compared with external tones, which has recently also been described for action observation, is an example of so-called sensory attenuation. Whether this effect depends on motor-based or on general predictive mechanisms is unclear. Using a paradigm in which actions (button presses) elicited tones in only half the trials, this study examined how tone processing is modulated by the trial-by-trial prediction error during self-performed compared with observed actions. In addition, we considered the effect of temporal predictability by adding a third condition in which visual cues were followed by external tones in half the trials. The patterns of attenuation differed for N1 and P2 amplitudes, but neither component showed an attenuation effect beyond temporal predictability. Interestingly, we found that both N1 and P2 amplitudes reflected prediction errors derived from a reinforcement learning model, in that larger errors coincided with larger amplitudes. This effect was stronger for tones following button presses than for cued external tones, but only for self-performed and not for observed actions. Taken together, our results suggest that attenuation effects are partially driven by general predictive mechanisms, irrespective of whether the action is self-performed. However, the stronger prediction-error effects for self-generated tones suggest that distinct motor-related factors beyond temporal predictability, potentially linked to reinforcement learning, play a role in the underlying mechanisms. Further research is needed to validate these initial findings, as the calculation of the prediction errors was constrained by the design of the experiment.
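The abstract does not specify the reinforcement learning model used; a minimal sketch of one common formulation, assuming a Rescorla-Wagner-style delta rule with binary outcomes and a learning rate $\alpha$ (both assumptions, not stated in the abstract), is:

\[
\delta_t = o_t - V_t, \qquad V_{t+1} = V_t + \alpha\,\delta_t,
\]

where $o_t \in \{0, 1\}$ codes whether a tone occurred on trial $t$ and $V_t$ is the learned expectation of a tone. Under this reading, on tone trials $\delta_t = 1 - V_t$, so a less expected tone yields a larger prediction error, and the reported finding would correspond to larger $\delta_t$ coinciding with larger N1/P2 amplitudes.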