Abstract

The music we usually listen to in everyday life consists of either single melodies or harmonized melodies (i.e., melodies "accompanied" by chords). However, differences between the neural mechanisms underlying melodic and harmonic processing have remained largely unknown. Using EEG, this study compared effects of music-syntactic processing between chords and melodies. In melody blocks, sequences consisted of five tones, the final tone being either regular or irregular (p = .5). Analogously, in chord blocks, sequences consisted of five chords, the final chord function being either regular or irregular. Melodies were derived from the top voice of the chord sequences, allowing a direct comparison between melodic and harmonic processing. Music-syntactic incongruities elicited an early anterior negativity with a latency of approximately 125 msec in both the melody and the chord conditions. In the chord condition, but not in the melody condition, this effect was followed by an additional negative effect that peaked at approximately 180 msec. Both effects were maximal at frontal electrodes, but the later effect was more broadly distributed over the scalp than the earlier one. These findings indicate that melodic information (which is also contained in the top voice of chords) is processed earlier, and with partly different neural mechanisms, than the harmonic information of chords.