Humans can derive sequential dependencies from unfamiliar artificial speech within minutes of exposure. However, the nature of the underlying learning processes remains a matter of debate. In a widely discussed study, Peña et al. [Peña, M., Bonatti, L. L., Nespor, M., & Mehler, J. Signal-driven computations in speech processing. Science, 298, 604–607, 2002] argued that subtle acoustic cues in the signal, such as pauses, are needed to switch between two computational mechanisms, conceptualized as rule-based versus statistical. The present study approached this problem by recording event-related potentials in response to correct and incorrect phrases consisting of trisyllabic items after short exposure to either rule-based or random artificial speech streams. Rule-based streams contained dependencies of the form AXC, in which A elements reliably predicted C elements and X elements varied. Participants were exposed to four input and test phases. Two of the input streams were rule-based and contained either only probabilistic information about the distribution of the AXC stimuli or an additional acoustic cue marking the boundaries of the relevant units. The other two streams were random variations of the rule-based streams. During the test phase of the condition with pause cues, an early negativity and a later positivity emerged for both correct and incorrect items in comparison to their acoustically identical counterparts, which were presented after the random control condition. In the noncued condition, only negativities were observed. The timing and scalp distribution of the negativities differed for correct and incorrect sequences in both the cued and noncued conditions. The results support a view of grammatical learning in which both distributional and acoustic cues may contribute to different aspects of syntactic learning.
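To make the stream structure concrete, the following minimal Python sketch generates rule-based (AXC) and random control streams of the kind described above. The syllable inventory, the A-to-C pairings, the use of a space as a pause marker, and the function names are all illustrative assumptions; the abstract does not specify the actual stimuli or cue implementation.

```python
import random

# Hypothetical syllable inventory (placeholders; not the study's actual stimuli).
A_SYLLABLES = ["pu", "be", "ta"]   # frame-initial elements
C_SYLLABLES = ["ki", "ga", "du"]   # frame-final elements, paired 1:1 with A
X_SYLLABLES = ["ra", "fo", "li"]   # variable middle elements

# Fixed A_i -> C_i pairings implement the nonadjacent dependency:
# hearing a given A element reliably predicts its C element,
# regardless of the intervening X.
FRAMES = list(zip(A_SYLLABLES, C_SYLLABLES))

def rule_based_stream(n_words, pause_cue=False):
    """Concatenate AXC words; an optional pause marks unit boundaries."""
    words = []
    for _ in range(n_words):
        a, c = random.choice(FRAMES)
        x = random.choice(X_SYLLABLES)
        words.append(a + x + c)
    # In the cued condition, a subtle pause (here, a space) separates the
    # relevant units; in the noncued condition the stream is continuous.
    return (" " if pause_cue else "").join(words)

def random_stream(n_words):
    """Control stream: same syllables, but the A-C dependencies are broken."""
    syllables = A_SYLLABLES + X_SYLLABLES + C_SYLLABLES
    return "".join(random.choice(syllables) for _ in range(3 * n_words))

print(rule_based_stream(5, pause_cue=True))   # e.g., 'puraki befodu ...'
print(random_stream(5))                       # same syllables, no structure
```

In this sketch, the only difference between the cued and noncued rule-based streams is the boundary marker, mirroring the design in which the distributional information is held constant while the acoustic cue is added or withheld.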
