Spoken language production involves lexical-semantic access and phonological encoding. A theoretically important question concerns the relative time course of these two cognitive processes. The predominant view has been that semantic and phonological codes are accessed in successive stages. However, recent evidence is difficult to reconcile with a sequential view and instead suggests that both types of codes are accessed in parallel. Here, we used ERPs combined with the "blocked cyclic naming paradigm," in which items overlapped either semantically or phonologically. Behaviorally, both semantic and phonological overlap caused interference relative to unrelated baseline conditions. Crucially, the ERP data demonstrated that the semantic and phonological effects emerged at a similar latency (∼180 msec after picture onset) and within a similar time window (180–380 msec). These findings suggest that access to phonological information takes place at a relatively early stage of spoken word planning, largely in parallel with semantic processing.