Many models of spoken word recognition posit that the acoustic stream is parsed into phoneme-level units, which in turn activate larger representations [McClelland, J. L., & Elman, J. L. The TRACE model of speech perception. Cognitive Psychology, 18, 1–86, 1986], whereas others suggest that larger units of analysis are activated without the need for segmental mediation [Greenberg, S. A multitier theoretical framework for understanding spoken language. In S. Greenberg & W. A. Ainsworth (Eds.), Listening to speech: An auditory perspective (pp. 411–433). Mahwah, NJ: Erlbaum, 2005; Klatt, D. H. Speech perception: A model of acoustic-phonetic analysis and lexical access. Journal of Phonetics, 7, 279–312, 1979; Massaro, D. W. Preperceptual images, processing time, and perceptual units in auditory perception. Psychological Review, 79, 124–145, 1972]. Identifying segmental effects in the brain's response to speech may speak to this question. For example, if such effects were localized to relatively early processing stages in auditory cortex, this would support a model of speech recognition in which segmental units are explicitly parsed out. In contrast, segmental processes occurring outside auditory cortex may indicate that alternative models should be considered. The current fMRI experiment manipulated the phonotactic frequency (PF) of words presented auditorily in short lists while participants performed a pseudoword detection task. PF is thought to modulate networks in which phoneme-level units are represented. The present experiment identified activity in the left inferior frontal gyrus that was positively correlated with PF. No effects of PF were found in temporal lobe regions. We propose that the observed phonotactic effects during speech listening reflect the strength of the association between acoustic speech patterns and articulatory speech codes involving phoneme-level units.
On the basis of existing lesion evidence, we interpret the function of this auditory–motor association as playing a role primarily in production. These findings are consistent with the view that phoneme-level units are not necessarily accessed during speech recognition.