It remains a matter of controversy precisely what kind of neural mechanisms underlie functional asymmetries in speech processing. Whereas some studies support speech-specific circuits, others suggest that lateralization is dictated by the relative computational demands of complex auditory signals in the spectral or temporal domain. To examine how the brain processes linguistically relevant spectral and temporal information, a functional magnetic resonance imaging study was conducted using Thai speech, in which spectral processing associated with lexical tones and temporal processing associated with vowel length can be differentiated. Ten Thai and ten Chinese subjects were asked to perform discrimination judgments of pitch and timing patterns presented in the same auditory stimuli under two different conditions: speech (Thai) and nonspeech (hums). In the speech condition, tasks required judging Thai tones (T) and vowel length (VL); in the nonspeech condition, judging homologous pitch contours (P) and duration patterns (D). A final task required listening passively to nonspeech hums (L). Only the Thai group showed activation in the left inferior prefrontal cortex in speech minus nonspeech contrasts for spectral (T vs. P) and temporal (VL vs. D) cues. The Thai and Chinese groups, however, exhibited similar fronto-parietal activation patterns in nonspeech hums minus passive listening contrasts for spectral (P vs. L) and temporal (D vs. L) cues. It appears that lower-level specialization for acoustic cues in the spectral and temporal domains cannot be generalized to abstract, higher-order levels of phonological processing. Regardless of the neural mechanisms underlying low-level auditory processing, our findings clearly indicate that hemispheric specialization is sensitive to language-specific factors.