Steven L. Small
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2006) 18 (12): 2013–2029.
Published: 01 November 2006
Abstract
We examined whether the repeated processing of spoken sentences is accompanied by a reduced blood oxygenation level-dependent (BOLD) response (repetition suppression) in regions implicated in sentence comprehension, and whether the magnitude of such suppression depends on the task under which the sentences are comprehended or on the complexity of the sentences. We found that sentence repetition was associated with repetition suppression in temporal regions, independent of whether participants judged the sensibility of the statements or listened to the statements passively. In contrast, repetition suppression in inferior frontal regions was found only in the context of the task demanding active judgment. These results suggest that repetition suppression in temporal regions reflects facilitation of sentence comprehension processing per se, whereas in frontal regions it reflects, at least in part, easier execution of specific psycholinguistic judgments.
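The repetition suppression measure described above can be sketched as a simple contrast: the percent reduction in mean response from first to repeated presentations, computed separately per region and task. The function below is a minimal illustration; all numbers and the region/task labels are hypothetical, not data from the study.

```python
# Hypothetical sketch of a repetition suppression contrast: the percent
# drop in mean BOLD response from first to repeated presentations.
# All values and labels are illustrative, not the study's actual data.

def repetition_suppression(first, repeated):
    """Percent reduction in mean response for repeated vs. first presentations."""
    mean_first = sum(first) / len(first)
    mean_repeated = sum(repeated) / len(repeated)
    return 100.0 * (mean_first - mean_repeated) / mean_first

# Temporal region: suppression expected under both tasks.
temporal_judge = repetition_suppression([1.0, 1.2, 1.1], [0.70, 0.80, 0.75])
temporal_passive = repetition_suppression([1.0, 1.1, 1.05], [0.72, 0.80, 0.70])

# Frontal region: suppression expected only under active judgment.
frontal_judge = repetition_suppression([0.9, 1.0, 0.95], [0.60, 0.70, 0.65])
frontal_passive = repetition_suppression([0.9, 1.0, 0.95], [0.88, 1.00, 0.97])

print(f"temporal/judgment suppression: {temporal_judge:.1f}%")
print(f"frontal/passive suppression:   {frontal_passive:.1f}%")
```

With these made-up values, the temporal region shows substantial suppression under both tasks, while the frontal region shows suppression only under the judgment task, mirroring the dissociation the abstract reports.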
Journal of Cognitive Neuroscience (2004) 16 (10): 1818–1829.
Published: 01 December 2004
Abstract
Social stimuli function as emotional barometers for the immediate environment, are the catalysts for many emotional reactions, and have inherent value for relationships and survival independent of their current emotional content. We therefore propose that the neural mechanisms underlying social and emotional information processing may be interconnected. In the current study, we examined the independent and interactive effects of social and emotional processes on brain activation. Whole-brain images were acquired while participants viewed and categorized affective pictures that varied on two dimensions: emotional content (i.e., neutral, emotional) and social content (i.e., faces/people, objects/scenes). Patterns of activation were consistent with past findings demonstrating that the amygdala and part of the visual cortex were more active to emotionally evocative pictures than to neutral pictures and that the superior temporal sulcus was more active to social than to nonsocial pictures. Furthermore, activation of the superior temporal sulcus and middle occipito-temporal cortex showed evidence of the interactive processing of emotional and social information, whereas activation of the amygdala showed evidence of additive effects. These results indicate that interactive effects occur early in the stream of processing, suggesting that social and emotional information garner greater attentional resources and that the conjunction of social and emotional cues results in synergistic early processing, whereas the amygdala appears to be primarily implicated in processing biologically or personally relevant stimuli, regardless of the nature of the relevance (i.e., social, emotional, or both).
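The additive-versus-interactive distinction above comes from the 2 × 2 factorial design: an effect is additive when each factor contributes a fixed boost regardless of the other, and interactive when the combination departs from the sum of the parts. A minimal sketch, using made-up cell means (not values from the study):

```python
# Hypothetical sketch: additive vs. interactive effects in a
# 2 (emotional: neutral/emotional) x 2 (social: nonsocial/social) design.
# Cell means are illustrative, not data from the study.

def interaction(cells):
    """2x2 interaction term from a dict of cell means:
    (social, emotional) - (social, neutral)
    - (nonsocial, emotional) + (nonsocial, neutral).
    Zero means purely additive effects; nonzero means an interaction."""
    return (cells[("social", "emotional")] - cells[("social", "neutral")]
            - cells[("nonsocial", "emotional")] + cells[("nonsocial", "neutral")])

# Amygdala-like pattern: each factor adds a fixed boost -> additive.
amygdala = {("nonsocial", "neutral"): 1.0, ("nonsocial", "emotional"): 1.5,
            ("social", "neutral"): 1.25, ("social", "emotional"): 1.75}

# STS-like pattern: the social+emotional cell exceeds the sum -> interactive.
sts = {("nonsocial", "neutral"): 1.0, ("nonsocial", "emotional"): 1.25,
       ("social", "neutral"): 1.5, ("social", "emotional"): 2.5}

print(interaction(amygdala))  # 0.0  -> additive
print(interaction(sts))       # 0.75 -> super-additive interaction
```

In a full analysis this term would be tested for significance (e.g., the interaction F in a two-way ANOVA); the sketch only shows what "additive" versus "interactive" means at the level of cell means.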
Journal of Cognitive Neuroscience (2004) 16 (7): 1173–1184.
Published: 01 September 2004
Abstract
To recognize phonemes across variation in talkers, listeners can use information about vocal characteristics, a process referred to as “talker normalization.” The present study investigates the cortical mechanisms underlying talker normalization using fMRI. Listeners recognized target words presented either in a spoken list produced by a single talker or in a mix of different talkers. Both conditions activated an extensive cortical network. However, recognizing words in the mixed-talker condition, relative to the blocked-talker condition, activated middle/superior temporal and superior parietal regions to a greater degree. This temporal–parietal network is possibly associated with selectively attending to and processing the spectral and spatial acoustic cues required to recognize speech in a mixed-talker condition.
Journal of Cognitive Neuroscience (2000) 12 (4): 679–690.
Published: 01 July 2000
Abstract
Phonological processes map sound information onto higher levels of language processing and provide the mechanisms by which verbal information can be temporarily stored in working memory. Despite a strong convergence of data suggesting both left lateralization and distributed encoding in the anterior and posterior perisylvian language areas, the nature and brain encoding of phonological subprocesses remain ambiguous. The present study used functional magnetic resonance imaging (fMRI) to investigate the conditions under which anterior (lateral frontal) areas are activated during speech-discrimination tasks that differ in segmental processing demands. In two experiments, subjects performed “same/different” judgments on the first sound of pairs of words. In the first experiment, the speech stimuli did not require overt segmentation of the initial consonant from the rest of the word, since the “different” pairs varied only in the phonetic voicing of the initial consonant (e.g., dip-tip). In the second experiment, the speech stimuli required segmentation, since “different” pairs both varied in initial consonant voicing and contained different vowels and final consonants (e.g., dip-ten). These speech conditions were compared to a tone-discrimination control condition. Behavioral data showed that subjects were highly accurate in both experiments, but revealed different patterns of reaction-time latencies between the two experiments. The imaging data indicated that whereas both speech conditions showed superior temporal activation when compared to tone discrimination, only the second experiment showed consistent evidence of frontal activity. Taken together, the results of Experiments 1 and 2 suggest that phonological processing per se does not necessarily recruit frontal areas. We postulate that frontal activation is a product of segmentation processes in speech perception or, alternatively, of the working memory demands required for such processing.