Deborah A. Hall
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2008) 20 (6): 1063–1078.
Published: 01 June 2008
Abstract
Cognitive control over conflicting information has been studied extensively using tasks such as the color-word Stroop, flanker, and spatial conflict task. Neuroimaging studies typically identify a fronto-parietal network engaged in conflict processing, but numerous additional regions are also reported. Ascribing putative functional roles to these regions is problematic because some may have less to do with conflict processing per se and may instead reflect specific processes related to the chosen stimulus modality, stimulus feature, or type of conflict task. In addition, some studies contrast activation on incongruent and congruent trials, even though a neutral baseline is needed to separate the effect of inhibition from that of facilitation. In the first part of this article, we report a systematic review of 34 neuroimaging publications, which reveals that conflict-related activity is reliably reported in the anterior cingulate cortex and bilaterally in the lateral prefrontal cortex, the anterior insula, and the parietal lobe. In the second part, we further explore these candidate “conflict” regions through a novel functional magnetic resonance imaging experiment, in which the same group of subjects performs related visual and auditory Stroop tasks. By carefully controlling for the same task (Stroop) and the same to-be-ignored stimulus dimension (word meaning), and by separating out inhibitory processes from those of facilitation, we attempt to minimize the potential differences between the two tasks. The results provide converging evidence that the regions identified by the systematic review are reliably engaged in conflict processing. Despite carefully matching the Stroop tasks, some regions of differential activity remained, particularly in the parietal cortex. We discuss some of the task-specific processes which might account for this finding.
Journal of Cognitive Neuroscience (2005) 17 (6): 939–953.
Published: 01 June 2005
Abstract
Listeners are able to extract important linguistic information by viewing the talker's face—a process known as “speechreading.” Previous studies of speechreading present small closed sets of simple words and their results indicate that visual speech processing engages a wide network of brain regions in the temporal, frontal, and parietal lobes that are likely to underlie multiple stages of the receptive language system. The present study further explored this network in a large group of subjects by presenting naturally spoken sentences which tap the richer complexities of visual speech processing. Four different baselines (blank screen, static face, nonlinguistic facial gurning, and auditory speech) enabled us to determine the hierarchy of neural processing involved in speechreading and to test the claim that visual input reliably accesses sound-based representations in the auditory cortex. In contrast to passively viewing a blank screen, the static-face condition evoked activation bilaterally across the border of the fusiform gyrus and cerebellum, and in the medial superior frontal gyrus and left precentral gyrus (p < .05, whole brain corrected). With the static face as baseline, the gurning face evoked bilateral activation in the motion-sensitive region of the occipital cortex, whereas visual speech additionally engaged the middle temporal gyrus, inferior and middle frontal gyri, and the inferior parietal lobe, particularly in the left hemisphere. These latter regions are implicated in lexical stages of spoken language processing. Although auditory speech generated extensive bilateral activation across both superior and middle temporal gyri, the group-averaged pattern of speechreading activation failed to include any auditory regions along the superior temporal gyrus, suggesting that fluent visual speech does not always involve sound-based coding of the visual input. 
An important finding from the individual subject analyses was that activation in the superior temporal gyrus did reach significance (p < .001, small-volume corrected) for a subset of the group. Moreover, the extent of the left-sided superior temporal gyrus activity was strongly correlated with speechreading performance. Skilled speechreading was also associated with activations and deactivations in other brain regions, suggesting that individual differences reflect the efficiency of a circuit linking sensory, perceptual, memory, cognitive, and linguistic processes rather than the operation of a single component process.