Rhonda B. Friedman
1-3 of 3

Journal Articles
Joshua D. McCall, Andrew T. DeMarco, Ayan S. Mandal, Mackenzie E. Fama, Candace M. van der Stelt, et al.
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2023) 35 (7): 1169–1194.
Published: 01 July 2023
Abstract
Despite the many mistakes we make while speaking, people can effectively communicate because we monitor our speech errors. However, the cognitive abilities and brain structures that support speech error monitoring are unclear. There may be different abilities and brain regions that support monitoring phonological speech errors versus monitoring semantic speech errors. We investigated speech, language, and cognitive control abilities that relate to detecting phonological and semantic speech errors in 41 individuals with aphasia who underwent detailed cognitive testing. Then, we used support vector regression lesion symptom mapping to identify brain regions supporting detection of phonological versus semantic errors in a group of 76 individuals with aphasia. The results revealed that motor speech deficits as well as lesions to the ventral motor cortex were related to reduced detection of phonological errors relative to semantic errors. Detection of semantic errors selectively related to auditory word comprehension deficits. Across all error types, poor cognitive control related to reduced detection. We conclude that monitoring of phonological and semantic errors relies on distinct cognitive abilities and brain regions. Furthermore, we identified cognitive control as a shared cognitive basis for monitoring all types of speech errors. These findings refine and expand our understanding of the neurocognitive basis of speech error monitoring.
Journal of Cognitive Neuroscience (2000) 12 (2): 281–297.
Published: 01 March 2000
Abstract
Brain activation studies of orthographic stimuli typically start with the premise that different types of orthographic strings (e.g., words, pseudowords) differ from each other in discrete ways, which should be reflected in separate and distinct areas of brain activation. The present study starts from a different premise: Words, pseudowords, letterstrings, and false fonts vary systematically across a continuous dimension of familiarity to English readers. Using a one-back matching task to force encoding of the stimuli, the four types of stimuli were visually presented to healthy adult subjects while fMRI activations were obtained. Data analysis focused on parametric comparisons of fMRI activation sites. We did not find any region that was exclusively activated for real words. Rather, differences among these string types were mainly expressed as graded changes in the balance of activations among the regions. Our results suggest that there is a widespread network of brain regions that form a common network for the processing of all orthographic string types.
Journal of Cognitive Neuroscience (1994) 6 (1): 26–45.
Published: 01 January 1994
Abstract
There are now numerous observations of subtle right hemisphere (RH) contributions to language comprehension. It has been suggested that these contributions reflect coarse semantic coding in the RH. That is, the RH weakly activates large semantic fields—including concepts distantly related to the input word—whereas the left hemisphere (LH) strongly activates small semantic fields—limited to concepts closely related to the input (Beeman, 1993a,b). This makes the RH less effective at interpreting single words, but more sensitive to semantic overlap of multiple words. To test this theory, subjects read target words preceded by either “Summation” primes (three words each weakly related to the target) or Unrelated primes (three unrelated words), and target exposure duration was manipulated so that subjects correctly named about half the target words in each hemifield. In Experiment 1, subjects benefited more from Summation primes when naming target words presented to the left visual field-RH (lvf-RH) than when naming target words presented to the right visual field-LH (rvf-LH), suggesting a RH advantage in coarse semantic coding. In Experiment 2, with a low proportion of related prime-target trials, subjects benefited more from “Direct” primes (one strong associate flanked by two unrelated words) than from Summation primes for rvf-LH target words, indicating that the LH activates closely related information much more strongly than distantly related information. Subjects benefited equally from both prime types for lvf-RH target words, indicating that the RH activates closely related information only slightly more strongly, at best, than distantly related information. This suggests that the RH processes words with relatively coarser coding than the LH, a conclusion consistent with a recent suggestion that the RH coarsely codes visual input (Kosslyn, Chabris, Marsolek, & Koenig, 1992).