Stephen McCullough
Journal of Cognitive Neuroscience (2013) 25 (4): 517–533.
Published: 01 April 2013
Abstract
Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.
Journal of Cognitive Neuroscience (2010) 22 (11): 2480–2490.
Published: 01 November 2010
Abstract
Can linguistic semantics affect neural processing in feature-specific visual regions? Specifically, when we hear a sentence describing a situation that includes motion, do we engage neural processes that are part of the visual perception of motion? What if the motion verb is used figuratively rather than literally? We used fMRI to investigate whether semantic content can “penetrate” and modulate neural populations that are selective to specific visual properties during natural language comprehension. Participants were presented audiovisually with three kinds of sentences: motion sentences (“The wild horse crossed the barren field.”), static sentences (“The black horse stood in the barren field.”), and fictive motion sentences (“The hiking trail crossed the barren field.”). Motion-sensitive visual areas (MT+) and face-selective visual regions (the fusiform face area, FFA) were localized individually in each participant. MT+ was activated significantly more by motion sentences than by the other sentence types, and fictive motion sentences activated MT+ more than static sentences did. Importantly, no modulation of neural responses was found in the FFA. Our findings suggest that the neural substrates of linguistic semantics include early visual areas specifically related to the represented semantics and that figurative uses of motion verbs also engage these neural systems, but to a lesser extent. These data are consistent with a view of language comprehension as an embodied process, with neural substrates reaching as far as early sensory brain areas specifically tied to the represented meaning.