Jody C. Culham (1-3 of 3 journal articles)
Journal of Cognitive Neuroscience (2010) 22 (7): 1493–1503.
Published: 01 July 2010
Abstract
When exposed to novel dynamical conditions (e.g., externally imposed forces), neurologically intact subjects easily adjust motor commands on the basis of their own reaching errors. Subjects can also benefit from visual observation of others' kinematic errors. Here, using fMRI, we scanned subjects watching movies depicting another person learning to reach in a novel dynamic environment created by a robotic device. Passive observation of reaching movements (whether or not they were perturbed by the robot) was associated with increased activation in fronto-parietal regions that are normally recruited in active reaching. We found significant clusters in parieto-occipital cortex and the intraparietal sulcus, as well as in dorsal premotor cortex. Moreover, it appeared that part of the network that has been shown to be engaged in processing self-generated reach error is also involved in observing reach errors committed by others. Specifically, activity in left intraparietal sulcus and left dorsal premotor cortex, as well as in right cerebellar cortex, was modulated by the amplitude of observed kinematic errors.
Journal of Cognitive Neuroscience (2010) 22 (5): 970–984.
Published: 01 May 2010
Abstract
In one popular account of the human visual system, two streams are distinguished, a ventral stream specialized for perception and a dorsal stream specialized for action. The skillful use of familiar tools, however, is likely to involve the cooperation of both streams. Using functional magnetic resonance imaging, we scanned individuals while they viewed short movies of familiar tools being grasped in ways that were either consistent or inconsistent with how tools are typically grasped during use. Typical-for-use actions were predicted to preferentially activate parietal areas important for tool use. Instead, our results revealed several areas within the ventral stream, as well as the left posterior middle temporal gyrus, as preferentially active for our typical-for-use actions. We believe these findings reflect sensitivity to learned semantic associations and suggest a special role for these areas in representing object-specific actions. We hypothesize that during actual tool use a complex interplay between the two streams must take place, with ventral stream areas providing critical input as to how an object should be engaged in accordance with stored semantic knowledge.
Journal of Cognitive Neuroscience (2004) 16 (6): 955–965.
Published: 01 July 2004
Abstract
A common notion is that object perception is a necessary precursor to scene perception. Behavioral evidence suggests, however, that scene perception can operate independently of object perception. Further, neuroimaging has revealed a specialized human cortical area for viewing scenes that is anatomically distinct from areas activated by viewing objects. Here we show that an individual with visual form agnosia, D.F., who has a profound deficit in object recognition but spared color and visual texture perception, could still classify scenes and that she was fastest when the scenes were presented in the appropriate color. When scenes were presented as black-and-white images, she made a large number of errors in classification. Functional magnetic resonance imaging revealed selective activation in the parahippocampal place area (PPA) when D.F. viewed scenes. Unlike control observers, D.F. demonstrated higher activation in the PPA for scenes presented in the appropriate color than for black-and-white versions. The results demonstrate that an individual with profound form vision deficits can still use visual texture and color to classify scenes—and that this intact ability is reflected in differential activation of the PPA with colored versions of scenes.