Search results for Douglas Greve (1–2 of 2)
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2003) 15 (2): 272–293.
Published: 15 February 2003
Abstract
The aim of this study was to gain further insights into how the brain distinguishes between meaning and syntax during language comprehension. Participants read and made plausibility judgments on sentences that were plausible, morphosyntactically anomalous, or pragmatically anomalous. In an event-related potential (ERP) experiment, morphosyntactic and pragmatic violations elicited significant P600 and N400 effects, respectively, replicating previous ERP studies that have established qualitative differences in processing conceptual and syntactic anomalies. Our main focus was a functional magnetic resonance imaging (fMRI) study in which the same subjects read the same sentences presented in the same pseudorandomized sequence while performing the same task as in the ERP experiment. Rapid-presentation event-related fMRI methods allowed us to estimate the hemodynamic response at successive temporal windows as the sentences unfolded word by word, without assumptions about the shape of the underlying response function. Relative to nonviolated sentences, the pragmatic anomalies were associated with an increased hemodynamic response in left temporal and inferior frontal regions and a decreased response in the right medial parietal cortex. Relative to nonviolated sentences, the morphosyntactic anomalies were associated with an increased response in bilateral medial and lateral parietal regions and a decreased response in left temporal and inferior frontal regions. Thus, overlapping neural networks were modulated in opposite directions by the two types of anomaly. These fMRI findings document both qualitative and quantitative differences in how the brain distinguishes between these two types of anomalies. This suggests that morphosyntactic and pragmatic information can be processed in different ways but by the same neural systems.
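Estimating the hemodynamic response "at successive temporal windows … without assumptions about the shape of the underlying response function" corresponds to a finite-impulse-response (FIR) regression model. The sketch below is a minimal illustration of that general technique, not the study's actual analysis pipeline; the function and variable names are invented for this example.

```python
import numpy as np

def fir_estimate(bold, onsets, n_lags):
    """Estimate the response amplitude at each of `n_lags` post-stimulus
    time points (TRs) by least squares, with no assumed response shape.

    bold   : 1-D array, one value per scan (TR)
    onsets : stimulus onset indices, in TR units
    """
    n = len(bold)
    # One column per post-stimulus lag: a 1 wherever that lag follows an onset.
    X = np.zeros((n, n_lags))
    for t in onsets:
        for lag in range(n_lags):
            if t + lag < n:
                X[t + lag, lag] = 1.0
    # Least-squares fit: one amplitude estimate per temporal window.
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return beta
```

With rapid, jittered presentation, responses to successive events overlap in the measured signal, but the linear model still separates them, which is what makes shape-free estimation possible in such designs.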
Journal of Cognitive Neuroscience (1994) 6 (4): 341–358.
Published: 01 July 1994
Abstract
A neural model is described of how the brain may autonomously learn a body-centered representation of a three-dimensional (3-D) target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation—otherwise known as a parcellated distributed representation—of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of nonfoveated target position to learn a visuomotor representation of both foveated and nonfoveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors, which act as error signals that drive the learning process as well as control the on-line merging of multimodal information.
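The opponent-processing step described above—sums of the two eyes' outflow signals defining an angular coordinate and differences defining vergence—can be sketched in a few lines. This is a simplified illustration under the assumption of symmetric horizontal gaze angles measured from straight ahead; the function name and sign conventions are my own, not the model's.

```python
def vergence_spherical(theta_left, theta_right, phi):
    """Combine the two eyes' horizontal gaze angles (radians) and a
    shared vertical angle into a head-centered vergence-spherical code.

    Difference of the opponent signals -> vergence (near/far depth cue);
    sum (halved here to give the mean) -> horizontal angle with respect
    to a cyclopean egocenter midway between the eyes.
    """
    vergence = theta_left - theta_right          # difference channel
    horizontal = 0.5 * (theta_left + theta_right)  # sum channel
    return vergence, horizontal, phi
```

For a foveated target straight ahead, the two eyes converge symmetrically (theta_left = -theta_right), so the sum channel reads zero horizontal eccentricity while the difference channel grows as the target nears—the separation of direction from depth that the vergence-spherical representation provides.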