Martin I. Sereno
Journal of Cognitive Neuroscience (1996) 8 (2): 89–106.
Published: 01 March 1996
Abstract
Event-related brain potentials (ERPs) from 26 scalp sites were used to investigate whether, and if so to what extent, the brain processes subserving the understanding of imageable written words and line drawings are identical. Sentences were presented one word at a time to 28 undergraduates for comprehension. Each sentence ended with either a written word (regular sentences) or a line drawing (rebus sentences) that rendered it semantically congruous or semantically incongruous. For half of the subjects regular and rebus sentences were randomly intermixed, whereas for the remaining half they were presented in separate blocks (affording within-subject comparisons in both cases). In both presentation formats, words and line drawings generated greater negativity between 325 and 475 msec post-stimulus in ERPs to incongruous relative to congruous sentence endings (i.e., an N400-like effect). While the time course of this negativity was remarkably similar for words and pictures, there were notable differences in their scalp distributions; specifically, the classic N400 effect for words was larger posteriorly than it was for pictures. The congruity effect for pictures, but not for words, was also associated with a longer-duration (lower-frequency) negativity over frontal sites. In addition, under the mixed presentation mode, the N400 effect peaked about 30 msec earlier for pictures than for words. All in all, the data suggest that written words and pictures, when they terminate sentences, are processed similarly, but by at least partially nonoverlapping brain areas.
Journal of Cognitive Neuroscience (1993) 5 (2): 162–176.
Published: 01 April 1993
Abstract
We describe a comprehensive linear approach to the problem of imaging brain activity with high temporal as well as spatial resolution, based on combining EEG and MEG data with anatomical constraints derived from MRI. The "inverse problem" of estimating the distribution of dipole strengths over the cortical surface is highly underdetermined, even given closely spaced EEG and MEG recordings. We have obtained much better solutions to this problem by explicitly incorporating both local cortical orientation and the spatial covariance of sources and sensors into our formulation. An explicit polygonal model of the cortical manifold is first constructed as follows: (1) slice data in three orthogonal planes of section (needle-shaped voxels) are combined with a linear deblurring technique to make a single high-resolution 3-D image (cubic voxels), (2) the image is recursively flood-filled to determine the topology of the gray-white matter border, and (3) the resulting continuous surface is refined by relaxing it against the original 3-D gray-scale image using a deformable template method, which is also used to computationally flatten the cortex for easier viewing. The explicit solution to an error minimization formulation of an optimal inverse linear operator (for a particular cortical manifold, sensor placement, noise covariance, and prior source covariance) gives rise to a compact expression that is practically computable for hundreds of sensors and thousands of sources. The inverse solution can then be weighted for a particular (averaged) event using the sensor covariance for that event. Model studies suggest that this technique may allow us to localize multiple cortical sources with spatial resolution as good as that of PET, while retaining a much finer-grained picture of activity over time.
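The "compact expression" for the optimal inverse linear operator referred to in this abstract is, in the standard minimum-norm formulation, W = R Aᵀ (A R Aᵀ + C)⁻¹, where A is the gain (lead-field) matrix mapping cortical dipole strengths to sensor readings, R is the prior source covariance, and C is the sensor noise covariance. The sketch below illustrates that formulation in Python; the variable names, dimensions, and toy data are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a minimum-norm linear inverse operator,
# assuming W = R A^T (A R A^T + C)^{-1}. All names and sizes are illustrative.
import numpy as np

def linear_inverse_operator(A, R, C):
    """Return the inverse operator W with shape (n_sources, n_sensors).

    A : (n_sensors, n_sources) gain / lead-field matrix
    R : (n_sources, n_sources) prior source covariance
    C : (n_sensors, n_sensors) sensor noise covariance
    """
    ARA = A @ R @ A.T + C                     # (n_sensors, n_sensors), symmetric
    # solve(ARA, A @ R).T == R A^T (A R A^T + C)^{-1} because ARA and R are symmetric
    return np.linalg.solve(ARA, A @ R).T

# Illustrative use with random data: 100 sensors, 2000 candidate cortical sources.
rng = np.random.default_rng(0)
n_sensors, n_sources = 100, 2000
A = rng.standard_normal((n_sensors, n_sources))
R = np.eye(n_sources)                          # e.g., uniform prior source variance
C = 0.1 * np.eye(n_sensors)                    # e.g., white sensor noise
W = linear_inverse_operator(A, R, C)           # (n_sources, n_sensors)
measurement = rng.standard_normal(n_sensors)   # one (averaged) sensor vector
estimated_sources = W @ measurement            # estimated dipole strengths
```

Applying W to each time point of an averaged evoked response yields an estimated spatiotemporal pattern of dipole strengths over the cortical surface, which is what gives the approach its fine-grained temporal resolution.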