Alexander Grunewald
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (1998) 10 (2): 199–215.
Published: 01 March 1998
Abstract
This article develops a neural model of how sharp disparity tuning can arise through experience-dependent development of cortical complex cells. This learning process clarifies how complex cells can binocularly match left and right eye image features with the same contrast polarity, yet also pool signals with opposite contrast polarities. Antagonistic rebounds between LGN ON and OFF cells and cortical simple cells sensitive to opposite contrast polarities enable anticorrelated simple cells to learn to activate a shared set of complex cells. Feedback from binocularly tuned cortical cells to monocular LGN cells is proposed to carry out a matching process that dynamically stabilizes learning. This feedback represents a type of matching process that is elaborated at higher visual processing areas into a volitionally controllable type of attention. Learning is stable when both of these mechanisms, antagonistic rebounds and corticogeniculate feedback, are present. Learning adjusts the initially coarsely tuned disparity preference to match the disparities present in the environment, and the tuning width decreases to yield high disparity selectivity, which enables the model to quickly detect image disparities. Learning is impaired in the absence of either antagonistic rebounds or corticogeniculate feedback. The model also helps to explain psychophysical and neurobiological data about adult 3-D vision.
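The antagonistic rebound described in this abstract is commonly modeled with a gated dipole: two opponent channels (ON/OFF) whose signals are gated by slowly habituating transmitters, so that switching off a phasic input produces a transient rebound in the opposing channel. The sketch below is a minimal illustration under assumed, illustrative parameter values; it is not the paper's actual model.

```python
# Minimal gated-dipole sketch of an antagonistic rebound.
# Both channels receive a tonic arousal input I; a phasic input J drives
# only the ON channel. Each channel's signal is gated by a habituating
# transmitter z. When J switches off, the less-depleted OFF channel
# transiently dominates: the antagonistic rebound.

def simulate(steps=4000, dt=0.01, I=1.0, J=1.0, eps=0.05, gamma=0.5):
    z_on, z_off = 1.0, 1.0           # transmitter stores start full
    outputs = []
    for t in range(steps):
        phasic = J if t < steps // 2 else 0.0   # input on for first half
        s_on, s_off = I + phasic, I
        # habituative transmitter: slow recovery toward 1, activity-
        # dependent depletion proportional to the gated signal
        z_on += dt * (eps * (1.0 - z_on) - gamma * s_on * z_on)
        z_off += dt * (eps * (1.0 - z_off) - gamma * s_off * z_off)
        # gated channel outputs
        outputs.append((s_on * z_on, s_off * z_off))
    return outputs

out = simulate()
mid = len(out) // 2
# While the phasic input is on, the ON channel's output exceeds OFF's;
# shortly after input offset, the OFF channel transiently rebounds.
print(out[mid - 1][0] > out[mid - 1][1])
print(out[mid + 5][1] > out[mid + 5][0])
```

The rebound arises purely from the asymmetric transmitter depletion: after offset both channels receive the same arousal input, but the OFF channel's fuller transmitter store yields the larger gated output until the stores re-equilibrate.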
Journal of Cognitive Neuroscience (1997) 9 (1): 117–132.
Published: 01 January 1997
Abstract
How does the brain group together different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that is able to rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data. Model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. A similar model has previously been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. The model thereby shows how many data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts that belong together in visual object representations. The model exhibits better synchronization in the presence of noise than without it, a form of stochastic resonance, and synchronizes robustly when cells that represent different stimulus orientations compete. These properties arise when fast long-range cooperation and slow short-range competition interact via nonlinear feedback interactions with cells that obey shunting equations.
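The shunting equations mentioned in the final sentence are membrane-style equations of the form dx/dt = -A*x + (B - x)*E - (x + D)*I, whose key property is automatic gain control: activity x stays bounded in [-D, B] no matter how strong the excitatory input E or inhibitory input I becomes. The sketch below is a minimal single-cell illustration with assumed parameter values, not the paper's network.

```python
# Minimal shunting (membrane) equation for one cell:
#   dx/dt = -A*x + (B - x)*E - (x + D)*I
# The multiplicative (B - x) and (x + D) terms shut excitation off as x
# approaches the ceiling B and shut inhibition off as x approaches the
# floor -D, so activity is bounded for arbitrarily large inputs.

def shunt(E, I, A=1.0, B=1.0, D=0.5, dt=0.001, steps=20000):
    x = 0.0
    for _ in range(steps):
        x += dt * (-A * x + (B - x) * E - (x + D) * I)
    return x

# At equilibrium, x* = (B*E - D*I) / (A + E + I): strong inputs saturate
# toward the bounds instead of driving x without limit.
print(shunt(E=100.0, I=0.0))    # approaches the ceiling B = 1.0
print(shunt(E=0.0, I=100.0))    # approaches the floor -D = -0.5
```

This boundedness is what lets networks of such cells sustain the fast cooperative and slow competitive feedback described above without activities blowing up.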