Abstract
This article develops a neural model of how sharp disparity tuning can arise through experience-dependent development of cortical complex cells. This learning process clarifies how complex cells can binocularly match left-eye and right-eye image features with the same contrast polarity, yet also pool signals with opposite contrast polarities. Antagonistic rebounds between LGN ON and OFF cells, and between cortical simple cells sensitive to opposite contrast polarities, enable anticorrelated simple cells to learn to activate a shared set of complex cells. Feedback from binocularly tuned cortical cells to monocular LGN cells is proposed to carry out a matching process that dynamically stabilizes learning. This feedback instantiates a type of matching process that is elaborated at higher visual processing areas into a volitionally controllable form of attention. Stable learning is shown to occur when both antagonistic rebounds and corticogeniculate matching feedback are present. Learning adjusts the initially coarse disparity preferences to match the disparities present in the environment, and narrows the tuning width to yield high disparity selectivity, which enables the model to detect image disparities quickly. Learning is impaired in the absence of either antagonistic rebounds or corticogeniculate feedback. The model also helps to explain psychophysical and neurobiological data about adult 3-D vision.
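The abstract summarizes the model only at a conceptual level. As a rough illustration of the ingredients it names, the following is a minimal sketch, not the article's equations: it assumes a toy population of Gaussian disparity-tuned simple-cell channels, an instar-style weight update onto a single complex cell, a scaled copy of the input standing in for the antagonistic rebound in the opposite-polarity channel, and a cosine match gating the learning rate as a stand-in for corticogeniculate matching feedback. All names, parameters, and the learning rule itself are illustrative assumptions rather than the model's actual dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Preferred disparities of the simple-cell channels feeding the complex cell.
disparities = np.arange(-4.0, 5.0)

def simple_cells(stim_disp, width=1.0):
    """Gaussian population response of one contrast polarity's simple cells."""
    return np.exp(-0.5 * ((disparities - stim_disp) / width) ** 2)

def tuning_stats(w):
    """Weighted mean (preferred disparity) and width of the complex-cell tuning."""
    p = w / w.sum()
    mean = np.dot(p, disparities)
    width = np.sqrt(np.dot(p, (disparities - mean) ** 2))
    return mean, width

# Initially coarse, slightly noisy disparity tuning.
w = simple_cells(0.0, width=3.0) + 0.1 * rng.random(disparities.size)
print("before learning: preferred %.2f, width %.2f" % tuning_stats(w))

lr, vigilance = 0.05, 0.5
for trial in range(2000):
    stim = rng.choice([1.0, 1.5, 2.0])      # environment with a narrow disparity range
    x_on = simple_cells(stim)               # same-polarity simple-cell input
    # Antagonistic rebound: stimulus offset transiently drives the opposite-polarity
    # channel over the same disparity positions, so anticorrelated simple cells
    # end up training a shared set of complex-cell weights.
    x_off = 0.5 * x_on
    for x in (x_on, x_off):
        # Complex-cell activation as a normalized bottom-up / top-down match.
        y = np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x))
        # Stand-in for corticogeniculate matching feedback: poor matches barely
        # change the weights, which keeps learning from being destabilized.
        gate = 1.0 if y >= vigilance else 0.1
        w += gate * lr * y * (x - w)        # instar-style weight update

print("after learning:  preferred %.2f, width %.2f" % tuning_stats(w))
```

Running this sketch reproduces, in caricature, the two developmental signatures the abstract describes: the preferred disparity drifts toward the disparities actually presented, and the tuning width narrows from its initially coarse value.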