Abstract
Recurrent interactions in the primary visual cortex make its output a complex nonlinear transform of its input. This transform serves preattentive visual segmentation: it autonomously processes visual inputs to produce outputs that selectively emphasize the features relevant for segmentation. An analytical understanding of the nonlinear dynamics of the recurrent neural circuit is essential to harness its computational power. We derive requirements on the neural architecture, components, and connection weights of a biologically plausible model of the cortex such that region segmentation, figure-ground segregation, and contour enhancement can be achieved simultaneously. In addition, we analyze the conditions governing neural oscillations, illusory contours, and the absence of visual hallucinations. Many of our analytical techniques can be applied to other recurrent networks with translation-invariant neural and connection structures.