Bertram E. Shi
Journal Articles
Publisher: Journals Gateway
Neural Computation (2015) 27 (7): 1496–1529.
Published: 01 July 2015
Abstract
Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invariant feature detectors: temporal slowness and sparsity. This model, the generative adaptive subspace self-organizing map (GASSOM), extends Kohonen’s adaptive subspace self-organizing map (ASSOM) with a generative model of the input. Each observation is assumed to be generated by one among many nodes in the network, each being associated with a different subspace in the space of all observations. The generating nodes evolve according to a first-order Markov chain and generate inputs that lie close to the associated subspace. This model differs from prior approaches in that temporal slowness is not an externally imposed criterion to be maximized during learning but, rather, an emergent property of the model structure as it seeks a good model of the input statistics. Unlike the ASSOM, the GASSOM does not require an explicit segmentation of the input training vectors into separate episodes. This enables us to apply this model to an unlabeled naturalistic image sequence generated by a realistic eye movement model. We show that the emergence of temporal slowness within the model improves the invariance of feature detectors trained on this input.
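The generative structure described above — nodes with associated subspaces, a sticky first-order Markov chain over generating nodes, and observations lying close to the generating node's subspace — can be sketched as a toy filtering loop. This is an illustrative sketch only, not the authors' GASSOM implementation; the dimensions, Gaussian likelihood, and transition probabilities below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, k = 4, 8, 2

# One orthonormal basis (dim x k) per node, spanning that node's subspace.
bases = np.stack([np.linalg.qr(rng.standard_normal((dim, k)))[0]
                  for _ in range(n_nodes)])

# Sticky transition matrix over generating nodes: self-transitions dominate,
# so the inferred generating node tends to change slowly over time.
stay = 0.9
trans = np.full((n_nodes, n_nodes), (1.0 - stay) / (n_nodes - 1))
np.fill_diagonal(trans, stay)

def subspace_energy(x):
    """Squared norm of the projection of x onto each node's subspace."""
    coeffs = np.einsum('ndk,d->nk', bases, x)   # (n_nodes, k) projection coeffs
    return (coeffs ** 2).sum(axis=1)

def filter_step(prev_post, x, sigma=0.5):
    """One forward (filtering) step of the Markov chain over generating nodes.

    An input close to a node's subspace has a small residual, hence a high
    likelihood under that node; exp(energy) implements this up to a shared
    constant because ||x||^2 is common to all nodes.
    """
    e = subspace_energy(x)
    like = np.exp((e - e.max()) / (2 * sigma ** 2))
    post = like * (trans.T @ prev_post)         # predict, then weight by likelihood
    return post / post.sum()

post = np.full(n_nodes, 1.0 / n_nodes)
for _ in range(5):
    post = filter_step(post, rng.standard_normal(dim))
```

The self-transition bias (`stay = 0.9`) is what makes slowness emergent in this sketch: the posterior over generating nodes resists switching, without any explicit slowness objective being maximized.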
Neural Computation (2010) 22 (3): 730–751.
Published: 01 March 2010
Abstract
We present a simple optimization criterion that leads to autonomous development of a sensorimotor feedback loop driven by the neural representation of the depth in the mammalian visual cortex. Our test bed is an active stereo vision system where the vergence angle between the two eyes is controlled by the output of a population of disparity-selective neurons. By finding a policy that maximizes the total response across the neuron population, the system eventually tracks a target as it moves in depth. We characterized the tracking performance of the resulting policy using objects moving both sinusoidally and randomly in depth. Surprisingly, the system can even learn how to track based on stimuli it cannot track: even though the closed-loop 3 dB tracking bandwidth of the system is 0.3 Hz, correct tracking policies are learned for input stimuli moving as fast as 0.75 Hz.
Includes: Supplementary data
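The control principle in this abstract — choose the vergence command that maximizes the summed response of a disparity-selective population — can be illustrated with a toy greedy controller. The Gaussian tuning curves, arbitrary units, and fixed step size below are illustrative assumptions, not the paper's learned policy or neuron model.

```python
import numpy as np

def population_response(vergence, target_depth, prefs, sigma=0.5):
    """Toy disparity population: Gaussian tuning around each preferred
    disparity. The stimulus disparity is the error between the target
    depth and the current vergence angle (arbitrary units)."""
    disparity = target_depth - vergence
    return np.exp(-(disparity - prefs) ** 2 / (2 * sigma ** 2))

def greedy_vergence_step(vergence, target_depth, prefs, step=0.1):
    """Pick the vergence change that maximizes total population response."""
    candidates = [vergence - step, vergence, vergence + step]
    totals = [population_response(v, target_depth, prefs).sum()
              for v in candidates]
    return candidates[int(np.argmax(totals))]

prefs = np.linspace(-1.0, 1.0, 11)   # preferred disparities of the population
vergence, target = 0.0, 2.0
for _ in range(50):
    vergence = greedy_vergence_step(vergence, target, prefs)
```

Because the summed response falls off monotonically as the residual disparity grows, climbing the total population response pulls the vergence angle toward the target depth, which is the closed-loop behavior the abstract describes.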
Neural Computation (2008) 20 (10): 2464–2490.
Published: 01 October 2008
Abstract
Binocular fusion takes place over a limited region smaller than one degree of visual angle (Panum's fusional area), which is on the order of the range of preferred disparities measured in populations of disparity-tuned neurons in the visual cortex. However, the actual range of binocular disparities encountered in natural scenes extends over tens of degrees. This discrepancy suggests that there must be a mechanism for detecting whether the stimulus disparity is inside or outside the range of the preferred disparities in the population. Here, we compare the efficacy of several features derived from the population responses of phase-tuned disparity energy neurons in differentiating between in-range and out-of-range disparities. Interestingly, some features that might be appealing at first glance, such as the average activation across the population and the difference between the peak and average responses, actually perform poorly. On the other hand, normalizing the difference between the peak and average responses results in a reliable indicator. Using a probabilistic model of the population responses, we improve classification accuracy by combining multiple features. A decision rule that combines the normalized peak-to-average difference and the peak location significantly improves performance over decision rules based on either measure in isolation. In addition, classifiers using the normalized difference are also robust to mismatch between the image statistics assumed by the model and the actual image statistics.
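The normalized peak-to-average feature can be sketched as follows. The Gaussian tuning curves and baseline activity here are illustrative assumptions, not the phase-tuned disparity energy responses analyzed in the paper; the sketch only shows why normalization separates the two cases.

```python
import numpy as np

prefs = np.linspace(-1.0, 1.0, 21)   # preferred disparities of the population

def population(disparity, baseline=0.2, sigma=0.2):
    """Toy tuning: Gaussian around each preferred disparity plus a baseline."""
    return baseline + np.exp(-(disparity - prefs) ** 2 / (2 * sigma ** 2))

def in_range_score(responses, eps=1e-12):
    """Normalized difference between peak and average population response.

    In-range disparities drive a sharp peak over the baseline, so the
    normalized difference is large; out-of-range stimuli drive the
    population nearly uniformly, so it is close to zero.
    """
    peak = responses.max()
    avg = responses.mean()
    return (peak - avg) / (peak + avg + eps)

in_range = in_range_score(population(0.0))        # inside the preferred range
out_of_range = in_range_score(population(5.0))    # far outside the range
```

Dividing by the overall activation level is what makes the feature insensitive to stimulus contrast and image statistics, which is consistent with the robustness the abstract reports for the normalized difference.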
Neural Computation (2004) 16 (8): 1579–1600.
Published: 01 August 2004
Abstract
The relative depth of objects causes small shifts in the left and right retinal positions of these objects, called binocular disparity. This letter describes an electronic implementation of a single binocularly tuned complex cell based on the binocular energy model, which has been proposed to model disparity-tuned complex cells in the mammalian primary visual cortex. Our system consists of two silicon retinas representing the left and right eyes, two silicon chips containing retinotopic arrays of spiking neurons with monocular Gabor-type spatial receptive fields, and logic circuits that combine the spike outputs to compute a disparity-selective complex cell response. The tuned disparity can be adjusted electronically by introducing either position or phase shifts between the monocular receptive field profiles. Mismatch between the monocular receptive field profiles caused by transistor mismatch can degrade the relative responses of neurons tuned to different disparities. In our system, the relative responses between neurons tuned by phase encoding are better matched than those between neurons tuned by position encoding. Our numerical sensitivity analysis indicates that the relative responses of phase-encoded neurons are least sensitive to the receptive field parameters that vary the most in our system. We conjecture that this robustness may be one reason for the existence of phase-encoded disparity-tuned neurons in biological neural systems.
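The binocular energy model underlying this hardware — monocular Gabor receptive fields combined into a quadrature pair of binocular simple cells, then squared and summed — can be sketched numerically. This is a 1D software sketch with assumed parameters, not the chip's circuitry; disparity tuning via the interocular phase shift `dphase` follows the phase-encoding scheme described above.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 129)   # 1D retinal coordinate grid

def gabor(phase, sigma=2.0, freq=0.25):
    """1D Gabor receptive field profile on the grid x."""
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * x + phase)

def complex_cell_response(left, right, dphase):
    """Binocular energy model: quadrature pair of binocular simple cells.

    dphase is the interocular phase shift between the monocular
    receptive field profiles; it sets the cell's preferred disparity.
    """
    energy = 0.0
    for base in (0.0, np.pi / 2):            # quadrature pair of phases
        l = left @ gabor(base)               # left monocular drive
        r = right @ gabor(base + dphase)     # right eye, phase-shifted RF
        energy += (l + r) ** 2               # binocular simple cell, squared
    return energy

rng = np.random.default_rng(1)
img = rng.standard_normal(x.size)            # random 1D stimulus
resp_matched = complex_cell_response(img, img, dphase=0.0)     # zero disparity
resp_opposite = complex_cell_response(img, img, dphase=np.pi)  # anti-phase RFs
```

For a zero-disparity stimulus (identical left and right images), the cell with zero phase shift responds strongly while the anti-phase cell is suppressed, which is the disparity selectivity the silicon system computes from its spike outputs.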