A. Norman Redlich
1–7 of 7 results
Journal Articles
Publisher: Journals Gateway
Neural Computation (1996) 8 (6): 1321–1340.
Published: 01 August 1996
Statistical Approach to Shape from Shading: Reconstruction of Three-Dimensional Face Surfaces from Single Two-Dimensional Images
Abstract
The human visual system is proficient in perceiving three-dimensional shape from the shading patterns in a two-dimensional image. How it does this is not well understood and continues to be a question of fundamental and practical interest. In this paper we present a new quantitative approach to shape-from-shading that may provide some answers. We suggest that the brain, through evolution or prior experience, has discovered that objects can be classified, according to their shape, into lower-dimensional object classes. Extraction of shape from shading is then equivalent to the much simpler problem of parameter estimation in a low-dimensional space. We carry out this proposal for an important class of three-dimensional (3D) objects: human heads. From an ensemble of several hundred laser-scanned 3D heads, we use principal component analysis to derive a low-dimensional parameterization of head shape space. An algorithm for solving shape-from-shading using this representation is presented. It works well even on real images where it is able to recover the 3D surface for a given person, maintaining facial detail and identity, from a single 2D image of his face. This algorithm has applications in face recognition and animation.
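As a rough illustration of the abstract's core idea, the sketch below builds a low-dimensional shape space by principal component analysis over an ensemble of surfaces, using randomly generated depth-map vectors as a stand-in for the laser-scanned heads (the data, dimensions, and mode count are all assumptions, not the paper's). Shape-from-shading would then amount to estimating the few PCA coefficients whose rendered shading best matches the input image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for laser-scanned heads: each "surface" is a depth
# map flattened to a vector.  Real data would be several hundred 3D scans.
n_heads, n_pixels, n_modes = 200, 64, 5
latent = rng.normal(size=(n_heads, n_modes))
basis_true = rng.normal(size=(n_modes, n_pixels))
surfaces = latent @ basis_true + 0.01 * rng.normal(size=(n_heads, n_pixels))

# Principal component analysis of head-shape space via SVD.
mean_head = surfaces.mean(axis=0)
U, s, Vt = np.linalg.svd(surfaces - mean_head, full_matrices=False)
components = Vt[:n_modes]            # low-dimensional shape basis

# Shape-from-shading then reduces to estimating the few coefficients c such
# that the rendered shading of (mean_head + c @ components) matches the input
# image; here we only verify that the basis reconstructs a held surface.
c = (surfaces[0] - mean_head) @ components.T
reconstruction = mean_head + c @ components
err = np.linalg.norm(surfaces[0] - reconstruction) / np.linalg.norm(surfaces[0])
```

The point of the exercise is dimensionality: a 64-pixel surface is summarized by 5 coefficients, so the ill-posed inverse problem becomes a 5-parameter fit.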
Neural Computation (1993) 5 (5): 750–766.
Published: 01 September 1993
Supervised Factorial Learning
Abstract
Factorial learning, finding a statistically independent representation of a sensory “image”—a factorial code—is applied here to solve multilayer supervised learning problems that have traditionally required backpropagation. This lends support to Barlow's argument for factorial sensory processing, by demonstrating how it can solve actual pattern recognition problems. Two techniques for supervised factorial learning are explored, one of which gives a novel distributed solution requiring only positive examples. Also, a new nonlinear technique for factorial learning is introduced that uses neural networks based on almost reversible cellular automata. Due to the special functional connectivity of these networks—which resemble some biological microcircuits—learning requires only simple local algorithms. Finally, supervised factorial learning is shown to be a viable alternative to backpropagation. One significant advantage is the existence of a measure for the performance of intermediate learning stages.
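To make the notion of a factorial code concrete, here is a minimal toy example (my construction, not the paper's networks): two redundant input bits are recoded so that the outputs carry the same information but are statistically independent, which we check by estimating mutual information before and after.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def mutual_info_bits(a, b):
    """Empirical mutual information (in bits) between two binary sequences."""
    n = len(a)
    joint = Counter(zip(a, b))
    pa, pb = Counter(a), Counter(b)
    mi = 0.0
    for (u, v), c in joint.items():
        p = c / n
        mi += p * np.log2(p / ((pa[u] / n) * (pb[v] / n)))
    return mi

# Two redundant input bits: x2 copies x1 except for independent 10% flips.
x1 = rng.integers(0, 2, 50000)
flips = (rng.random(50000) < 0.1).astype(int)
x2 = x1 ^ flips

# A factorial recoding for this toy source: keep x1 and transmit only the
# "new" information x1 XOR x2 (= the flip bit, independent of x1).
y1, y2 = x1, x1 ^ x2

before = mutual_info_bits(x1, x2)   # large: the inputs are redundant
after = mutual_info_bits(y1, y2)    # near zero: the code is factorial
```

The recoding is invertible (x2 = y1 XOR y2), so no information is lost; only the redundancy between channels is removed.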
Neural Computation (1993) 5 (2): 289–304.
Published: 01 March 1993
Redundancy Reduction as a Strategy for Unsupervised Learning
Abstract
A redundancy reduction strategy, which can be applied in stages, is proposed as a way to learn as efficiently as possible the statistical properties of an ensemble of sensory messages. The method works best for inputs consisting of strongly correlated groups, that is, features, with weaker statistical dependence between different features. This is the case for localized objects in an image or for words in a text. A local feature measure determining how much a single feature reduces the total redundancy is derived, which turns out to depend only on the probability of the feature and of its components, but not on the statistical properties of any other features. The locality of this measure makes it ideal as the basis for a “neural” implementation of redundancy reduction, and an example of a very simple non-Hebbian algorithm is given.
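As a simple illustration of a *local* feature measure in the words-in-text setting, the sketch below scores a candidate two-character feature by a probability-weighted pointwise mutual information. Like the measure described in the abstract, it depends only on the probability of the feature and of its components; the exact formula here is a stand-in of my own, not necessarily the paper's.

```python
import math
from collections import Counter

def feature_score(text, pair):
    """Illustrative local redundancy score for a two-character feature:
    P(ab) * log2( P(ab) / (P(a) P(b)) ).  It uses only the feature's own
    probability and its components' probabilities, so it can be computed
    locally, without reference to any other feature."""
    chars = Counter(text)
    pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
    n, m = len(text), len(text) - 1
    p_ab = pairs[pair] / m
    p_a, p_b = chars[pair[0]] / n, chars[pair[1]] / n
    if p_ab == 0:
        return 0.0
    return p_ab * math.log2(p_ab / (p_a * p_b))

text = "the theory of the retina and the cortex " * 50
# A strongly correlated group like "th" should outscore a weak pair like "eo".
```

Because the score needs no statistics of other features, a network unit could in principle evaluate its own candidate feature from locally available counts, which is the property the abstract emphasizes.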
Neural Computation (1993) 5 (1): 45–60.
Published: 01 January 1993
Convergent Algorithm for Sensory Receptive Field Development
Abstract
An unsupervised developmental algorithm for linear maps is derived which reduces the pixel-entropy (using the measure introduced in previous work) at every update and thus removes pairwise correlations between pixels. Since the measure of pixel-entropy has only a global minimum, the algorithm is guaranteed to converge to the minimum entropy map. Such optimal maps have recently been shown to possess cognitively desirable properties and are likely to be used by the nervous system to organize sensory information. The algorithm derived here turns out to be one proposed by Goodall for pairwise decorrelation. It is biologically plausible since in a neural network implementation it requires only data available locally to a neuron. In training over ensembles of two-dimensional input signals with the same spatial power spectrum as natural scenes, networks develop output neurons with center-surround receptive fields similar to those of ganglion cells in the retina. Some technical issues pertinent to developmental algorithms of this sort, such as “symmetry fixing,” are also discussed.
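A pairwise-decorrelating update of the kind the abstract describes can be sketched in a few lines. The rule below, dW ∝ (I − ⟨yyᵀ⟩)W with y = Wx, drives the output covariance to the identity; whether it matches Goodall's rule exactly is my assumption, but it demonstrates the convergence-to-decorrelation behavior on correlated inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated "pixel" inputs with a known covariance A A^T.
A = np.array([[1.0, 0.8], [0.0, 0.6]])
X = rng.normal(size=(20000, 2)) @ A.T

# Decorrelating update (sketch): with y = W x, the step
#   W <- W + eta * (I - <y y^T>) * W
# has its fixed point where the output covariance is the identity,
# i.e. where all pairwise correlations between outputs vanish.
W = np.eye(2)
eta = 0.05
for _ in range(500):
    Y = X @ W.T
    C = (Y.T @ Y) / len(Y)          # output covariance <y y^T>
    W += eta * (np.eye(2) - C) @ W

C_final = W @ (X.T @ X / len(X)) @ W.T   # should be close to the identity
```

Note the update uses only the output covariance, i.e. quantities a unit could estimate from its own activity and that of its neighbors, in the spirit of the locality claim in the abstract.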
Neural Computation (1992) 4 (4): 559–572.
Published: 01 July 1992
Understanding Retinal Color Coding from First Principles
Abstract
A previously proposed theory of visual processing, based on redundancy reduction, is used to derive the retinal transfer function including color. The predicted kernels show the nontrivial mixing of space-time with color coding observed in experiments. The differences in color-coding between species are found to be due to differences among the chromatic autocorrelators for natural scenes in different environments.
Neural Computation (1992) 4 (2): 196–210.
Published: 01 March 1992
What Does the Retina Know about Natural Scenes?
Abstract
By examining the experimental data on the statistical properties of natural scenes together with (retinal) contrast sensitivity data, we arrive at a first-principles theoretical hypothesis for the purpose of retinal processing and its relationship to an animal's environment. We argue that the retinal goal is to transform the visual input as much as possible into a statistically independent basis as the first step in creating a redundancy-reduced representation in the cortex, as suggested by Barlow. The extent of this whitening of the input is limited, however, by the need to suppress input noise. Our explicit theoretical solutions for the retinal filters also show a simple dependence on mean stimulus luminance: they predict an approximate Weber law at low spatial frequencies and a De Vries-Rose law at high frequencies. Assuming that the dominant source of noise is quantum, we generate a family of contrast sensitivity curves as a function of mean luminance. This family is compared to psychophysical data.
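The qualitative shape of such a noise-limited whitening filter is easy to reproduce numerically. The sketch below combines a whitening factor with a Wiener-style noise-suppression factor over a 1/f² natural-scene spectrum; the particular functional form and the noise level are assumptions for illustration, not the paper's exact solution, but the result is the characteristic bandpass profile.

```python
import numpy as np

# Natural-scene power spectra fall roughly as 1/f^2 (scale invariance),
# and the input carries flat noise of power N2 (assumed value).
f = np.linspace(0.1, 10.0, 200)      # spatial frequency, arbitrary units
S = 1.0 / f**2                       # signal power spectrum
N2 = 0.05                            # noise power

# Whitening tempered by noise suppression (one common reading):
# a low-pass Wiener factor S/(S+N2) times a whitening factor 1/sqrt(S+N2).
K = (S / (S + N2)) / np.sqrt(S + N2)

peak = f[np.argmax(K)]
# K rises at low frequencies (whitening dominates, signal >> noise) and
# falls at high frequencies (noise suppression dominates), so the filter
# is bandpass, like measured retinal contrast sensitivity.
```

Shifting N2 with mean luminance moves the peak and the high-frequency cutoff, which is the mechanism behind the luminance-dependent family of contrast sensitivity curves the abstract mentions.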
Neural Computation (1990) 2 (3): 308–320.
Published: 01 September 1990
Towards a Theory of Early Visual Processing
Abstract
We propose a theory of the early processing in the mammalian visual pathway. The theory is formulated in the language of information theory and hypothesizes that the goal of this processing is to recode in order to reduce a “generalized redundancy” subject to a constraint that specifies the amount of average information preserved. In the limit of no noise, this theory becomes equivalent to Barlow's redundancy reduction hypothesis, but it leads to very different computational strategies when noise is present. A tractable approach for finding the optimal encoding is to solve the problem in successive stages where at each stage the optimization is performed within a restricted class of transfer functions. We explicitly find the solution for the class of encodings to which the parvocellular retinal processing belongs, namely linear and nondivergent transformations. The solution shows agreement with the experimentally observed transfer functions at all levels of signal to noise.