James V. Stone
Neural Computation (2001) 13 (7): 1559–1574.
Published: 01 July 2001
Abstract
A measure of temporal predictability is defined and used to separate linear mixtures of signals. Given any set of statistically independent source signals, it is conjectured here that a linear mixture of those signals has the following property: the temporal predictability of any signal mixture is less than (or equal to) that of any of its component source signals. It is shown that this property can be used to recover source signals from a set of linear mixtures of those signals by finding an unmixing matrix that maximizes a measure of temporal predictability for each recovered signal. This matrix is obtained as the solution to a generalized eigenvalue problem; such problems have scaling characteristics of O(N³), where N is the number of signal mixtures. In contrast to independent component analysis, the temporal predictability method requires minimal assumptions regarding the probability density functions of the source signals. It is demonstrated that the method can separate signal mixtures in which each mixture is a linear combination of source signals with super-gaussian, sub-gaussian, and gaussian probability density functions, as well as mixtures of voices and music.
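The generalized eigenvalue formulation lends itself to a compact implementation. The following is a minimal sketch of the idea, assuming exponential moving averages with short and long half-lives as the temporal predictors; the function names, half-life values, and toy sources are illustrative assumptions, not the paper's reference implementation.

import numpy as np
from scipy.linalg import eigh

def ema(x, half_life):
    # Exponential moving average of each row of x with the given half-life.
    lam = 2.0 ** (-1.0 / half_life)
    out = np.empty_like(x)
    acc = x[:, 0].copy()
    for t in range(x.shape[1]):
        acc = lam * acc + (1.0 - lam) * x[:, t]
        out[:, t] = acc
    return out

def predictability_separation(X, short=1.0, long=900.0):
    # X holds one signal mixture per row. For y = w @ x, predictability is
    # the ratio of long-term to short-term prediction-error variance; the
    # optimal unmixing rows solve C_long w = lambda * C_short w.
    E_long = X - ema(X, long)     # slow (long-half-life) prediction errors
    E_short = X - ema(X, short)   # fast (short-half-life) prediction errors
    C_long = E_long @ E_long.T
    C_short = E_short @ E_short.T
    _, W = eigh(C_long, C_short)  # generalized eigenvalue problem, O(N^3)
    return W.T @ X                # recovered signals, one per row

# Toy demo: two sources with different temporal structure, mixed linearly.
t = np.arange(10000)
S = np.vstack([np.sin(2 * np.pi * t / 200.0),           # smooth source
               np.sign(np.sin(2 * np.pi * t / 37.0))])  # square-wave source
X = np.array([[1.0, 0.6], [0.5, 1.0]]) @ S              # linear mixtures
Y = predictability_separation(X)  # sources recovered up to scale and order

Because every recovered signal is an eigenvector of the same matrix pencil, all sources are extracted in a single O(N³) decomposition rather than one iterative optimization per signal.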
Neural Computation (1996) 8 (7): 1463–1492.
Published: 01 October 1996
Abstract
A model is presented for unsupervised learning of low-level vision tasks, such as the extraction of surface depth. A key assumption is that perceptually salient visual parameters (e.g., surface depth) vary smoothly over time. This assumption is used to derive a learning rule that maximizes the long-term variance of each unit's output, whilst simultaneously minimizing its short-term variance. The length of the half-life associated with each of these variances is not critical to the success of the algorithm. The learning rule involves a linear combination of anti-Hebbian and Hebbian weight changes, over short and long time scales, respectively. This maximizes the information throughput with respect to low-frequency parameters implicit in the input sequence. The model is used to learn stereo disparity from temporal sequences of random-dot and gray-level stereograms containing synthetically generated subpixel disparities. The presence of temporal discontinuities in disparity does not prevent learning or generalization to previously unseen image sequences. The implications of this class of unsupervised methods for learning in perceptual systems are discussed.
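A minimal online sketch of this rule for a single linear unit is given below: the unit ascends F = log(V/U), where V and U are long- and short-term output variances tracked with exponential averages, so the update combines a Hebbian term on the long time scale with an anti-Hebbian term on the short one. The half-lives, learning rate, and synthetic input stream (a slowly varying parameter embedded in fast noise) are illustrative assumptions, not the paper's stereo setup.

import numpy as np

rng = np.random.default_rng(0)
dim = 8
half_short, half_long = 4.0, 400.0
lam_s = 2.0 ** (-1.0 / half_short)
lam_l = 2.0 ** (-1.0 / half_long)

w = 0.1 * rng.normal(size=dim)
xbar_s = np.zeros(dim); xbar_l = np.zeros(dim)
ybar_s = ybar_l = 0.0
U = V = 1.0                  # running short- and long-term output variances
eta = 0.02
proj = rng.normal(size=dim)  # direction carrying the slow parameter

for t in range(20000):
    slow = np.sin(2 * np.pi * t / 500.0)          # smooth, salient parameter
    x = slow * proj + 0.5 * rng.normal(size=dim)  # input with fast noise
    y = w @ x
    # Track exponential averages over short and long half-lives.
    xbar_s = lam_s * xbar_s + (1 - lam_s) * x
    xbar_l = lam_l * xbar_l + (1 - lam_l) * x
    ybar_s = lam_s * ybar_s + (1 - lam_s) * y
    ybar_l = lam_l * ybar_l + (1 - lam_l) * y
    U = lam_s * U + (1 - lam_s) * (y - ybar_s) ** 2
    V = lam_l * V + (1 - lam_l) * (y - ybar_l) ** 2
    # Gradient of log(V/U): Hebbian over the long time scale,
    # anti-Hebbian over the short one.
    w += eta * ((y - ybar_l) * (x - xbar_l) / V
                - (y - ybar_s) * (x - xbar_s) / U)

The unit's output comes to track the slow parameter, since log(V/U) is largest for outputs that vary over long time scales yet remain predictable over short ones.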
Neural Computation (1996) 8 (4): 697–704.
Published: 01 May 1996
Abstract
We demonstrate that imperfect recall of a set of associations can usually be improved by training on a new, unrelated set of associations. This spontaneous recovery of associations is a consequence of the high dimensionality of weight spaces, and is therefore not peculiar to any single type of neural net. Accordingly, this work may have implications for spontaneous recovery of memory in the central nervous system.
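The effect can be illustrated with a linear associative net: learn one set of associations exactly, degrade the weights so recall becomes imperfect, then train on a second, unrelated set and re-measure recall of the first. The sketch below follows that recipe; the network sizes, noise level, and learning rate are arbitrary assumptions, not the paper's simulations.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 200, 100
X_A, Y_A = rng.normal(size=(n_in, 20)), rng.normal(size=(n_out, 20))
X_B, Y_B = rng.normal(size=(n_in, 100)), rng.normal(size=(n_out, 100))

# Learn set A exactly, then degrade the weights so recall of A is imperfect.
W = Y_A @ np.linalg.pinv(X_A)
W = W + rng.normal(size=W.shape)

mse_A = lambda W: np.mean((W @ X_A - Y_A) ** 2)
print("MSE on A after degradation:", mse_A(W))

# Train on the new, unrelated set B by gradient descent.
for _ in range(2000):
    W -= 1e-3 * (W @ X_B - Y_B) @ X_B.T

print("MSE on A after training on B:", mse_A(W))  # usually lower

In this linear toy model, gradient descent on B removes the component of the random weight damage that lies in the subspace spanned by B's input patterns, and in a high-dimensional weight space this repair usually outweighs the interference that fitting B introduces, consistent with the high-dimensionality argument above.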