Search results for G. Deco (1-3 of 3)
Journal Articles
Neural Computation (1998) 10 (8): 2085–2101.
Published: 15 November 1998
Abstract
This article provides a detailed and rigorous analysis of the two commonly used methods for redundancy reduction: linear independent component analysis (ICA), posed as a direct minimization of a suitably chosen redundancy measure, and information maximization (InfoMax) of a continuous stochastic signal transmitted through an appropriate nonlinear network. The article shows analytically that ICA based on the Kullback-Leibler information as a redundancy measure and InfoMax lead to the same solution if the parameterization of the output nonlinear functions in the latter method is sufficiently rich. Furthermore, this work discusses alternative redundancy measures that are not based on the Kullback-Leibler information distance. Practical issues of applying ICA and InfoMax are also discussed and illustrated on the problem of extracting statistically independent factors from a linear, pixel-by-pixel mixture of images.
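As a rough illustration of the InfoMax side of this equivalence, the sketch below implements the familiar natural-gradient InfoMax/ICA update, dW proportional to (I - E[phi(y) y^T]) W, in NumPy. It is not the article's derivation: the fixed tanh score function corresponds to one particular choice of output nonlinearity, whereas the article's result assumes a sufficiently rich parameterization of those nonlinearities. The function name, learning rate, and other defaults are illustrative assumptions.

```python
import numpy as np

def infomax_ica(X, n_iter=200, lr=0.01, seed=0):
    """Minimal InfoMax-style ICA sketch (natural-gradient rule).

    X: (n_samples, n_sources) whitened, zero-mean linear mixtures.
    Returns an unmixing matrix W such that Y = X @ W.T is
    (approximately) statistically independent.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = np.eye(n) + rng.normal(scale=0.1, size=(n, n))
    for _ in range(n_iter):
        Y = X @ W.T                      # current source estimates
        phi = np.tanh(Y)                 # score function (super-gaussian prior)
        # Natural-gradient InfoMax update: dW ~ (I - E[phi(y) y^T]) W
        grad = (np.eye(n) - (phi.T @ Y) / len(X)) @ W
        W += lr * grad
    return W
```

For the image experiment mentioned in the abstract, each row of X would be one pixel position across the mixed images; the rows of the recovered Y then approximate the original images up to scaling and permutation.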
Journal Articles
Neural Computation (1995) 7 (2): 338–348.
Published: 01 March 1995
Abstract
This paper presents a new learning paradigm that combines Hebbian and anti-Hebbian learning. A layer of radial basis functions is adapted in an unsupervised fashion by minimizing a two-element cost function. The first element maximizes the output of each gaussian neuron and can be seen as an implementation of the traditional Hebbian learning law. The second element of the cost function reinforces competitive learning by penalizing the correlation between the nodes. Consequently, the second term has an “anti-Hebbian” effect that is learned by the gaussian neurons without the implementation of lateral inhibition synapses. Therefore, decorrelated Hebbian learning (DHL) performs clustering in the input space while avoiding the “nonbiological” winner-take-all rule. In addition to the standard clustering problem, this paper also presents an application of DHL to function approximation. A scaled piecewise-linear approximation of a function is obtained in a supervised fashion within the local regions of its domain determined by the DHL. For comparison, a standard single-hidden-layer gaussian network is optimized with its initial centers set to those found by the DHL. The efficiency of the algorithm is demonstrated on the chaotic Mackey-Glass time series.
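The two-element cost described above can be sketched as follows, assuming a Hebbian term that rewards each gaussian unit's mean activation and an anti-Hebbian term that penalizes pairwise co-activation of different units. The paper's exact cost function and update rule may differ; `dhl_step`, `sigma`, `lam`, and the gradient form below are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def dhl_step(X, centers, sigma=1.0, lr=0.05, lam=0.5):
    """One gradient step on a DHL-style two-term cost (sketch).

    Cost: J = -sum_i mean_n g_i(x_n)                      (Hebbian term)
              + lam * sum_{i != j} mean_n g_i(x_n) g_j(x_n)  (anti-Hebbian term)
    with g_i(x) = exp(-||x - c_i||^2 / (2 sigma^2)).
    """
    d = X[:, None, :] - centers[None, :, :]              # (N, K, D) offsets
    G = np.exp(-np.sum(d ** 2, axis=-1) / (2 * sigma ** 2))  # (N, K) activations

    # dJ/dg_i: -1 from the Hebbian term, plus lam * sum_{j != i} g_j
    # from the decorrelation penalty (averaged over the batch).
    K = G.shape[1]
    cross = G @ (np.ones((K, K)) - np.eye(K))            # sum_{j != i} g_j per sample
    dJ_dG = (-1.0 + lam * cross) / len(X)

    # Chain rule through the gaussian: dg_i/dc_i = g_i * (x - c_i) / sigma^2
    grad_centers = np.einsum('nk,nkd->kd', dJ_dG * G, d) / (sigma ** 2)
    return centers - lr * grad_centers
```

Iterating `dhl_step` moves the centers toward dense regions of the input while the penalty keeps different units from responding to the same region, which is the clustering behavior the abstract attributes to DHL without a winner-take-all rule.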
Journal Articles
Neural Computation (1995) 7 (1): 86–107.
Published: 01 January 1995
Abstract
Controlling network complexity in order to prevent overfitting is one of the major problems encountered when using neural network models to extract structure from small data sets. In this paper we present a network architecture designed for use with a cost function that includes a novel complexity penalty term. In this architecture the outputs of the hidden units are strictly positive and sum to one, and they are interpreted as the probability that the current input belongs to a certain class formed during learning. The penalty term expresses the mutual information between the inputs and the extracted classes. This measure effectively describes the network complexity with respect to the given data in an unsupervised fashion. The efficiency of this architecture and penalty term, when combined with backpropagation training, is demonstrated on a real-world economic time-series forecasting problem. The model was also applied to the benchmark sunspot data and to a synthetic data set from the statistics community.
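The complexity penalty described here, the mutual information between the inputs and the extracted classes, can be estimated directly from the hidden layer's positive, sum-to-one outputs using the standard identity I(X; C) = H(C) - H(C|X). The sketch below assumes a uniform empirical distribution over the training inputs; the paper's exact estimator and weighting of the penalty may differ.

```python
import numpy as np

def mutual_information_penalty(class_probs, eps=1e-12):
    """Mutual information I(X; C) between inputs and extracted classes (sketch).

    class_probs: (n_samples, n_classes) array; each row is the hidden layer's
    positive, sum-to-one class-membership output for one input.
    With a uniform empirical distribution over the inputs,
        I(X; C) = H(mean_n p(.|x_n)) - mean_n H(p(.|x_n)).
    """
    p_mean = class_probs.mean(axis=0)                        # marginal class distribution
    h_marginal = -np.sum(p_mean * np.log(p_mean + eps))      # H(C)
    h_cond = -np.mean(np.sum(class_probs * np.log(class_probs + eps), axis=1))  # H(C|X)
    return h_marginal - h_cond
```

Adding this term to the training cost discourages the network from carving the data into more classes than the data support: the penalty is zero when the class assignments ignore the input and grows as the hidden layer extracts finer, input-dependent distinctions.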