Thorsteinn Rögnvaldsson
Journal Articles
Publisher: Journals Gateway
Neural Computation (1994) 6 (5): 916–926.
Published: 01 September 1994
Abstract
The Langevin updating rule, in which noise is added to the weights during learning, is presented and shown to improve learning on problems with initially ill-conditioned Hessians. This is particularly important for multilayer perceptrons with many hidden layers, which often have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect.
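The two updating rules named in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation; the learning rate and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(w, grad, lr=0.01, noise_std=0.01):
    """Langevin-style update: an ordinary gradient step plus Gaussian
    noise added to the weights (noise level is an illustrative choice)."""
    return w - lr * grad + noise_std * rng.normal(size=w.shape)

def manhattan_step(w, grad, lr=0.01):
    """Manhattan updating: each weight moves a fixed step in the
    direction given by the sign of its gradient component."""
    return w - lr * np.sign(grad)

# Demo on a simple quadratic loss f(w) = ||w||^2, whose gradient is 2w.
w = np.ones(3)
for _ in range(200):
    w = langevin_step(w, 2 * w, lr=0.05, noise_std=0.001)
```

Both rules ignore the raw gradient magnitude to some degree (noise swamps tiny curvature directions; the sign step removes magnitude entirely), which is one intuition for why they help when the Hessian is ill-conditioned.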
Neural Computation (1993) 5 (3): 483–491.
Published: 01 May 1993
Abstract
The discrimination powers of multilayer perceptron (MLP) and learning vector quantization (LVQ) networks are compared for overlapping Gaussian distributions. It is shown, both analytically and with Monte Carlo studies, that the MLP network handles high-dimensional problems more efficiently than LVQ. This is due mainly to the sigmoidal form of the MLP transfer function, but also to the fact that the MLP uses hyperplanes more efficiently. Both algorithms are equally robust to limited training sets, and the learning curves fall off like 1/M, where M is the training set size; this behavior is compared to theoretical predictions from statistical estimates and Vapnik-Chervonenkis bounds.
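For reference, the LVQ side of such a comparison is typically the LVQ1 prototype-updating scheme: the prototype nearest to a training sample moves toward it when their labels agree and away otherwise. A minimal sketch, with the prototypes, labels, and learning rate as illustrative assumptions rather than the paper's setup:

```python
import numpy as np

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: find the prototype nearest to sample x and move it
    toward x if its class label matches y, away from x otherwise."""
    i = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    direction = 1.0 if labels[i] == y else -1.0
    prototypes[i] += direction * lr * (x - prototypes[i])
    return prototypes

# Two prototypes, one per class, on a toy 2-D problem.
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = [0, 1]
protos = lvq1_step(protos, labels, np.array([0.2, 0.2]), y=0, lr=0.1)
```

The resulting decision boundary is piecewise linear (the prototypes' Voronoi cells), whereas an MLP's sigmoidal units let it place and combine hyperplanes more flexibly, which is the intuition behind the efficiency gap reported above.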