Search results for David Saad (1–2 of 2)
Journal Articles
Neural Computation (1997) 9 (7): 1601–1622.
Published: 10 July 1997
Abstract
An analytic investigation of the average-case learning and generalization properties of radial basis function (RBF) networks is presented, utilizing online gradient descent as the learning rule. The analytic method employed allows both the calculation of the generalization error and the examination of the internal dynamics of the network. The generalization error and internal dynamics are then used to examine the role of the learning rate and the specialization of the hidden units, which gives insight into decreasing the time required for training. The realizable and some overrealizable cases are studied in detail: the phase of learning in which the hidden units are unspecialized (the symmetric phase) and the phase in which asymptotic convergence occurs are analyzed, and their typical properties are found. Finally, simulations are performed that strongly confirm the analytic results.
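The learning rule described in this abstract is online (stochastic) gradient descent in a student-teacher setting: one fresh example per step, with the student's centers and output weights updated against a fixed teacher RBF network. The sketch below is an illustrative reconstruction of that rule, not the paper's code; the network sizes, Gaussian width sigma, learning rate eta, and Gaussian input distribution are all assumptions chosen for readability.

```python
# Minimal sketch: online gradient descent for an RBF network learning a
# fixed teacher RBF network. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 8, 4, 3              # input dimension, student units, teacher units
sigma, eta, steps = 1.0, 0.05, 20000

def rbf(x, centers, weights):
    """Output and hidden activations of an RBF network with Gaussian basis functions."""
    act = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))
    return weights @ act, act

# Fixed teacher defines the rule to be learned (realizable when K >= M).
t_centers = rng.normal(size=(M, N))
t_weights = rng.normal(size=M)

# Student starts from small random parameters (unspecialized hidden units).
s_centers = rng.normal(scale=0.1, size=(K, N))
s_weights = rng.normal(scale=0.1, size=K)

for _ in range(steps):
    x = rng.normal(size=N)                      # one fresh example per step
    y_teacher, _ = rbf(x, t_centers, t_weights)
    y_student, act = rbf(x, s_centers, s_weights)
    delta = y_student - y_teacher               # instantaneous error signal
    # Online gradient descent on the squared error 0.5 * delta**2.
    s_weights -= eta * delta * act
    s_centers -= eta * delta * (s_weights * act)[:, None] * (x - s_centers) / sigma ** 2

# Rough estimate of the generalization error on fresh test inputs.
test = rng.normal(size=(1000, N))
errs = [(rbf(x, s_centers, s_weights)[0] - rbf(x, t_centers, t_weights)[0]) ** 2
        for x in test]
print("estimated generalization error:", 0.5 * np.mean(errs))
```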
Journal Articles
Neural Computation (1996) 8 (1): 202–214.
Published: 01 January 1996
Abstract
The generalization error is a widely used performance measure employed in the analysis of adaptive learning systems. This measure is generally critically dependent on the knowledge that the system is given about the problem it is trying to learn. In this paper we examine to what extent it is necessarily the case that an increase in the knowledge that the system has about the problem will reduce the generalization error. Using the standard definition of the generalization error, we present simple cases for which the intuitive idea of “reducivity”—that more knowledge will improve generalization—does not hold. Under a simple approximation, however, we find conditions to satisfy “reducivity.” Finally, we calculate the effect of a specific constraint on the generalization error of the linear perceptron, in which the signs of the weight components are fixed. This particular restriction results in a significant improvement in generalization performance.
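The specific constraint named in this abstract, fixing the signs of the perceptron's weight components, can be illustrated with a small numerical experiment. The sketch below is an assumption-laden toy version, not the paper's analytic calculation: a linear perceptron is trained on a few noiseless examples by projected gradient descent, with and without the sign constraint, and the generalization error of the two solutions is compared. Dimensions, learning rate, and sample sizes are illustrative choices.

```python
# Minimal sketch: a linear perceptron with fixed weight signs, trained by
# projected gradient descent, compared against an unconstrained student.
import numpy as np

rng = np.random.default_rng(1)
N, P = 50, 30                        # input dimension, number of training examples
eta, epochs = 0.5, 2000

teacher = rng.normal(size=N)         # target weight vector
signs = np.sign(teacher)             # prior knowledge: the sign of each component
X = rng.normal(size=(P, N)) / np.sqrt(N)
y = X @ teacher                      # noiseless targets (underdetermined problem)

def train(constrained):
    w = rng.normal(scale=0.1, size=N)
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / P             # gradient of the squared training error
        w -= eta * grad
        if constrained:
            # Project onto the constraint set: components whose sign
            # disagrees with the prior are clipped to zero.
            w = np.where(np.sign(w) == signs, w, 0.0)
    return w

# Compare generalization error (mean squared error on fresh inputs)
# with and without the sign constraint.
X_test = rng.normal(size=(5000, N)) / np.sqrt(N)
for constrained in (False, True):
    w = train(constrained)
    err = np.mean((X_test @ (w - teacher)) ** 2)
    print(f"sign-constrained={constrained}: generalization error {err:.4f}")
```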