Yu He
Asymptotic Convergence of Backpropagation
Publisher: Journals Gateway
Neural Computation (1989) 1 (3): 382–391.
Published: 01 September 1989
Abstract
We calculate analytically the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. For networks without hidden units using the standard quadratic error function and a sigmoidal transfer function, we find that the error decreases as 1/t for large t, and the output states approach their target values as 1/√t. It is possible to obtain a different convergence rate for certain error and transfer functions, but the convergence can never be faster than 1/t. These results are unaffected by a momentum term in the learning algorithm, but convergence can be substantially improved by an adaptive learning rate scheme. For networks with hidden units, we generally expect the same rate of convergence to be obtained as in the single-layer case; however, under certain circumstances one can obtain a polynomial speed-up for nonsigmoidal units, or a logarithmic speed-up for sigmoidal units. Our analytic results are confirmed by empirical measurements of the convergence rate in numerical simulations.
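The 1/t decay described in the abstract can be checked numerically in the simplest setting it covers: a single sigmoidal unit trained by gradient descent on a quadratic error toward a target at the sigmoid's saturation limit. The sketch below is not the paper's original simulation code; the single-weight setup, learning rate, and step counts are illustrative assumptions. If the error behaves as E ∝ 1/t, the error at step t should be about twice the error at step 2t.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(steps, eta=0.5):
    """Gradient descent on E = 0.5 * (d - y)**2 for a single sigmoid
    unit y = sigmoid(w), with saturated target d = 1 (illustrative setup)."""
    w = 0.0
    errors = []
    for _ in range(steps):
        y = sigmoid(w)
        errors.append(0.5 * (1.0 - y) ** 2)
        # dE/dw = -(1 - y) * y * (1 - y); step in the downhill direction
        w += eta * y * (1.0 - y) ** 2
    return errors

errors = train(20000)
# If E ~ 1/t at long times, E(t) / E(2t) approaches 2
print(errors[9999] / errors[19999])
```

The target sits at the asymptote of the sigmoid, so it is approached but never reached; the weight grows only logarithmically while the error shrinks as 1/t, matching the asymptotic rate derived in the paper for this class of error and transfer functions.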