Supervised learning corresponds to minimizing a loss or cost function expressing the differences between model predictions y and the target values t given by the training data. In neural networks, this means backpropagating error signals through the transposed weight matrices from the output layer toward the input layer. For this, error signals in the output layer are typically initialized by the difference y − t, which is optimal for several commonly used loss functions like cross-entropy or sum of squared errors. Here I evaluate a more general error initialization method using power functions |y − t|^q for q > 0, corresponding to a new family of loss functions that generalize cross-entropy. Surprisingly, experiments on various learning tasks reveal that a proper choice of q can significantly improve the speed and convergence of backpropagation learning, in particular in deep and recurrent neural networks. The results suggest two main reasons for the observed improvements. First, compared to cross-entropy, the new loss functions provide better fits to the distribution of error signals in the output layer and therefore maximize the model's likelihood more efficiently. Second, the new error initialization procedure may often provide a better gradient-to-loss ratio over a broad range of neural output activity, thereby avoiding flat loss landscapes with vanishing gradients.
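The following minimal sketch illustrates the idea of power function error initialization for a softmax output layer: the standard backpropagation delta y − t is replaced by sign(y − t)·|y − t|^q before being propagated backward. The function names, the choice q = 2, and the plain gradient-descent update are illustrative assumptions for this sketch, not the paper's reference implementation; the exact form of the corresponding generalized loss family is given in the paper itself.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def output_error(y, t, q=1.0):
    """Output-layer error signal (delta) to be backpropagated.

    q == 1 reproduces the standard initialization y - t (optimal for
    cross-entropy with softmax or sum of squared errors with linear
    outputs). q != 1 applies a power-function initialization
    sign(y - t) * |y - t|**q as a sketch of the method described above.
    """
    d = y - t
    if q == 1.0:
        return d
    return np.sign(d) * np.abs(d) ** q

# Tiny usage example: one softmax layer, one gradient step.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 3))      # weights: 4 inputs -> 3 classes
x = rng.normal(size=(5, 4))                 # batch of 5 input vectors
t = np.eye(3)[rng.integers(0, 3, size=5)]   # one-hot targets

y = softmax(x @ W)                          # forward pass
delta = output_error(y, t, q=2.0)           # power-function error signal
grad_W = x.T @ delta / len(x)               # backprop: treat delta as dL/dz
W -= 0.5 * grad_W                           # plain gradient-descent update
```

With q = 1 the sketch reduces to ordinary softmax/cross-entropy backpropagation; everything below the output layer (propagation through the transposed weight matrices) is unchanged, since only the initialization of the output-layer error signal differs.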
Issue: August 2021
Published online: July 26 2021
Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification
Andreas Knoblauch
Albstadt-Sigmaringen University, Albstadt 72458, Germany [email protected]
Received:
October 11 2020
Accepted:
March 11 2021
Online ISSN: 1530-888X
Print ISSN: 0899-7667
© 2021 Massachusetts Institute of Technology
Neural Computation (2021) 33 (8): 2193–2225.
Citation
Andreas Knoblauch; Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification. Neural Comput 2021; 33 (8): 2193–2225. doi: https://doi.org/10.1162/neco_a_01407