Hung-Hsuan Chen
Associated Learning: Decomposing End-to-End Backpropagation Based on Autoencoders and Target Propagation
Neural Computation (2021) 33 (1): 174–193.
Published: 01 January 2021
Abstract
Backpropagation (BP) is the cornerstone of today's deep learning algorithms, but it is inefficient partly because of backward locking: updating the weights of one layer locks the weight updates in the other layers. Consequently, it is challenging to apply parallel computing or a pipeline structure to update the weights in different layers simultaneously. In this letter, we introduce a novel learning structure, associated learning (AL), which modularizes the network into smaller components, each with a local objective. Because the objectives are mutually independent, AL can learn the parameters in different layers independently and simultaneously, so it is feasible to apply a pipeline structure to improve the training throughput. Specifically, this pipeline structure improves the training time complexity from O(nℓ), the complexity when using BP and stochastic gradient descent (SGD) for training, to O(n + ℓ), where n is the number of training instances and ℓ is the number of hidden layers. Surprisingly, even though most of the parameters in AL do not directly interact with the target variable, training deep models by this method yields accuracies comparable to those of models trained with typical BP, in which all parameters are used to predict the target variable. Because of the scalability and the predictive power demonstrated in the experiments, AL deserves further study to determine better hyperparameter settings, such as activation function selection, learning rate scheduling, and weight initialization, and to accumulate experience, as has been done over the years with the typical BP method. In addition, perhaps our design can also inspire new network designs for deep learning. Our implementation is available at https://github.com/SamYWK/Associated_Learning.
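To make the layer-local-objective idea concrete, here is a minimal, hypothetical PyTorch sketch of two components trained with independent losses so that neither update waits on the other's backward pass. The toy shapes, the random projection `proj` standing in for the autoencoder/target-propagation targets, and the regression setup are all illustrative assumptions, not the paper's actual construction; the authors' implementation is in the linked repository.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two "components", each with its own parameters and its own local objective.
f1 = nn.Linear(8, 16)
f2 = nn.Linear(16, 4)
opt1 = torch.optim.SGD(f1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(f2.parameters(), lr=0.1)

x = torch.randn(32, 8)   # toy batch of 32 training instances
y = torch.randn(32, 4)   # toy regression targets

# Hypothetical local target for component 1: a fixed random projection of y,
# standing in for the autoencoder/target-propagation targets used in AL.
proj = torch.randn(4, 16)
t1 = y @ proj

h1 = f1(x)
loss1 = nn.functional.mse_loss(h1, t1)   # local objective of component 1

h2 = f2(h1.detach())                     # detach: no gradient flows back into f1
loss2 = nn.functional.mse_loss(h2, y)    # local objective of component 2

# The two objectives share no parameters, so the backward/update steps do not
# lock each other and could run concurrently or be arranged in a pipeline.
loss1.backward()
loss2.backward()
opt1.step()
opt2.step()
```

Under this reading, the O(n + ℓ) training time comes from pipelining: after a warm-up of roughly ℓ steps, every layer is busy updating on a different mini-batch at the same time, rather than each of the n instances paying an ℓ-layer sequential backward pass.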