Feifei Zhao
Journal Articles
Neural Computation (2022) 34 (1): 172–189.
Published: 01 January 2022
Abstract
Neural networks with a large number of parameters are prone to overfitting when trained on a relatively small training set. Introducing regularization through weight penalties is a promising technique for addressing this problem. Taking inspiration from the dynamic plasticity of dendritic spines, which plays an important role in the maintenance of memory, this letter proposes a brain-inspired developmental neural network based on dendritic spine dynamics (BDNN-dsd). The dynamic structural changes of dendritic spines include appearing, enlarging, shrinking, and disappearing. Such spine plasticity depends on synaptic activity and can be modulated by experience; in particular, long-term potentiation/depression (LTP/LTD) is coupled with synapse formation (or enlargement) and elimination (or shrinkage), respectively. Spine density therefore provides an approximate estimate of the total number of synapses between neurons. Motivated by this, we constrain each weight to a tunable bound that is adaptively modulated based on synaptic activity. This dynamic weight bound can suppress relatively redundant synapses while facilitating contributing ones. Extensive experiments demonstrate the effectiveness of our method on classification tasks of varying complexity on the MNIST, Fashion-MNIST, and CIFAR-10 data sets. Furthermore, compared to dropout and L2 regularization, our method improves both the network convergence rate and classification performance, even for a compact network.
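The abstract does not give the update rule, but the core idea (per-synapse weight bounds that grow for active synapses and shrink toward zero for inactive ones, mimicking spine enlargement and elimination) can be sketched as follows. This is a minimal illustration under stated assumptions, not the letter's actual algorithm: the class name `DynamicWeightBound`, the use of gradient magnitude as a proxy for synaptic activity, and the multiplicative grow/shrink factors are all assumptions for the sketch.

```python
import torch

class DynamicWeightBound:
    """Minimal sketch of a per-synapse dynamic weight bound (assumed rule,
    not the method from the letter). Each weight w_ij is clipped into
    [-b_ij, b_ij]; the bound b_ij acts like spine size, enlarging for
    active synapses (LTP-like) and shrinking for inactive ones (LTD-like)."""

    def __init__(self, weight, b_init=0.1, grow=1.05, shrink=0.95, b_max=1.0):
        self.bound = torch.full_like(weight, b_init)  # per-synapse bound b_ij
        self.grow, self.shrink, self.b_max = grow, shrink, b_max

    @torch.no_grad()
    def update(self, weight):
        # Gradient magnitude stands in for "synaptic activity" here;
        # this proxy is an assumption, not taken from the paper.
        activity = weight.grad.abs()
        active = activity > activity.mean()
        # LTP-like enlargement for active synapses, LTD-like shrinkage otherwise.
        self.bound = torch.where(active,
                                 (self.bound * self.grow).clamp(max=self.b_max),
                                 self.bound * self.shrink)
        # Clipping into a near-zero bound effectively eliminates the synapse,
        # analogous to spine disappearance.
        weight.clamp_(-self.bound, self.bound)
```

In training, `update` would be called on each layer's weight after every optimizer step, so that weights with persistently low activity are gradually clipped toward zero (spine shrinkage and disappearance) while contributing weights gain headroom, unlike a fixed L2 penalty, which shrinks all weights uniformly.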