This paper presents a new learning paradigm that combines Hebbian and anti-Hebbian learning. A layer of radial basis functions is adapted in an unsupervised fashion by minimizing a two-term cost function. The first term maximizes the output of each Gaussian neuron and can be seen as an implementation of the traditional Hebbian learning law. The second term reinforces competitive learning by penalizing the correlation between the nodes; it therefore has an “anti-Hebbian” effect that the Gaussian neurons learn without implementing lateral inhibition synapses. The resulting decorrelated Hebbian learning (DHL) rule performs clustering in the input space while avoiding the “nonbiological” winner-take-all rule. Beyond the standard clustering problem, the paper also applies DHL to function approximation: a scaled piecewise-linear approximation of a function is obtained in a supervised fashion within the local regions of its domain determined by DHL. For comparison, a standard single-hidden-layer Gaussian network is optimized with initial centers taken from DHL. The efficiency of the algorithm is demonstrated on the chaotic Mackey-Glass time series.
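As an illustration only, since the abstract does not give the paper’s exact cost function, the following minimal sketch assumes one plausible instantiation of DHL: a Hebbian term that rewards each Gaussian unit’s activation and an anti-Hebbian penalty on pairwise products of activations. The function name `dhl_step` and the hyperparameters `sigma`, `eta`, and `lam` are hypothetical, not from the paper.

```python
import numpy as np

def dhl_step(x, centers, sigma=1.0, eta=0.05, lam=0.5):
    """One assumed decorrelated-Hebbian update of RBF centers for input x.

    Sketch of a cost J = -sum_i y_i + lam * sum_{i != j} y_i y_j, where
    y_i is the i-th Gaussian activation: the first term plays the Hebbian
    role (maximize each unit's output), the second the anti-Hebbian
    decorrelation penalty. All hyperparameters are assumptions.
    """
    d = x - centers                                      # (k, n) offsets to each center
    y = np.exp(-np.sum(d**2, axis=1) / (2 * sigma**2))   # Gaussian outputs, shape (k,)

    # dJ/dy_i = -1 + 2*lam * sum_{j != i} y_j
    grad_y = -1.0 + 2.0 * lam * (y.sum() - y)

    # Chain rule through the Gaussian: dy_i/dc_i = y_i * (x - c_i) / sigma^2
    grad_c = (grad_y * y)[:, None] * d / sigma**2

    centers -= eta * grad_c                              # gradient descent on the cost
    return centers, y
```

Iterating this update over a training set would, under these assumptions, pull centers toward dense regions of the input while the decorrelation penalty discourages several centers from collapsing onto the same cluster, so no explicit winner-take-all selection is needed.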
