Jens Christian Claussen
Journal Articles
Neural Computation (2006) 18 (2): 446–469.
Published: 01 February 2006
Abstract
We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. Here, the concave-convex learning approach for SOM is extended to a more general description, whereas concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas the SOM results hold only in the one-dimensional case.
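For orientation, here is a minimal numpy sketch of one neural-gas learning step (the standard Martinetz-Schulten ranking rule) with a hook where the localized-learning control named in the abstract would enter. The function name ng_step and the parameters m and density are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def ng_step(weights, x, eps=0.05, lam=2.0, m=0.0, density=None):
    """One neural-gas update: every unit moves toward the input x with a
    step that decays exponentially in its distance rank. If a density
    estimate and exponent m are supplied, the step is scaled by
    density(x)**m, sketching the localized-learning idea of modulating
    the learning rate by the local data density (an assumed hook)."""
    dists = np.linalg.norm(weights - x, axis=1)   # distance of each unit to x
    ranks = np.argsort(np.argsort(dists))         # rank k_i = 0 for the winner
    step = eps * np.exp(-ranks / lam)             # rank-based neighborhood
    if density is not None:
        step = step * density(x) ** m             # assumed magnification hook
    return weights + step[:, None] * (x - weights)

# Usage: 20 codebook vectors adapting to uniform 2-D data.
rng = np.random.default_rng(0)
w = rng.random((20, 2))
for _ in range(10_000):
    w = ng_step(w, rng.random(2))
```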
Journal Articles
Neural Computation (2005) 17 (5): 996–1009.
Published: 01 May 2005
Abstract
A new family of self-organizing maps, the winner-relaxing Kohonen algorithm, is introduced as a generalization of a variant given by Kohonen in 1991. Its magnification behavior is calculated analytically. For the original variant, a magnification exponent of 4/7 is derived; the generalized version allows steering the magnification over the wide range from exponent 1/2 to 1 in the one-dimensional case, thus providing an optimal mapping in the sense of information theory. The winner-relaxing algorithm requires only minimal extra computation per learning step and is straightforward to implement.
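The abstract does not quote the learning rule itself. The sketch below assumes the winner-relaxing form Delta w_r = eps * [ h_rs (x - w_r) - lam * delta_rs * sum over r' != s of h_r's (x - w_r') ], in which lam = 0 gives the standard SOM and lam = 1/2 is taken to recover Kohonen's 1991 variant; the name wrsom_step and all parameter names are illustrative.

```python
import numpy as np

def wrsom_step(weights, x, eps=0.05, sigma=1.0, lam=0.5):
    """One winner-relaxing SOM step on a 1-D chain of units.
    lam = 0   : standard SOM (magnification exponent 2/3)
    lam = 0.5 : assumed to match Kohonen's 1991 variant (exponent 4/7)
    lam in [-1, 1] is assumed to steer the exponent between 1 and 1/2."""
    n = len(weights)
    s = int(np.argmin(np.abs(weights - x)))               # winner unit
    h = np.exp(-0.5 * ((np.arange(n) - s) / sigma) ** 2)  # chain neighborhood
    pull = h * (x - weights)                              # standard SOM pulls
    delta = eps * pull
    # winner-relaxing term: the winner alone is pushed back against
    # the summed pull exerted on all other units
    delta[s] -= eps * lam * (pull.sum() - pull[s])
    return weights + delta

# Usage: 50 units on [0, 1] with uniform stimuli.
rng = np.random.default_rng(1)
w = np.sort(rng.random(50))
for _ in range(50_000):
    w = wrsom_step(w, rng.random())
```

The only extra work per step beyond the standard SOM is one sum over the already-computed update terms, consistent with the abstract's claim of minimal extra computation.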