Karin Haese
Journal Articles
Publisher: Journals Gateway
Neural Computation (2001) 13 (3): 595–619.
Published: 01 March 2001
Abstract
An important technique for exploratory data analysis is to form a mapping from the high-dimensional data space to a low-dimensional representation space such that neighborhoods are preserved. A popular method for achieving this is Kohonen's self-organizing map (SOM) algorithm. In its original form, however, the user must choose the values of several parameters heuristically to achieve good performance. Here we present the Auto-SOM, an algorithm that estimates the learning parameters automatically during the training of SOMs. Auto-SOM makes it possible to avoid neighborhood violations up to a user-defined degree in either mapping direction. It consists of a Kalman filter implementation of the SOM coupled with a recursive parameter estimation method. The Kalman filter trains the neurons' weights with estimated learning coefficients so as to minimize the variance of the estimation error, while the recursive parameter estimation method estimates the width of the neighborhood function by minimizing the prediction error variance of the Kalman filter. In addition, the "topographic function" is incorporated to measure neighborhood violations and to prevent the map from converging to configurations that violate neighborhoods. Neighborhoods can thus be preserved in both mapping directions, as desired for dimension-reducing applications. The development of neighborhood-preserving maps and their convergence behavior are illustrated by three examples covering the basic applications of self-organizing feature maps.
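The core idea of replacing a hand-scheduled learning coefficient with a variance-minimizing Kalman gain can be illustrated with a scalar sketch. This is not the paper's exact Auto-SOM recursion; it treats each weight component as a state estimated from noisy observations, with assumed process and observation noise variances `Q` and `R`, so that the gain `K` (playing the role of the learning coefficient) decays automatically rather than by a hand-picked schedule.

```python
def kalman_lr_update(w, x, P, Q=1e-3, R=0.1):
    """One scalar-Kalman-style weight update (illustrative sketch only).

    w : current weight estimate
    x : new noisy observation of the target
    P : current estimation-error variance
    Q : assumed process-noise variance (hypothetical value)
    R : assumed observation-noise variance (hypothetical value)
    """
    P = P + Q            # predict: estimation-error variance grows
    K = P / (P + R)      # Kalman gain = variance-minimizing step size
    w = w + K * (x - w)  # correct the weight toward the observation
    P = (1 - K) * P      # reduced error variance after the correction
    return w, P, K
```

Repeated calls shrink `K` from near 1 toward a small steady-state value, mimicking the decaying learning coefficient that the original SOM requires the user to schedule by hand.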
Journal Articles
Publisher: Journals Gateway
Neural Computation (1999) 11 (5): 1211–1233.
Published: 01 July 1999
Abstract
The self-organizing learning algorithm of Kohonen and most of its extensions are controlled by two learning parameters, the learning coefficient and the width of the neighborhood function, which must be chosen empirically because neither rules nor methods for calculating them exist. Consequently, time-consuming parameter studies often precede the construction of neighborhood-preserving feature maps of the learning data. To circumvent these lengthy numerical studies, this article describes the learning process by a state-space model so that the linear Kalman filter algorithm can be used to train the feature maps. The Kalman filter equations then calculate the learning coefficient online during training, while the width of the neighborhood function is estimated by a second, extended Kalman filter for the process of neighborhood preservation. The performance of the Kalman filter implementation is demonstrated on toy problems as well as on a crab classification problem. The crab classification results are compared with those of generative topographic mapping, an alternative method to the self-organizing feature map.
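The two empirically chosen parameters the abstract refers to can be seen in a minimal sketch of the classic Kohonen algorithm: the learning coefficient `eta` and the neighborhood width `sigma` below both follow exponential decay schedules whose initial values and decay rates are picked by hand, which is exactly the tuning burden the Kalman filter formulation removes. The schedules and constants here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def train_som(data, grid_size=10, n_iter=1000, eta0=0.5, sigma0=3.0, seed=0):
    """Classic 1-D Kohonen SOM with hand-tuned decay schedules for the
    learning coefficient eta and the neighborhood width sigma."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_size, dim))  # neuron weight vectors
    positions = np.arange(grid_size)        # 1-D map coordinates
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # exponentially decaying schedules (chosen heuristically)
        eta = eta0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # winning neuron: closest weight vector to the input
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Gaussian neighborhood function centered on the winner
        h = np.exp(-((positions - winner) ** 2) / (2 * sigma**2))
        # move all weights toward x, scaled by eta and the neighborhood
        weights += eta * h[:, None] * (x - weights)
    return weights
```

Poorly chosen `eta0` or `sigma0` can leave the map twisted (neighborhood violations) or underfitted, which is why parameter studies are normally needed before a usable map is obtained.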