Abstract
The limiter function is used in many learning and retrieval models as the constraint controlling the magnitude of the weight or state vectors. In this paper, we develop a new method to relate the set of saturated fixed points to the set of system parameters of models that use the limiter function, and then, as a case study, apply this method to Linsker's Hebbian learning network. We derive a necessary and sufficient condition for testing whether a given saturated weight or state vector is stable for any given set of system parameters, and use this condition to determine the entire regime in the parameter space over which the given state is stable. This approach allows us to investigate the relative stability of the major receptive fields reported in Linsker's simulations and to demonstrate the crucial role played by the synaptic density functions.
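To make the role of the limiter concrete, the following is a minimal illustrative sketch (not the paper's model): a linear Hebbian update whose weights are clamped to a bounded interval after each step, so that sustained correlated input drives the weight vector toward a saturated fixed point at the bounds. The learning rate, bounds, and input statistics here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def limiter(w, w_min=-1.0, w_max=1.0):
    # Limiter function: clamp each weight into [w_min, w_max].
    return np.clip(w, w_min, w_max)

def hebbian_step(w, x, y, eta=0.1):
    # Plain Hebbian increment (proportional to pre- and post-synaptic
    # activity), followed by the limiter constraint.
    return limiter(w + eta * y * x)

rng = np.random.default_rng(0)
w = np.zeros(5)                      # initial weight vector
for _ in range(1000):
    x = rng.normal(size=5)           # presynaptic input sample
    y = x.sum()                      # linear unit's output (all-ones readout)
    w = hebbian_step(w, x, y)
# The positive drift E[y*x] pushes the weights against the upper bound,
# where they sit as a saturated state of the dynamics.
print(w)
```

Whether such a saturated state is actually stable under the dynamics is exactly the question the derived necessary and sufficient condition addresses.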