
David Brown: results 1-2 of 2


Journal Articles

Publisher: Journals Gateway

*Neural Computation* (2000) 12 (3): 671–692.

Published: 01 March 2000

Abstract

For the integrate-and-fire model with or without reversal potentials, we consider how correlated inputs affect the variability of cellular output. For both models, the variability of efferent spike trains, measured by the coefficient of variation (CV) of the interspike interval, is a nondecreasing function of input correlation. When the correlation coefficient is greater than 0.09, the CV of the integrate-and-fire model without reversal potentials is always above 0.5, no matter how strong the inhibitory inputs. When the correlation coefficient is greater than 0.05, the CV of the integrate-and-fire model with reversal potentials is always above 0.5, independent of the strength of the inhibitory inputs. Under a given condition on correlation coefficients, we find that correlated Poisson processes can be decomposed into independent Poisson processes. We also develop a novel method to estimate the distribution density of the first passage time of the integrate-and-fire model.
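The setup in this abstract can be sketched numerically: drive a leaky integrate-and-fire neuron with correlated Poisson inputs and measure the CV of its interspike intervals. This is a minimal illustration, not the paper's model; the shared-mother-process construction for correlation and all parameter values (rates, weight, threshold, time constant) are illustrative assumptions.

```python
import random
import statistics

def simulate_lif_cv(n_inputs=50, rate=20.0, correlation=0.2, weight=1.2,
                    threshold=20.0, tau=20.0, dt=0.5, t_max=20000.0, seed=1):
    """Drive a leaky integrate-and-fire neuron with correlated Poisson
    inputs and return (CV of interspike intervals, spike count).

    Correlation is induced by a shared "mother" Poisson process: each
    input copies a mother event with probability sqrt(correlation) and
    also receives private events, so every train keeps rate `rate` (Hz)
    while any pair has correlation coefficient ~ `correlation`.
    Times are in milliseconds.
    """
    rng = random.Random(seed)
    p_share = correlation ** 0.5
    p_mother = rate * dt / 1000.0                      # mother event prob/step
    p_private = rate * (1.0 - p_share) * dt / 1000.0   # private prob per input
    v, last, isis, t = 0.0, None, [], 0.0
    while t < t_max:
        n_events = 0
        if rng.random() < p_mother:                    # one shared event
            n_events += sum(rng.random() < p_share for _ in range(n_inputs))
        n_events += sum(rng.random() < p_private for _ in range(n_inputs))
        v += (-v / tau) * dt + weight * n_events       # leaky integration
        if v >= threshold:                             # spike and reset
            if last is not None:
                isis.append(t - last)
            last, v = t, 0.0
        t += dt
    mean = statistics.fmean(isis)
    return statistics.pstdev(isis) / mean, len(isis)
```

Rerunning with `correlation=0.0` versus larger values lets one check the abstract's qualitative claim that the output CV is nondecreasing in the input correlation.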

Journal Articles

Publisher: Journals Gateway

*Neural Computation* (1998) 10 (1): 189–213.

Published: 01 January 1998

Abstract

Nearly all models in neural networks start from the assumption that the input-output characteristic is a sigmoidal function. On parameter space, we present a systematic and feasible method for analyzing the whole spectrum of attractors—all-saturated, all-but-one-saturated, all-but-two-saturated, and so on—of a neurodynamical system with a saturated sigmoidal function as its input-output characteristic. We present an argument that, under a mild condition, only all-saturated or all-but-one-saturated attractors are observable for the neurodynamics. For any given all-saturated configuration (or all-but-one-saturated configuration), the article shows how to construct an exact parameter region such that the configuration is an attractor (a fixed point) of the dynamics if and only if the parameters fall within that region. The parameter region for an all-saturated fixed-point attractor is independent of the specific choice of a saturated sigmoidal function, whereas for an all-but-one-saturated fixed point, it is sensitive to the input-output characteristic. Based on a similar idea, the role of weight normalization realized by a saturated sigmoidal function in competitive learning is discussed. A necessary and sufficient condition is provided to distinguish two kinds of competitive learning: stable competitive learning, with the weight vectors representing extremes of input space and being fixed-point attractors, and unstable competitive learning. We apply our results to Linsker's model and (using extreme value theory in statistics) the Hopfield model and obtain some novel results on these two models.
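The all-saturated attractor described in this abstract can be illustrated with a toy two-unit network. The piecewise-linear saturating characteristic, the gain, and the weights below are illustrative choices, not taken from the paper; they merely show a trajectory landing on a fixed point with every unit at a saturation level.

```python
def sat_sigmoid(x, gain=4.0):
    # Piecewise-linear saturating characteristic on [-1, 1]
    # (one illustrative choice; the analysis allows any saturated sigmoid).
    return max(-1.0, min(1.0, gain * x))

def iterate(W, theta, x0, steps=100):
    """Discrete-time neurodynamics x(t+1) = sigma(W x(t) - theta)."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        x = [sat_sigmoid(sum(W[i][j] * x[j] for j in range(n)) - theta[i])
             for i in range(n)]
    return x

# Strong self-excitation pushes each unit to a saturation level, so the
# trajectory converges to an all-saturated fixed point.
W = [[2.0, 0.0], [0.0, 2.0]]
theta = [0.0, 0.0]
print(iterate(W, theta, [0.3, -0.3]))  # -> [1.0, -1.0]
```

Once a unit saturates here, its net input stays well past the linear range, so the configuration (+1, -1) is a fixed point; weakening the diagonal of `W` or changing `theta` moves the parameters out of the attractor's region.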