Jianfeng Feng
1–7 of 7 results
Journal Articles
Publisher: Journals Gateway
Neural Computation (2023) 35 (11): 1820–1849.
Published: 10 October 2023
Abstract
Neural activity in the brain exhibits correlated fluctuations that may strongly influence the properties of neural population coding. However, how such correlated neural fluctuations may arise from the intrinsic neural circuit dynamics and subsequently affect the computational properties of neural population activity remains poorly understood. The main difficulty lies in resolving the nonlinear coupling between correlated fluctuations and the overall dynamics of the system. In this study, we investigate the emergence of synergistic neural population codes from the intrinsic dynamics of correlated neural fluctuations in a neural circuit model capturing realistic nonlinear noise coupling of spiking neurons. We show that a rich repertoire of spatial correlation patterns naturally emerges in a bump attractor network and further reveal the dynamical regime under which the interplay between differential and noise correlations leads to synergistic codes. Moreover, we find that negative correlations may induce stable bound states between two bumps, a phenomenon previously unobserved in firing rate models. These noise-induced effects of bump attractors lead to a number of computational advantages, including enhanced working memory capacity and efficient spatiotemporal multiplexing, and can account for a range of cognitive and behavioral phenomena related to working memory. This study offers a dynamical approach to investigating realistic correlated neural fluctuations and insights into their roles in cortical computations.
Includes: Supplementary data
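The bump attractor dynamics described above can be illustrated with a minimal rate-based ring network. The sketch below is a generic bump attractor with additive Gaussian noise; all parameter values are illustrative assumptions, and it deliberately omits the realistic nonlinear (rate-dependent) noise coupling that the paper's spiking circuit model captures.

```python
import numpy as np

# Minimal rate-based ring (bump) attractor with additive noise.
# Illustrative parameters; the paper studies a spiking circuit whose
# noise is coupled nonlinearly to the rates, which this sketch omits.
N = 128
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
J0, J1 = -1.0, 3.0                         # uniform inhibition, tuned excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

dt, tau, T = 1.0, 10.0, 2000.0             # ms
sigma = 0.05                               # additive noise amplitude
rng = np.random.default_rng(0)

r = np.exp(-theta**2)                      # initial bump centered at 0 rad
for _ in range(int(T / dt)):
    drive = W @ r + sigma * rng.standard_normal(N)
    r += dt / tau * (-r + np.maximum(drive, 0.0))

# Population-vector readout of the bump position:
center = np.angle(np.sum(r * np.exp(1j * theta)))
print(f"bump center after {T:.0f} ms: {center:+.3f} rad")
```

A localized bump that persists under the noise is the signature of the attractor; in the paper's model, the noise additionally sculpts spatial correlation patterns across the bump.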
Journal Articles
Publisher: Journals Gateway
Neural Computation (2009) 21 (11): 3079–3105.
Published: 01 November 2009
Abstract
An expression for the probability distribution of the interspike interval of a leaky integrate-and-fire (LIF) model neuron is rigorously derived, based on recent theoretical developments in the theory of stochastic processes. This enables us, for the first time, to obtain maximum likelihood estimates (MLE) of the input information (e.g., afferent rate and variance) of an LIF neuron from a set of recorded spike trains. Dynamic inputs to pools of LIF neurons, both with and without interactions, are efficiently and reliably decoded by applying the MLE, even within time windows as short as 25 msec.
Includes: Supplementary data
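The paper's decoder rests on an exact ISI density for the leaky model. As a hedged stand-in, the sketch below uses the inverse Gaussian distribution, which is the exact first-passage-time density of the simpler non-leaky (perfect) integrator with drift, and recovers the input drift and noise amplitude from simulated ISIs via the closed-form maximum likelihood estimates; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a perfect (non-leaky) integrate-and-fire neuron,
#   dV = m dt + s dW,  spike and reset to 0 when V crosses theta.
# Its ISI density is the closed-form inverse Gaussian, standing in
# for the paper's exact (and more involved) leaky-model expression.
theta, m, s, dt = 1.0, 0.1, 0.25, 0.01
isis, v, t = [], 0.0, 0.0
while len(isis) < 1000:
    v += m * dt + s * np.sqrt(dt) * rng.standard_normal()
    t += dt
    if v >= theta:
        isis.append(t)
        v, t = 0.0, 0.0
x = np.asarray(isis)

# Closed-form MLEs of the inverse Gaussian parameters (mean mu, shape lam):
mu_hat = x.mean()
lam_hat = len(x) / np.sum(1.0 / x - 1.0 / mu_hat)

# Map back to the input parameters via  ISI ~ IG(theta/m, theta**2/s**2):
m_hat = theta / mu_hat
s_hat = np.sqrt(theta**2 / lam_hat)
print(f"true m={m}, s={s};  MLE m_hat={m_hat:.3f}, s_hat={s_hat:.3f}")
```

The estimates carry a small bias from the Euler discretization and threshold overshoot; shrinking dt tightens them.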
Journal Articles
Publisher: Journals Gateway
Neural Computation (2002) 14 (3): 621–640.
Published: 01 March 2002
Abstract
What is the difference between the efferent spike train of a neuron with a large soma and that of a neuron with a small soma? We propose an analytical method, the decoupling approach, to tackle the problem. Two limiting cases, in which the soma is much smaller than the dendrite or vice versa, are investigated theoretically. For both the two-compartment integrate-and-fire model and the Pinsky-Rinzel model, we show, both theoretically and numerically, that the smaller the soma is, the faster and the more irregularly the neuron fires. We further conclude, on the basis of numerical simulations, that cells falling between the two limiting cases form a continuum with respect to their firing properties (mean firing time and coefficient of variation of interspike intervals).
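A rough numerical illustration of the soma-size effect: the sketch below simulates a generic two-compartment integrate-and-fire neuron in which a noisy dendrite drives a passive soma of capacitance C_s (taken as a proxy for soma size) and reports firing rate and ISI coefficient of variation for a small and a large soma. The model and all parameter values are illustrative assumptions, not the paper's decoupling construction.

```python
import numpy as np

def simulate(C_s, T=50000.0, dt=0.1, seed=0):
    """Two-compartment IF neuron: a noisy dendrite drives a passive soma
    of capacitance C_s (proxy for soma size); somatic spike-and-reset.
    All units and values are illustrative."""
    rng = np.random.default_rng(seed)
    C_d, g_c, g_L = 1.0, 0.1, 0.05       # dendritic cap., coupling, leak
    mu, sigma, v_th = 0.08, 0.5, 1.0     # dendritic drive and threshold
    v_s = v_d = 0.0
    isis, t_last = [], 0.0
    for step in range(int(T / dt)):
        i_c = g_c * (v_d - v_s)          # current flowing dendrite -> soma
        v_d += (dt / C_d * (mu - g_L * v_d - i_c)
                + np.sqrt(dt) / C_d * sigma * rng.standard_normal())
        v_s += dt / C_s * (i_c - g_L * v_s)
        if v_s >= v_th:                  # somatic spike and reset
            t = step * dt
            isis.append(t - t_last)
            t_last, v_s = t, 0.0
    isis = np.asarray(isis[1:])
    return 1000.0 / isis.mean(), isis.std() / isis.mean()

for C_s in (0.2, 1.0):                   # small soma vs. large soma
    rate, cv = simulate(C_s)
    print(f"C_s = {C_s}: rate ~ {rate:.1f} Hz, ISI CV = {cv:.2f}")
```

A smaller C_s lets the soma track the dendritic fluctuations more quickly, which is the intuition behind the faster, more irregular firing reported in the abstract.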
Journal Articles
Publisher: Journals Gateway
Neural Computation (2000) 12 (3): 671–692.
Published: 01 March 2000
Abstract
For the integrate-and-fire model with or without reversal potentials, we consider how correlated inputs affect the variability of cellular output. For both models, the variability of efferent spike trains, measured by the coefficient of variation (CV) of the interspike interval, is a nondecreasing function of input correlation. When the correlation coefficient is greater than 0.09, the CV of the integrate-and-fire model without reversal potentials is always above 0.5, no matter how strong the inhibitory inputs. When the correlation coefficient is greater than 0.05, the CV of the integrate-and-fire model with reversal potentials is always above 0.5, independent of the strength of the inhibitory inputs. Under a given condition on correlation coefficients, we find that correlated Poisson processes can be decomposed into independent Poisson processes. We also develop a novel method to estimate the distribution density of the first passage time of the integrate-and-fire model.
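One standard construction consistent with the decomposition mentioned in the abstract builds correlated Poisson trains by thinning a common mother process: each train keeps every mother event independently with probability c, so each train is Poisson at the target rate with pairwise count correlation c. The sketch below feeds such trains to a simple event-driven LIF neuron (no reversal potentials) and reports the output CV; thresholds, weights, and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def correlated_poisson(n_trains, rate, c, T):
    """Thinning construction: a mother Poisson process of rate rate/c is
    thinned independently with probability c per train, yielding Poisson
    trains at the target rate with pairwise count correlation c."""
    mother = np.cumsum(rng.exponential(c / rate, size=int(2 * rate * T / c)))
    mother = mother[mother < T]
    return [mother[rng.random(mother.size) < c] for _ in range(n_trains)]

def lif_cv(c, n_exc=100, rate=20.0, T=50.0):
    """CV of an event-driven LIF neuron fed n_exc correlated trains."""
    times = np.sort(np.concatenate(correlated_poisson(n_exc, rate, c, T)))
    tau, v_th, w = 0.02, 1.0, 0.05        # membrane tau (s), threshold, EPSP
    v, t_prev, last_spike, isis = 0.0, 0.0, 0.0, []
    for t in times:
        v *= np.exp(-(t - t_prev) / tau)  # leak between input events
        v += w
        t_prev = t
        if v >= v_th:
            isis.append(t - last_spike)
            last_spike, v = t, 0.0
    isis = np.asarray(isis)
    return isis.std() / isis.mean()

for c in (0.01, 0.05, 0.2):
    print(f"input correlation {c}: output CV ~ {lif_cv(c):.2f}")
```

Raising c should raise the output CV, consistent with the nondecreasing relation stated in the abstract.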
Journal Articles
Publisher: Journals Gateway
Neural Computation (1998) 10 (1): 189–213.
Published: 01 January 1998
Abstract
Nearly all models in neural networks start from the assumption that the input-output characteristic is a sigmoidal function. In parameter space, we present a systematic and feasible method for analyzing the whole spectrum of attractors (all-saturated, all-but-one-saturated, all-but-two-saturated, and so on) of a neurodynamical system with a saturated sigmoidal function as its input-output characteristic. We argue that, under a mild condition, only all-saturated or all-but-one-saturated attractors are observable in the neurodynamics. For any given all-saturated configuration ξ (all-but-one-saturated configuration ξ′), the article shows how to construct an exact parameter region R(ξ) (R(ξ′)) such that ξ (ξ′) is an attractor (a fixed point) of the dynamics if and only if the parameters fall within R(ξ) (R(ξ′)). The parameter region for an all-saturated fixed-point attractor is independent of the specific choice of saturated sigmoidal function, whereas for an all-but-one-saturated fixed point, it is sensitive to the input-output characteristic. Based on a similar idea, the role of weight normalization realized by a saturated sigmoidal function in competitive learning is discussed. A necessary and sufficient condition is provided to distinguish two kinds of competitive learning: stable competitive learning, in which the weight vectors represent extremes of the input space and are fixed-point attractors, and unstable competitive learning. We apply our results to Linsker's model and (using extreme value theory in statistics) the Hopfield model and obtain some novel results on these two models.
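For the special case of the piecewise-linear limiter f(u) = max(-1, min(1, u)), the parameter region has a particularly simple form: a configuration ξ in {-1, +1}^n is a fixed point of x -> f(Wx + h) if and only if ξ_i(Wξ + h)_i >= 1 for every i, and a strict inequality makes it locally attracting because f is flat beyond saturation. The sketch below checks this condition numerically; it is a simplified special case, not the paper's general region R(ξ).

```python
import numpy as np

def limiter(u):
    """Piecewise-linear saturated sigmoid f(u) = max(-1, min(1, u))."""
    return np.clip(u, -1.0, 1.0)

def is_saturated_attractor(W, h, xi, margin=0.0):
    """For f = limiter, xi in {-1,+1}^n is a fixed point of
    x -> f(W @ x + h) iff xi_i * (W @ xi + h)_i >= 1 for all i;
    a strictly positive margin makes it locally attracting,
    since f is flat beyond saturation."""
    return bool(np.all(xi * (W @ xi + h) >= 1.0 + margin))

rng = np.random.default_rng(3)
n = 8
xi = rng.choice([-1.0, 1.0], size=n)

# Hebbian-style outer product plus a small bias aligned with xi:
W = np.outer(xi, xi) / 2.0
h = 0.1 * xi

print("fixed point:", is_saturated_attractor(W, h, xi))
print("attractor  :", is_saturated_attractor(W, h, xi, margin=1e-9))

# Sanity check: iterate the dynamics from a perturbed start.
x = xi + 0.3 * rng.standard_normal(n)
for _ in range(50):
    x = limiter(W @ x + h)
print("converged back to xi:", np.allclose(x, xi))
```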
Journal Articles
Publisher: Journals Gateway
Neural Computation (1997) 9 (1): 43–49.
Published: 01 January 1997
Abstract
I construct Lyapunov functions for the asynchronous dynamics and the synchronous dynamics of neural networks with nondifferentiable input-output characteristics.
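A classic concrete instance: for symmetric couplings with zero diagonal and the nondifferentiable characteristic sign(u), the Hopfield energy E(x) = -(1/2) x'Wx - h'x is nonincreasing under asynchronous updates. The script below verifies this numerically; it illustrates the standard Hopfield Lyapunov function rather than the paper's more general construction.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0                      # symmetric couplings
np.fill_diagonal(W, 0.0)                 # zero diagonal
h = rng.standard_normal(n)

def energy(x):
    """Hopfield energy: a Lyapunov function for asynchronous sign updates."""
    return -0.5 * x @ W @ x - h @ x

x = rng.choice([-1.0, 1.0], size=n)
E = energy(x)
for _ in range(500):
    i = rng.integers(n)                  # asynchronous: one unit at a time
    u = W[i] @ x + h[i]
    x[i] = 1.0 if u >= 0 else -1.0       # nondifferentiable sign update
    E_new = energy(x)
    assert E_new <= E + 1e-12, "energy increased!"
    E = E_new
print("energy decreased monotonically to", round(E, 3))
```

Flipping unit i changes the energy by -(x_i_new - x_i_old) * u, which is never positive under the sign rule; the zero diagonal and symmetry of W are what make this bookkeeping exact.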
Journal Articles
Publisher: Journals Gateway
Neural Computation (1996) 8 (5): 1003–1019.
Published: 01 July 1996
Abstract
The limiter function is used in many learning and retrieval models as the constraint controlling the magnitude of the weight or state vectors. In this paper, we develop a new method to relate the set of saturated fixed points to the set of system parameters of models that use the limiter function and then, as a case study, apply this method to Linsker's Hebbian learning network. We derive a necessary and sufficient condition to test whether a given saturated weight or state vector is stable for any given set of system parameters and use this condition to determine the whole regime in parameter space over which the given state is stable. This approach allows us to investigate the relative stability of the major receptive fields reported in Linsker's simulations and to demonstrate the crucial role played by the synaptic density functions.
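One natural form of such a stability test, sketched below under simplifying assumptions: for the constrained Hebbian flow dw/dt = Qw + k with weights clipped to [-1, 1], a weight vector saturated at the limits is stable when every component of the unconstrained derivative pushes it further into saturation, that is, w_i(Qw + k)_i > 0 for all i. The matrix Q and offset k here are random illustrative stand-ins, not Linsker's synaptic density setup.

```python
import numpy as np
from itertools import product

def saturated_corner_stable(Q, k, w):
    """Stability test for a weight vector saturated at +/-1 under the
    limited Hebbian flow  dw/dt = Q @ w + k  with w clipped to [-1, 1]:
    the corner w is stable when every component of the unconstrained
    derivative pushes w further into its limit, w_i * (Q @ w + k)_i > 0."""
    return bool(np.all(w * (Q @ w + k) > 0.0))

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
Q = A @ A.T / n                # illustrative positive-definite "covariance"
k = np.zeros(n)                # illustrative Hebbian offset

# Enumerate every corner of the hypercube and count the stable ones:
stable = [c for c in product([-1.0, 1.0], repeat=n)
          if saturated_corner_stable(Q, k, np.asarray(c))]
print(f"{len(stable)} of {2**n} saturated corners are stable")
```

Sweeping Q or k and rerunning the enumeration traces out the parameter regime over which a given saturated state stays stable, in the spirit of the analysis described above.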