Jianfeng Feng
1–8 of 8 results
Journal Articles
Toward a Free-Response Paradigm of Decision Making in Spiking Neural Networks
Publisher: Journals Gateway
Neural Computation (2025) 37 (3): 481–521.
Published: 14 February 2025
Abstract
Spiking neural networks (SNNs) have attracted significant interest in the development of brain-inspired computing systems due to their energy efficiency and similarities to biological information processing. In contrast to continuous-valued artificial neural networks, which produce results in a single step, SNNs require multiple steps during inference to achieve a desired accuracy level, which burdens both real-time response and energy efficiency. Human and animal decision making exhibits a tradeoff between speed and accuracy, with correlations among reaction time, task complexity, and decision confidence; this raises the question of how an SNN model can benefit from implementing these attributes. Here, we introduce a theory of decision making in SNNs by untangling the interplay between signal and noise. Under this theory, we introduce a new learning objective that trains an SNN not only to make correct decisions but also to shape its confidence. Numerical experiments demonstrate that SNNs trained in this way exhibit improved confidence expression, reduced trial-to-trial variability, and shorter latency to reach the desired accuracy. We then introduce a stopping policy that can halt inference in a way that further enhances the time efficiency of SNNs. The stopping time can serve as an indicator of whether a decision is correct, akin to the reaction time in animal behavior experiments. By integrating stochasticity into decision making, this study opens up new possibilities for exploring the capabilities of SNNs and advancing their applications in complex decision-making scenarios where model performance is limited.
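A stopping policy of the kind the abstract describes maps naturally onto a confidence-threshold rule over accumulated output spikes. The following is a minimal, hypothetical sketch of such a rule, not the paper's implementation: the step_snn stub, the threshold value, and all variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def step_snn(x):
    """Stub for one timestep of a trained spiking network: returns
    output-layer spikes. A real SNN would propagate x through
    spiking layers; here, noisy thresholded drive stands in."""
    return (x + rng.normal(0.0, 1.0, size=x.shape) > 1.0).astype(float)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def infer_with_stopping(x, conf_threshold=0.9, max_steps=100):
    """Accumulate output spikes over time and stop once the softmax
    confidence of the running spike count crosses the threshold.
    The stopping time plays the role of a reaction time."""
    counts = np.zeros_like(x)
    for t in range(1, max_steps + 1):
        counts += step_snn(x)
        p = softmax(counts)
        if p.max() >= conf_threshold:
            return int(p.argmax()), t, float(p.max())
    p = softmax(counts)
    return int(p.argmax()), max_steps, float(p.max())

x = np.array([0.2, 0.5, 1.5, 0.1])   # class 2 carries the strongest drive
decision, rt, conf = infer_with_stopping(x)
print(f"decision={decision}, stopping time={rt}, confidence={conf:.2f}")

Easy inputs should drive the spike counts apart quickly, yielding short stopping times; ambiguous inputs take longer and end at lower confidence, mirroring the reaction-time effects the abstract describes.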
Journal Articles
Self-Organization of Nonlinearly Coupled Neural Fluctuations Into Synergistic Population Codes
Open Access
Publisher: Journals Gateway
Neural Computation (2023) 35 (11): 1820–1849.
Published: 10 October 2023
Abstract
Neural activity in the brain exhibits correlated fluctuations that may strongly influence the properties of neural population coding. However, how such correlated neural fluctuations may arise from the intrinsic neural circuit dynamics and subsequently affect the computational properties of neural population activity remains poorly understood. The main difficulty lies in resolving the nonlinear coupling between correlated fluctuations and the overall dynamics of the system. In this study, we investigate the emergence of synergistic neural population codes from the intrinsic dynamics of correlated neural fluctuations in a neural circuit model capturing realistic nonlinear noise coupling of spiking neurons. We show that a rich repertoire of spatial correlation patterns naturally emerges in a bump attractor network and further reveal the dynamical regime under which the interplay between differential and noise correlations leads to synergistic codes. Moreover, we find that negative correlations may induce stable bound states between two bumps, a phenomenon previously unobserved in firing rate models. These noise-induced effects of bump attractors lead to a number of computational advantages, including enhanced working memory capacity and efficient spatiotemporal multiplexing, and can account for a range of cognitive and behavioral phenomena related to working memory. This study offers a dynamical approach to investigating realistic correlated neural fluctuations and insights into their roles in cortical computations.
Includes: Supplementary data
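A toy rate-based ring network with additive noise gives a minimal picture of the bump dynamics the abstract builds on; the paper's model is spiking with nonlinear (rate-dependent) noise coupling, which this sketch omits. Connectivity and gain values below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
N, dt, tau = 128, 0.05, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
# Translation-invariant connectivity: broad inhibition + local excitation.
W = (-2.0 + 6.0 * np.cos(theta[:, None] - theta[None, :])) / N

r = np.maximum(rng.normal(0.0, 0.1, N), 0.0)     # initial rates
for _ in range(4000):
    drive = W @ r + 1.0                          # recurrent + uniform input
    r = r + dt * (-r + np.maximum(drive, 0.0)) / tau \
          + np.sqrt(dt) * 0.02 * rng.normal(size=N)
    r = np.maximum(r, 0.0)                       # rates stay nonnegative

# A localized bump of activity should have self-organized by now.
print("peak angle:", theta[np.argmax(r)],
      " active fraction:", np.mean(r > 0.05))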
Journal Articles
Maximum Likelihood Decoding of Neuronal Inputs from an Interspike Interval Distribution
Publisher: Journals Gateway
Neural Computation (2009) 21 (11): 3079–3105.
Published: 01 November 2009
Abstract
An expression for the probability distribution of the interspike interval of a leaky integrate-and-fire (LIF) model neuron is rigorously derived, based on recent theoretical developments in the theory of stochastic processes. This enables us, for the first time, to develop maximum likelihood estimates (MLEs) of the input information (e.g., afferent rate and variance) of an LIF neuron from a set of recorded spike trains. Dynamic inputs to pools of LIF neurons, both with and without interactions, are efficiently and reliably decoded by applying the MLE, even within time windows as short as 25 msec.
Includes: Supplementary data
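The MLE recipe can be illustrated in the leak-free limit of the model, where the first-passage-time (ISI) density has the closed inverse Gaussian form; the paper's own derivation covers the full LIF case. The parameter values and names below are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import invgauss

theta = 1.0   # firing threshold; membrane resets to 0 after a spike
rng = np.random.default_rng(2)

def simulate_isis(mu, sigma, n=300, dt=1e-3):
    """Euler simulation of dV = mu dt + sigma dW until V >= theta."""
    isis = []
    for _ in range(n):
        v, t = 0.0, 0.0
        while v < theta:
            v += mu * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        isis.append(t)
    return np.array(isis)

def neg_log_lik(params, isis):
    mu, sigma = params
    if mu <= 0 or sigma <= 0:
        return np.inf
    # FPT of drifted Brownian motion is inverse Gaussian with
    # mean theta/mu and shape theta^2/sigma^2 (scipy parameterization).
    lam = theta**2 / sigma**2
    m = theta / mu
    return -invgauss.logpdf(isis, m / lam, scale=lam).sum()

isis = simulate_isis(mu=0.5, sigma=0.3)
fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(isis,),
               method="Nelder-Mead")
print("MLE (mu, sigma):", fit.x)   # should land near (0.5, 0.3)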
Journal Articles
Impact of Geometrical Structures on the Output of Neuronal Models: A Theoretical and Numerical Analysis
Unavailable
Publisher: Journals Gateway
Neural Computation (2002) 14 (3): 621–640.
Published: 01 March 2002
Abstract
What is the difference between the efferent spike train of a neuron with a large soma and that of a neuron with a small soma? We propose an analytical method, called the decoupling approach, to tackle the problem. Two limiting cases, in which the soma is much smaller than the dendrite or vice versa, are theoretically investigated. For both the two-compartment integrate-and-fire model and the Pinsky-Rinzel model, we show, both theoretically and numerically, that the smaller the soma is, the faster and the more irregularly the neuron fires. We further conclude, on the basis of numerical simulations, that cells falling in between the two limiting cases form a continuum with respect to their firing properties (mean firing time and coefficient of variation of interspike intervals).
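The soma-size effect can be probed numerically with a bare-bones two-compartment integrate-and-fire model: somatic capacitance stands in for soma size, and mean ISI and CV are compared across sizes. All parameter values here are illustrative assumptions, not the paper's.

import numpy as np

rng = np.random.default_rng(3)

def simulate(c_soma, n_spikes=100, dt=1e-3):
    """Soma (vs) and dendrite (vd) coupled by conductance g_c;
    noisy synaptic input arrives on the dendrite."""
    g_l, g_c, c_dend = 1.0, 5.0, 1.0
    mu, sigma, v_th = 3.0, 2.0, 1.0
    vs, vd = 0.0, 0.0
    isis, t, t_last = [], 0.0, 0.0
    while len(isis) < n_spikes:
        dvd = (-g_l * vd + g_c * (vs - vd) + mu) * dt \
              + sigma * np.sqrt(dt) * rng.normal()
        dvs = (-g_l * vs + g_c * (vd - vs)) * dt
        vd += dvd / c_dend
        vs += dvs / c_soma
        t += dt
        if vs >= v_th:
            isis.append(t - t_last)
            t_last, vs = t, 0.0       # spike and reset the soma
    isis = np.array(isis)
    return isis.mean(), isis.std() / isis.mean()

for c_soma in (0.1, 1.0):             # small vs. large soma
    mean_isi, cv = simulate(c_soma)
    print(f"c_soma={c_soma}: mean ISI={mean_isi:.3f}s, CV={cv:.2f}")

A small somatic capacitance lets the soma track the noisy dendritic voltage closely, so it should fire faster and more irregularly, in line with the abstract's conclusion.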
Journal Articles
Impact of Correlated Inputs on the Output of the Integrate-and-Fire Model
Publisher: Journals Gateway
Neural Computation (2000) 12 (3): 671–692.
Published: 01 March 2000
Abstract
For the integrate-and-fire model with or without reversal potentials, we consider how correlated inputs affect the variability of cellular output. For both models, the variability of efferent spike trains, measured by the coefficient of variation (CV) of the interspike interval, is a nondecreasing function of input correlation. When the correlation coefficient is greater than 0.09, the CV of the integrate-and-fire model without reversal potentials is always above 0.5, no matter how strong the inhibitory inputs. When the correlation coefficient is greater than 0.05, the CV of the integrate-and-fire model with reversal potentials is always above 0.5, independent of the strength of the inhibitory inputs. Under a given condition on the correlation coefficients, we find that correlated Poisson processes can be decomposed into independent Poisson processes. We also develop a novel method to estimate the distribution density of the first passage time of the integrate-and-fire model.
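Correlated Poisson inputs of the kind the abstract analyzes can be built from independent pieces: each input copies spikes from a shared mother train with some probability and fires independently otherwise, which is one concrete instance of the decomposition mentioned above. The sketch below drives a leaky integrate-and-fire neuron this way and reports the output CV; rates and constants are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)

def output_cv(c, rate=20.0, n_inputs=100, a=0.02, leak=20.0,
              v_th=1.0, dt=1e-4, n_steps=200_000):
    """Leaky IF neuron driven by n_inputs excitatory Poisson trains
    with pairwise count correlation ~ c; returns the CV of its ISIs."""
    p = np.sqrt(c)        # copy probability from the mother train
    v, t, t_last, isis = 0.0, 0.0, 0.0, []
    for _ in range(n_steps):
        mother = rng.random() < rate * dt
        n_shared = int(mother) * int((rng.random(n_inputs) < p).sum())
        n_own = int((rng.random(n_inputs) < rate * (1 - p) * dt).sum())
        v += a * (n_shared + n_own) - leak * v * dt
        t += dt
        if v >= v_th:
            isis.append(t - t_last)
            t_last, v = t, 0.0
    isis = np.array(isis)
    return isis.std() / isis.mean()

for c in (0.0, 0.1, 0.3):
    print(f"c={c}: CV={output_cv(c):.2f}")

Each input fires at the same total rate for every c (shared plus independent parts), so increases in the output CV reflect the input correlation alone, matching the nondecreasing trend the abstract reports.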
Journal Articles
Fixed-Point Attractor Analysis for a Class of Neurodynamics
Publisher: Journals Gateway
Neural Computation (1998) 10 (1): 189–213.
Published: 01 January 1998
Abstract
Nearly all models in neural networks start from the assumption that the input-output characteristic is a sigmoidal function. On parameter space, we present a systematic and feasible method for analyzing the whole spectrum of attractors (all-saturated, all-but-one-saturated, all-but-two-saturated, and so on) of a neurodynamical system with a saturated sigmoidal function as its input-output characteristic. We present an argument that claims, under a mild condition, that only all-saturated or all-but-one-saturated attractors are observable for the neurodynamics. For any given all-saturated configuration v* (all-but-one-saturated configuration v**), the article shows how to construct an exact parameter region R(v*) (R(v**)) such that v* (v**) is an attractor (a fixed point) of the dynamics if and only if the parameters fall within R(v*) (R(v**)). The parameter region for an all-saturated fixed-point attractor is independent of the specific choice of saturated sigmoidal function, whereas for an all-but-one-saturated fixed point it is sensitive to the input-output characteristic. Based on a similar idea, the role of weight normalization realized by a saturated sigmoidal function in competitive learning is discussed. A necessary and sufficient condition is provided to distinguish two kinds of competitive learning: stable competitive learning, with the weight vectors representing extremes of the input space and being fixed-point attractors, and unstable competitive learning. We apply our results to Linsker's model and, using extreme value theory in statistics, to the Hopfield model, and obtain some novel results on these two models.
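For one concrete saturated sigmoid, the limiter f(u) = clip(u, -1, 1), the all-saturated condition is easy to check numerically: a configuration v in {-1, +1}^n is a fixed point of x(t+1) = f(W x(t) + h) iff v_i (W v + h)_i >= 1 for every i, and strict inequality makes the map locally constant around v, hence v an attractor. The sketch below tests this; W, h, and the symbols follow the placeholder notation above and are illustrative assumptions, not the paper's construction.

import numpy as np

def limiter(u):
    return np.clip(u, -1.0, 1.0)

def saturation_margins(W, h, v):
    """Componentwise margins v_i * (W v + h)_i - 1. All margins > 0
    places the parameters inside the region R(v), making v an
    attractor of x(t+1) = limiter(W x(t) + h)."""
    return v * (W @ v + h) - 1.0

rng = np.random.default_rng(5)
n = 6
v = rng.choice([-1.0, 1.0], size=n)      # candidate all-saturated state
W = 1.5 * np.outer(v, v) / n + 0.1 * rng.normal(size=(n, n))
h = np.zeros(n)

margins = saturation_margins(W, h, v)
print("margins:", np.round(margins, 2))
if (margins > 0).all():
    # A perturbed state maps straight back to v in one update,
    # because the map is locally constant on the saturated region.
    x = limiter(W @ (v + 0.05 * rng.normal(size=n)) + h)
    print("max |x - v| after one update:", np.abs(x - v).max())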
Journal Articles
Lyapunov Functions for Neural Nets with Nondifferentiable Input-Output Characteristics
Publisher: Journals Gateway
Neural Computation (1997) 9 (1): 43–49.
Published: 01 January 1997
Abstract
I construct Lyapunov functions for the asynchronous and synchronous dynamics of neural networks with nondifferentiable input-output characteristics.
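The classical Hopfield case is the simplest instance of such a construction: for symmetric W with zero diagonal and the (nondifferentiable) sign nonlinearity, the energy E(x) = -x'Wx/2 - h'x never increases under asynchronous updates. The check below verifies this empirically; the matrices are random illustrations, not the paper's networks.

import numpy as np

rng = np.random.default_rng(6)
n = 20
A = rng.normal(size=(n, n))
W = (A + A.T) / 2                       # symmetric coupling
np.fill_diagonal(W, 0.0)                # zero self-coupling
h = rng.normal(size=n)

def energy(x):
    return -0.5 * x @ W @ x - h @ x

x = rng.choice([-1.0, 1.0], size=n)
E_prev = energy(x)
for step in range(200):
    i = rng.integers(n)                 # asynchronous: one unit at a time
    u = W[i] @ x + h[i]
    x[i] = 1.0 if u >= 0 else -1.0      # sign: nondifferentiable I/O
    E = energy(x)
    assert E <= E_prev + 1e-12          # energy is nonincreasing
    E_prev = E
print("final energy:", E_prev)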
Journal Articles
On Neurodynamics with Limiter Function and Linsker's Developmental Model
Publisher: Journals Gateway
Neural Computation (1996) 8 (5): 1003–1019.
Published: 01 July 1996
Abstract
The limiter function is used in many learning and retrieval models as the constraint controlling the magnitude of the weight or state vectors. In this paper, we develop a new method to relate the set of saturated fixed points to the set of system parameters of models that use the limiter function and then, as a case study, apply this method to Linsker's Hebbian learning network. We derive a necessary and sufficient condition to test whether a given saturated weight or state vector is stable for any given set of system parameters, and we use this condition to determine the whole regime in the parameter space over which the given state is stable. This approach allows us to investigate the relative stability of the major receptive fields reported in Linsker's simulations and to demonstrate the crucial role played by the synaptic density functions.
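A stability test of this flavor is short to state for a limiter-constrained linear Hebbian rule, dw/dt = Qw + k with weights clipped to [-1, 1]: a saturated w is stable iff the drive points outward at every saturated entry, that is, w_i (Qw + k)_i >= 0 for all i. The sketch below is loosely in the spirit of Linsker's setup; Q, k, and the Gaussian density are illustrative assumptions, not the paper's parameters.

import numpy as np

def is_stable_saturated(Q, k, w, tol=1e-12):
    """Necessary-and-sufficient-style test for a saturated weight
    vector w in {-1, +1}^n under clipped dynamics dw/dt = Qw + k."""
    drive = Q @ w + k
    return bool(np.all(w * drive >= -tol))

n = 8
pos = np.linspace(-1.0, 1.0, n)
# Input covariance from a Gaussian synaptic density profile.
Q = np.exp(-(pos[:, None] - pos[None, :])**2 / 0.5)
k = 0.2 * np.ones(n)

w_all_on = np.ones(n)                   # all-excitatory receptive field
print("all-on stable:", is_stable_saturated(Q, k, w_all_on))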