1-10 of 10
Si Wu
Journal Articles
Publisher: Journals Gateway
Neural Computation (2012) 24 (7): 1695–1721.
Published: 01 July 2012
Abstract
Descending feedback connections, together with ascending feedforward ones, are indispensable parts of the sensory pathways in the central nervous system. This study investigates the potential roles of feedback interactions in neural information processing. We consider a two-layer continuous attractor neural network (CANN), in which neurons in the first layer receive feedback inputs from those in the second one. By utilizing the intrinsic properties of a CANN, we use a projection method to reduce the dimensionality of the network dynamics significantly. The simplified dynamics allows us to elucidate the effects of feedback modulation analytically. We find that positive feedback enhances the stability of the network state, leading to improved population decoding performance, whereas negative feedback increases the mobility of the network state, inducing spontaneously moving bumps. For sufficiently strong negative feedback, the network response to a moving stimulus can lead the actual stimulus position, achieving anticipative behavior. The biological implications of these findings are discussed. The simulation results agree well with our theoretical analysis.
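The projection method can be illustrated in one dimension: treat a gaussian bump as the network state and read out a small displacement by projecting the state change onto the position mode, the derivative of the state with respect to the bump position (an illustrative single-layer sketch, not the paper's two-layer model; grid and width are arbitrary):

```python
import numpy as np

x = np.linspace(-10, 10, 401)   # feature space (arbitrary units)
a = 1.0                         # bump width (illustrative)

def bump(z):
    """Gaussian bump state centered at position z."""
    return np.exp(-(x - z)**2 / (4 * a**2))

# position mode: numerical derivative of the state with respect to z
mode = (bump(1e-4) - bump(-1e-4)) / 2e-4

# a small displacement dz of the bump is recovered by projection
dz = 0.05
delta = bump(dz) - bump(0.0)
estimate = (delta @ mode) / (mode @ mode)
print(estimate)   # close to dz = 0.05
```

Projecting onto this single mode is what collapses the high-dimensional network dynamics to a tractable low-dimensional description.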
Neural Computation (2012) 24 (5): 1147–1185.
Published: 01 May 2012
Abstract
Experimental data have revealed that neuronal connection efficacy exhibits two forms of short-term plasticity: short-term depression (STD) and short-term facilitation (STF). Their time constants reside between those of fast neural signaling and rapid learning, and they may serve as substrates for neural systems manipulating temporal information on the relevant timescales. This study investigates the impact of STD and STF on the dynamics of continuous attractor neural networks and their potential roles in neural information processing. We find that STD endows the network with slow-decaying plateau behaviors: a network initially stimulated into an active state decays to a silent state very slowly, on the timescale of STD rather than that of neural signaling. This provides a mechanism for neural systems to hold sensory memory easily and shut off persistent activities gracefully. With STF, we find that the network can hold a memory trace of external inputs in the facilitated neuronal interactions, which provides a way to stabilize the network response to noisy inputs, leading to improved accuracy in population decoding. Furthermore, we find that STD increases the mobility of the network states. The increased mobility enhances the tracking performance of the network in response to time-varying stimuli, leading to anticipative neural responses. In general, we find that STD and STF tend to have opposite effects on network dynamics and complementary computational advantages, suggesting that the brain may employ a strategy of weighting them differentially depending on the computational purpose.
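The separation of timescales behind the slow-plateau behavior can be sketched with the standard depression variable alone (a hypothetical single synapse with illustrative constants, not the full network): synaptic resources consumed during firing recover on the STD time constant, far slower than neural signaling.

```python
import numpy as np

dt = 1e-3                    # integration step (s)
tau_d, u = 0.5, 0.05         # STD recovery time constant (s), release fraction
p, trace = 1.0, []           # p = fraction of available synaptic resources
for t in np.arange(0.0, 2.0, dt):
    r = 50.0 if t < 0.5 else 0.0        # 50 Hz drive for the first 0.5 s
    # resources are consumed in proportion to firing and recover on tau_d
    p += dt * ((1.0 - p) / tau_d - u * p * r)
    trace.append(p)

p_after_drive = trace[int(0.5 / dt) - 1]   # depressed by the stimulation
p_final = trace[-1]                        # recovered on the slow STD timescale
print(p_after_drive, p_final)
```

Because recovery runs on tau_d (hundreds of milliseconds) rather than the millisecond scale of signaling, network activity that depends on p inherits this slow decay.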
Neural Computation (2010) 22 (3): 752–792.
Published: 01 March 2010
Abstract
Understanding how the dynamics of a neural network is shaped by the network structure and, consequently, how the network structure facilitates the functions implemented by the neural system is at the core of using mathematical models to elucidate brain functions. This study investigates the tracking dynamics of continuous attractor neural networks (CANNs). Due to the translational invariance of neuronal recurrent interactions, CANNs can hold a continuous family of stationary states. They form a continuous manifold in which the neural system is neutrally stable. We systematically explore how this property facilitates the tracking performance of a CANN, which is believed to have clear correspondence with brain functions. By using the wave functions of the quantum harmonic oscillator as the basis, we demonstrate how the dynamics of a CANN is decomposed into different motion modes, corresponding to distortions in the amplitude, position, width, or skewness of the network state. We then develop a perturbation approach that utilizes the dominating movement of the network's stationary states in the state space. This method allows us to approximate the network dynamics up to an arbitrary accuracy depending on the order of perturbation used. We quantify the distortions of a gaussian bump during tracking and study their effects on tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable and the reaction time for the network to catch up with an abrupt change in the stimulus.
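The mode decomposition can be sketched numerically: the first few harmonic-oscillator wave functions act as height, position, and width modes, and a small displacement of a gaussian bump projects almost entirely onto the position mode (illustrative grid and normalization; in the paper the basis is scaled to the bump width):

```python
import numpy as np

x = np.linspace(-8, 8, 801)

# unnormalized harmonic-oscillator wave functions psi_0, psi_1, psi_2:
# they correspond to height, position, and width distortions of the bump
psi = [np.exp(-x**2 / 2),
       x * np.exp(-x**2 / 2),
       (2 * x**2 - 1) * np.exp(-x**2 / 2)]
psi = [v / np.linalg.norm(v) for v in psi]

def bump(z):
    """Gaussian bump at position z."""
    return np.exp(-(x - z)**2 / 2)

shift = bump(0.05) - bump(0.0)          # small positional distortion
coeffs = [abs(v @ shift) for v in psi]
dominant = int(np.argmax(coeffs))
print(dominant)   # 1: the position mode dominates
```

Expanding a distortion in this basis is what lets the perturbation approach track each motion mode (amplitude, position, width, skewness) separately.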
Neural Computation (2008) 20 (4): 994–1025.
Published: 01 April 2008
Abstract
Continuous attractor is a promising model for describing the encoding of continuous stimuli in neural systems. In a continuous attractor, the stationary states of the neural system form a continuous parameter space, on which the system is neutrally stable. This property enables the neural system to track time-varying stimuli smoothly, but it also degrades the accuracy of information retrieval, since these stationary states are easily disturbed by external noise. In this work, based on a simple model, we systematically investigate the dynamics and the computational properties of continuous attractors. In order to analyze the dynamics of a large-size network, which is otherwise extremely complicated, we develop a strategy to reduce its dimensionality by utilizing the fact that a continuous attractor can eliminate the noise components perpendicular to the attractor space very quickly. We therefore project the network dynamics onto the tangent of the attractor space and simplify it successfully as a one-dimensional Ornstein-Uhlenbeck process. Based on this simplified model, we investigate (1) the decoding error of a continuous attractor under the driving of external noisy inputs, (2) the tracking speed of a continuous attractor when external stimulus experiences abrupt changes, (3) the neural correlation structure associated with the specific dynamics of a continuous attractor, and (4) the consequence of asymmetric neural correlation on statistical population decoding. The potential implications of these results on our understanding of neural information processing are also discussed.
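The reduced one-dimensional Ornstein-Uhlenbeck picture can be illustrated by direct simulation (a minimal sketch with hypothetical parameters, not the network model itself): the projected bump position fluctuates around the attractor with stationary variance sigma^2 * tau / 2.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, sigma = 1.0, 0.5      # hypothetical time constant and noise amplitude
dt, steps, trials = 0.01, 5000, 2000

# Euler-Maruyama integration of dx = -(x / tau) dt + sigma dW
x = np.zeros(trials)
for _ in range(steps):
    x += -(x / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal(trials)

# the empirical variance approaches the stationary value sigma**2 * tau / 2
empirical_var = np.var(x)
print(empirical_var)   # close to 0.125
```

In the reduced picture this stationary variance plays the role of the decoding error under noisy input.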
Neural Computation (2005) 17 (10): 2215–2239.
Published: 01 October 2005
Abstract
Two issues concerning the application of continuous attractors in neural systems are investigated: the computational robustness of continuous attractors with respect to input noises and the implementation of Bayesian online decoding. In a perfect mathematical model for continuous attractors, decoding results for stimuli are highly sensitive to input noises, and this sensitivity is the inevitable consequence of the system's neutral stability. To overcome this shortcoming, we modify the conventional network model by including extra dynamical interactions between neurons. These interactions vary according to the biologically plausible Hebbian learning rule and have the computational role of memorizing and propagating stimulus information accumulated with time. As a result, the new network model responds to the history of external inputs over a period of time, and hence becomes insensitive to short-term fluctuations. Also, since dynamical interactions provide a mechanism to convey the prior knowledge of stimulus, that is, the information of the stimulus presented previously, the network effectively implements online Bayesian inference. This study also reveals some interesting behavior in neural population coding, such as the trade-off between decoding stability and the speed of tracking time-varying stimuli, and the relationship between neural tuning width and the tracking speed.
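The idea of memorizing stimulus information in dynamical interactions can be sketched with a leaky Hebbian update (hypothetical rates, gains, and decay constants, not the paper's exact learning rule): after repeated exposure, the learned couplings peak at the remembered stimulus position.

```python
import numpy as np

centers = np.linspace(-np.pi, np.pi, 64, endpoint=False)   # preferred stimuli
eta, decay = 0.1, 0.05                                     # learning and decay rates
J = np.zeros((64, 64))                                     # dynamical interactions

for _ in range(100):
    r = np.exp(-(centers - 0.5)**2 / 0.5)   # bump response to a stimulus at 0.5
    # leaky Hebbian update: J accumulates co-activity and slowly forgets
    J += eta * np.outer(r, r) - decay * J

i, j = np.unravel_index(np.argmax(J), J.shape)
print(centers[i], centers[j])   # both near the remembered stimulus 0.5
```

Because J integrates activity over many presentations while decaying slowly, it carries exactly the kind of prior (recent stimulus history) that the network can reuse for online Bayesian inference.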
Neural Computation (2003) 15 (5): 993–1012.
Published: 01 May 2003
Abstract
Population coding is a simplified model of distributed information processing in the brain. This study investigates the performance and implementation of a sequential Bayesian decoding (SBD) paradigm in the framework of population coding. In the first step of decoding, when no prior knowledge is available, maximum likelihood inference is used; the result forms the prior knowledge of stimulus for the second step of decoding. Estimates are propagated sequentially to apply maximum a posteriori (MAP) decoding in which prior knowledge for any step is taken from estimates from the previous step. Not only do we analyze the performance of SBD, obtaining the optimal form of prior knowledge that achieves the best estimation result, but we also investigate its possible biological realization, in the sense that all operations are performed by the dynamics of a recurrent network. In order to achieve MAP, a crucial point is to identify a mechanism that propagates prior knowledge. We find that this could be achieved by short-term adaptation of network weights according to the Hebbian learning rule. Simulation results on both constant and time-varying stimuli support the analysis.
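For gaussian likelihoods the SBD recursion has a closed form: each step's posterior becomes the next step's prior, and precisions add (a minimal sketch with illustrative numbers; the paper implements this propagation through recurrent network dynamics rather than explicit arithmetic).

```python
def sequential_map(observations, obs_var, prior_mean, prior_var):
    """Propagate a gaussian posterior through a sequence of noisy observations.

    Each step's posterior becomes the prior of the next step, the SBD idea
    specialized to gaussian likelihoods (variable names are illustrative)."""
    mean, var = prior_mean, prior_var
    for y in observations:
        # precision-weighted combination of prior and likelihood
        post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        mean = post_var * (mean / var + y / obs_var)
        var = post_var
    return mean, var

m, v = sequential_map([1.0, 1.2, 0.8, 1.0],
                      obs_var=0.5, prior_mean=0.0, prior_var=10.0)
print(m, v)
```

The sequential result equals the batch MAP estimate: the final precision is the prior precision plus one observation precision per step, which is why propagating only the previous estimate loses nothing for gaussian models.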
Neural Computation (2003) 15 (1): 127–142.
Published: 01 January 2003
Abstract
The stochastic mechanism of synchronous firing in a population of neurons is studied from the point of view of information geometry. Higher-order interactions of neurons, which cannot be reduced to pairwise correlations, are proved to exist in synchronous firing. In a neuron pool where each neuron fires stochastically, the probability distribution q(r) of the activity r , which is the fraction of firing neurons in the pool, is studied. When q(r) has a widespread distribution, in particular, when q(r) has two peaks, the neurons fire synchronously at one time and are quiescent at other times. The mechanism of generating such a probability distribution is interesting because the activity r is concentrated on its mean value when each neuron fires independently, because of the law of large numbers. Even when pairwise interactions, or third-order interactions, exist, the concentration is not resolved. This shows that higher-order interactions are necessary to generate widespread activity distributions. We analyze a simple model in which neurons receive common overlapping inputs and prove that such a model can have a widespread distribution of activity, generating higher-order stochastic interactions.
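The common-input mechanism is easy to reproduce: with independent firing the population activity r concentrates on its mean (law of large numbers), while a shared input that shifts every neuron's firing probability keeps the distribution of r widespread however large the pool is (a hypothetical instance of the model class, with an arbitrary sigmoid gain):

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 1000, 2000

# independent firing: each neuron fires with fixed probability 0.5
r_indep = rng.random((trials, N)) < 0.5

# common overlapping input: a shared signal h shifts every neuron's
# firing probability on each trial
h = rng.standard_normal(trials)
p = 1.0 / (1.0 + np.exp(-2.0 * h))           # trial-wise firing probability
r_common = rng.random((trials, N)) < p[:, None]

var_i = r_indep.mean(axis=1).var()    # concentrates like p(1-p)/N
var_c = r_common.mean(axis=1).var()   # stays widespread despite large N
print(var_i, var_c)
```

The shared input induces correlations of all orders among the neurons, which is why no pairwise (or third-order) description alone can reproduce the widespread activity distribution.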
Neural Computation (2002) 14 (5): 999–1026.
Published: 01 May 2002
Abstract
This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by using a recurrent network. This study not only rediscovers the main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further (wider than √2 times the effective width of the tuning function), the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. We show that when the correlation covers a nonlocal range of the population (except for uniform correlation, or when the noise is extremely small), the MLI type of method, whose decoding error follows a Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
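The first effect, stronger correlation of firing lowering the Fisher information, can be checked directly from I = f'^T C^{-1} f' for additive gaussian noise (an illustrative population with a gaussian-shaped correlation matrix; this sketch does not attempt to reproduce the √2-width result):

```python
import numpy as np

centers = np.linspace(-5, 5, 81)    # preferred stimuli
a = 1.0                             # tuning width (illustrative)
# derivative of gaussian tuning curves evaluated at the stimulus x = 0
fprime = centers / a**2 * np.exp(-centers**2 / (2 * a**2))

def fisher(c):
    """Fisher information f'^T C^{-1} f' with correlation strength c."""
    d = centers[:, None] - centers[None, :]
    K = np.exp(-d**2 / (2 * a**2))                # gaussian-shaped correlation
    C = (1 - c) * np.eye(len(centers)) + c * K    # unit variances on the diagonal
    return fprime @ np.linalg.solve(C, fprime)

print(fisher(0.5) < fisher(0.0))   # stronger correlation lowers the information
```

Intuitively, correlation of this shape loads noise onto the same smooth spatial modes that carry the signal f', so the signal-to-noise ratio in those modes drops.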
Neural Computation (2001) 13 (9): 2031–2047.
Published: 01 September 2001
Abstract
This study investigates the influence of attention modulation on neural tuning functions. Experiments have shown that attention modulation alters neural tuning curves. Attention is thought to serve, at least, to overcome limited processing capacity and to increase sensitivity to the attended stimulus, although its exact functions are still under debate. Inspired by recent experimental results on attention modulation, we investigate the influence of changes in the height and base rate of the tuning curve on the encoding accuracy, using the Fisher information. Under an assumption of stimulus-conditional independence of neural responses, we derive explicit conditions that determine when the height and base rate should be increased or decreased to improve encoding accuracy. Notably, a decrease in the tuning height and base rate can improve the encoding accuracy in some cases. Our theoretical results can predict the effective size of attention modulation on the neural population with respect to encoding accuracy. We discuss how our method can be used quantitatively to evaluate different aspects of attention function.
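Under the independence assumption the criterion reduces to the Poisson population Fisher information I(x) = Σ f'(x)^2 / f(x), which can be evaluated directly (illustrative tuning parameters, not the paper's conditions; in this particular regime raising the tuning height improves the encoding accuracy):

```python
import numpy as np

def fisher_info(x, centers, height, base, width):
    """Population Fisher information of independent Poisson neurons with
    gaussian tuning f(x) = base + height * exp(-(x - c)^2 / (2 w^2))."""
    d = x - centers
    g = np.exp(-d**2 / (2 * width**2))
    f = base + height * g
    fprime = -height * d / width**2 * g
    return np.sum(fprime**2 / f)     # I(x) = sum f'^2 / f for Poisson noise

centers = np.linspace(-10, 10, 101)
fi_low = fisher_info(0.0, centers, height=1.0, base=0.5, width=1.0)
fi_high = fisher_info(0.0, centers, height=2.0, base=0.5, width=1.0)
print(fi_high > fi_low)   # larger height helps at these parameter values
```

Varying `base` and `height` in this expression is the kind of comparison from which the explicit increase-or-decrease conditions are derived.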
Neural Computation (2001) 13 (4): 775–797.
Published: 01 April 2001
Abstract
This study investigates a population decoding paradigm in which maximum likelihood inference is based on an unfaithful decoding model (UMLI). This is usually the case for neural population decoding, because the encoding process of the brain is not exactly known or because a simplified decoding model is preferred to save computational cost. We consider an unfaithful decoding model that neglects the pairwise correlation between neuronal activities and prove that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range. The performance of UMLI is compared with that of the maximum likelihood inference based on the faithful model and that of the center-of-mass decoding method. It turns out that UMLI has the advantages of decreasing the computational complexity remarkably and maintaining high decoding accuracy. Moreover, it can be implemented by a biologically feasible recurrent network (Pouget, Zhang, Deneve, & Latham, 1998). The effect of correlation on the decoding accuracy is also discussed.
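The center-of-mass baseline used in the comparison is one line: the estimate is the activity-weighted mean of the preferred stimuli (a minimal sketch with independent Poisson responses and illustrative tuning; the paper's comparison also includes UMLI and faithful MLI, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.linspace(-5, 5, 101)           # preferred stimuli

def rates(s):
    """Gaussian tuning with peak rate 20 (illustrative parameters)."""
    return 20.0 * np.exp(-(s - centers)**2 / 2.0)

def com_decode(r):
    """Center-of-mass estimate: activity-weighted mean preferred stimulus."""
    return np.sum(centers * r) / np.sum(r)

# decode independent Poisson responses to a stimulus at s = 0
estimates = [com_decode(rng.poisson(rates(0.0))) for _ in range(200)]
print(np.mean(estimates))   # close to the true stimulus 0
```

With symmetric tuning and independent noise the estimator is unbiased, which makes it a natural cheap baseline against the likelihood-based decoders.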