Wei Wei
1-3 of 3
Neural Computation (2016) 28 (4): 652–666.
Published: 01 April 2016
Abstract
Ramping neuronal activity refers to spiking activity with a rate that increases quasi-linearly over time. It has been observed in multiple cortical areas and is correlated with evidence accumulation processes or timing. In this work, we investigated the downstream effect of ramping neuronal activity through synapses that display short-term facilitation (STF) or depression (STD). We obtained an analytical result for a synapse driven by deterministic linear ramping input that exhibits pure STF or STD and numerically investigated the general case when a synapse displays both STF and STD. We show that the analytical deterministic solution gives an accurate description of the average synaptic activation of many inputs converging onto a postsynaptic neuron, even when fluctuations in the ramping input are strong. Activation of a synapse with STF shows an initial cubic increase with time, followed by a linear ramping similar to a synapse without STF. Activation of a synapse with STD grows in time to a maximum before falling and reaching a plateau, and this steady state is independent of the slope of the ramping input. For a synapse displaying both STF and STD, an increase in the depression time constant from a value much smaller than the facilitation time constant to a value much larger than it leads to a transition from facilitation dominance to depression dominance. Therefore, our work provides insights into the impact of ramping neuronal activity on downstream neurons through synapses that display short-term plasticity. In a perceptual decision-making process, ramping activity has been observed in the parietal and prefrontal cortices, with a slope that decreases with task difficulty. Our work predicts that neurons downstream from such a decision circuit could instead display a firing plateau independent of the task difficulty, provided that the synaptic connection is endowed with short-term depression.
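The dynamics described here match the widely used Tsodyks-Markram rate formulation of short-term plasticity, so a minimal sketch under that assumption follows; the function name stp_response and all parameter values are illustrative choices, not taken from the paper. With a short facilitation time constant and a long depression time constant, the synaptic drive approaches a plateau near 1/tau_d that is nearly independent of the ramp slope, consistent with the abstract's prediction.

```python
import numpy as np

# Minimal rate-based sketch of the Tsodyks-Markram short-term plasticity
# model driven by a linearly ramping input rate r(t) = r0 + k*t.
# All parameter values are illustrative assumptions, not from the paper.

def stp_response(tau_f=0.5, tau_d=0.2, U=0.2, r0=5.0, k=40.0,
                 T=2.0, dt=1e-4):
    """Integrate facilitation u and depression x under a ramping rate.

    Returns time, rate, and the synaptic drive u*x*r (arbitrary units).
    """
    n = int(T / dt)
    t = np.arange(n) * dt
    r = r0 + k * t                       # deterministic linear ramp (Hz)
    u, x = U, 1.0                        # resting utilization, full resources
    drive = np.empty(n)
    for i in range(n):
        drive[i] = u * x * r[i]          # effective synaptic activation
        du = (U - u) / tau_f + U * (1.0 - u) * r[i]   # facilitation
        dx = (1.0 - x) / tau_d - u * x * r[i]         # depression
        u += dt * du
        x += dt * dx
    return t, r, drive

# Depression-dominated regime: the drive rises to a maximum, then settles
# toward a plateau near 1/tau_d that is insensitive to the ramp slope k.
for slope in (20.0, 40.0, 80.0):
    t, r, drive = stp_response(tau_f=0.05, tau_d=0.5, k=slope)
    print(f"slope {slope:5.1f} Hz/s -> final drive {drive[-1]:.3f}")
```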
Neural Computation (2009) 21 (3): 872–889.
Published: 01 March 2009
Abstract
We propose an adaptive improved natural gradient algorithm for blind separation of independent sources. First, inspired by the well-known backpropagation algorithm, we incorporate a momentum term into the natural gradient learning process to accelerate the convergence rate and improve stability. Then an estimation function for the adaptation of the separation model is obtained to adaptively control a step-size parameter and a momentum factor. The proposed natural gradient algorithm with variable step-size parameter and variable momentum factor is therefore particularly well suited to blind source separation in a time-varying environment, such as one with an abruptly changing mixing matrix or signal power. The expected improvement in the convergence speed, stability, and tracking ability of the proposed algorithm is demonstrated by extensive simulation results in both time-invariant and time-varying environments. The ability of the proposed algorithm to separate extremely weak or badly scaled sources is also verified. In addition, simulation results show that the proposed algorithm is suitable for separating mixtures of many sources (e.g., 10 sources) in the complete case.
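The core of the method is Amari's natural gradient update for blind source separation, here augmented with a plain momentum term. The paper's adaptive control of the step size and momentum factor relies on an estimation function not given in the abstract, so the sketch below uses fixed assumed values eta and beta instead.

```python
import numpy as np

# Minimal sketch of online natural gradient blind source separation
# (Amari's rule) with an added momentum term, as the abstract describes.
# The adaptive step-size/momentum control from the paper is not
# reproduced; eta and beta here are fixed assumptions.

rng = np.random.default_rng(0)

# Two toy super-Gaussian sources mixed by a random matrix.
n, T = 2, 20000
S = rng.laplace(size=(n, T))            # independent sources
A = rng.normal(size=(n, n))             # unknown mixing matrix
X = A @ S                               # observed mixtures

W = np.eye(n)                           # separation matrix
dW_prev = np.zeros_like(W)              # previous update (momentum state)
eta, beta = 1e-3, 0.5                   # step size and momentum factor
phi = np.tanh                           # score nonlinearity suited to
                                        # super-Gaussian sources

for t in range(T):
    x = X[:, t:t+1]                     # one sample (column vector)
    y = W @ x                           # current source estimate
    # Natural gradient of the mutual-information contrast:
    grad = (np.eye(n) - phi(y) @ y.T) @ W
    dW = eta * grad + beta * dW_prev    # momentum-accelerated update
    W += dW
    dW_prev = dW

# Successful separation makes W @ A close to a scaled permutation matrix,
# i.e., sources recovered up to order and scale.
print(np.round(W @ A, 2))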
Neural Computation (1999) 11 (5): 1235–1248.
Published: 01 July 1999
Abstract
Although the outputs of neural network classifiers are often considered to be estimates of posterior class probabilities, the literature that assesses the calibration accuracy of these estimates shows that practical networks often fall far short of being ideal estimators. The theorems used to justify treating network outputs as good posterior estimates are based on several assumptions: that the network is sufficiently complex to model the posterior distribution accurately, that there are sufficient training data to specify the network, and that the optimization routine is capable of finding the global minimum of the cost function. Any or all of these assumptions may be violated in practice. This article does three things. First, we apply a simple, previously used histogram technique to assess graphically the accuracy of posterior estimates with respect to individual classes. Second, we introduce a simple and fast remapping procedure that transforms network outputs to provide better estimates of posteriors. Third, we use the remapping in a real-world telephone speech recognition system. The remapping results in a 10% reduction in both word-level error rates (from 4.53% to 4.06%) and sentence-level error rates (from 16.38% to 14.69%) on one corpus, and a 29% reduction in sentence-level error (from 6.3% to 4.5%) on another. The remapping requires negligible additional overhead (in terms of both parameters and calculations). McNemar's test shows that these levels of improvement are statistically significant.
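The abstract does not spell out the remapping procedure, so the sketch below illustrates the general idea with generic histogram binning on held-out data: each raw output is replaced by the empirical correctness rate of similarly scored examples. The function names fit_histogram_remap and apply_remap and all parameters are assumptions for illustration.

```python
import numpy as np

# Sketch of histogram-based remapping of classifier outputs, in the
# spirit of the calibration assessment and remapping the abstract
# describes. The paper's exact procedure is not given here; this is
# generic histogram binning on a held-out set (all names assumed).

def fit_histogram_remap(scores, labels, n_bins=20):
    """Per-bin empirical correctness of held-out scores for one class.

    scores: network outputs in [0, 1]; labels: 1.0 if the class was
    correct, else 0.0. Returns bin edges and the remapping table.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    remap = np.empty(n_bins)
    for b in range(n_bins):
        mask = bins == b
        # Empirical posterior in this bin; fall back to the bin center
        # when the bin holds no held-out examples.
        remap[b] = labels[mask].mean() if mask.any() else edges[b] + 0.5 / n_bins
    return edges, remap

def apply_remap(scores, edges, remap):
    """Replace each raw output with its bin's empirical posterior."""
    n_bins = len(remap)
    bins = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    return remap[bins]

# Toy demo: overconfident scores get pulled back toward true accuracy.
rng = np.random.default_rng(1)
true_p = rng.uniform(size=5000)
scores = np.clip(true_p + 0.2 * (true_p - 0.5), 0, 1)   # miscalibrated
labels = (rng.uniform(size=5000) < true_p).astype(float)
edges, remap = fit_histogram_remap(scores, labels)
print(np.round(apply_remap(np.array([0.1, 0.5, 0.9]), edges, remap), 2))
```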