Journal Articles
Publisher: Journals Gateway
Neural Computation (2011) 23 (3): 656–663.
Published: 01 March 2011
Abstract
In this note, we demonstrate that the high firing irregularity produced by the leaky integrate-and-fire neuron with the partial somatic reset mechanism, which has been shown to be the most likely candidate to reflect the mechanism used in the brain for reproducing the highly irregular cortical neuron firing at high rates (Bugmann, Christodoulou, & Taylor, 1997; Christodoulou & Bugmann, 2001), enhances learning. More specifically, it enhances reward-modulated spike-timing-dependent plasticity with an eligibility trace when used in spiking neural networks, as shown by results on the simple XOR benchmark problem as well as on a complex multiagent setting task.
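The partial-reset mechanism the note builds on can be illustrated in a few lines. The sketch below is a minimal leaky integrate-and-fire simulation in which, after each spike, the membrane is reset to a fixed fraction of threshold rather than to rest; all parameter values (reset fraction, time constant, drive, noise level) are illustrative choices, not those of the papers cited.

```python
import numpy as np

def lif_partial_reset(i_drive, beta=0.9, tau=10.0, v_th=1.0,
                      noise_sd=0.2, dt=0.1, t_max=5000.0, seed=0):
    """Leaky integrate-and-fire with partial somatic reset: after a
    spike, v is reset to beta * v_th instead of to rest, so the neuron
    stays near threshold and keeps firing irregularly even at high
    rates.  Returns the interspike intervals (ms)."""
    rng = np.random.default_rng(seed)
    v, t_last, isis = 0.0, 0.0, []
    for k in range(int(t_max / dt)):
        # Euler step of tau dv/dt = -v + i_drive, plus white noise
        v += dt * (-v + i_drive) / tau + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            t = k * dt
            isis.append(t - t_last)
            t_last = t
            v = beta * v_th          # partial reset, not reset to rest
    return np.array(isis)

isis = lif_partial_reset(i_drive=1.2)
cv = isis.std() / isis.mean()        # irregularity of the output train
```

Because the partial reset leaves the membrane close to threshold, noise dominates each interspike interval and the coefficient of variation stays high even though the mean rate is high.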
Neural Computation (2011) 23 (3): 651–655.
Published: 01 March 2011
Abstract
The pattern of spikes recorded from place cells in the rodent hippocampus is strongly modulated by both the spatial location in the environment and the theta rhythm. The phases of the spikes in the theta cycle advance during movement through the place field. Recently, intracellular recordings from hippocampal neurons (Harvey, Collman, Dombeck, & Tank, 2009) showed an increase in the amplitude of membrane potential oscillations inside the place field, which was interpreted as evidence that an intracellular mechanism caused phase precession. Here we show that an existing network model of the hippocampus (Tsodyks, Skaggs, Sejnowski, & McNaughton, 1996) can equally reproduce this and other aspects of the intracellular recordings, which suggests that new experiments are needed to distinguish the contributions of intracellular and network mechanisms to phase precession.
Neural Computation (2011) 23 (3): 664–673.
Published: 01 March 2011
Abstract
Optimization based on k-step contrastive divergence (CD) has become a common way to train restricted Boltzmann machines (RBMs). The k-step CD is a biased estimator of the log-likelihood gradient relying on Gibbs sampling. We derive a new upper bound for this bias. Its magnitude depends on k, the number of variables in the RBM, and the maximum change in energy that can be produced by changing a single variable. The latter reflects the dependence on the absolute values of the RBM parameters. The magnitude of the bias is also affected by the distance in variation between the modeled distribution and the starting distribution of the Gibbs chain.
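The k-step CD estimator itself is compact. Below is a minimal sketch for a binary-binary RBM; the layer sizes, batch, and initialization are arbitrary illustration choices, and the note's bias bound is not computed here — the sketch only shows where k enters the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k(v0, W, b, c, k=1):
    """k-step contrastive divergence gradient estimate for a
    binary-binary RBM with weights W, visible bias b, hidden bias c.
    Returns (dW, db, dc); the estimate is biased, and the bias shrinks
    as k (the length of the Gibbs chain) grows."""
    ph0 = sigmoid(v0 @ W + c)            # positive phase
    v = v0.copy()
    for _ in range(k):                   # k steps of block Gibbs sampling
        h = (rng.random(ph0.shape) < sigmoid(v @ W + c)).astype(float)
        v = (rng.random(v0.shape) < sigmoid(h @ W.T + b)).astype(float)
    phk = sigmoid(v @ W + c)             # negative phase after k steps
    dW = v0.T @ ph0 - v.T @ phk
    return dW, (v0 - v).sum(axis=0), (ph0 - phk).sum(axis=0)

# toy usage: one batch of 4 binary patterns, 6 visible / 3 hidden units
v0 = rng.integers(0, 2, size=(4, 6)).astype(float)
W = 0.01 * rng.standard_normal((6, 3))
dW, db, dc = cd_k(v0, W, b=np.zeros(6), c=np.zeros(3), k=1)
```

The Gibbs chain starts at the data (v0), which is exactly why the bound in the note involves the distance between the data distribution and the model distribution.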
Neural Computation (2010) 22 (6): 1468–1472.
Published: 01 June 2010
Abstract
Recently, van Elburg and van Ooyen (2009) published a generalization of the event-based integration scheme, introduced by Carnevale and Hines, for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double-exponential inhibitory synaptic currents. In that paper, it was shown that the constraints on the synaptic time constants imposed by the Newton-Raphson iteration scheme can be relaxed. In this note, we show that, according to the results published in D'Haene, Schrauwen, Van Campenhout, and Stroobandt (2009), a further generalization is possible that eliminates any constraint on the time constants. We also demonstrate that a wide range of linear neuron models can in fact be simulated efficiently with this computation scheme, including neuron models mimicking complex neuronal behavior. These results can change the way complex neuronal spiking behavior is modeled: instead of highly nonlinear neuron models with few state variables, it is possible to efficiently simulate linear models with a large number of state variables.
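The idea behind event-based integration of linear neuron models can be sketched with one closed-form update. The dynamics below are deliberately simpler than the paper's (a single exponentially decaying synaptic current, illustrative time constants): between synaptic events the linear system is advanced exactly, so the simulator jumps from event to event instead of taking fixed time steps.

```python
import math

def advance(v0, i0, dt, tau_m=20.0, tau_s=5.0):
    """Advance the linear neuron state exactly over an event-free
    interval dt.  Dynamics (illustrative, tau_m != tau_s assumed):
        tau_s di/dt = -i          ->  i(t) = i0 * exp(-t/tau_s)
        tau_m dv/dt = -v + i(t)
    Solving the linear system gives v(t) = (v0 - a) e^{-t/tau_m}
    + a e^{-t/tau_s} with a = i0 * tau_s / (tau_s - tau_m)."""
    a = i0 * tau_s / (tau_s - tau_m)     # particular-solution amplitude
    em = math.exp(-dt / tau_m)
    es = math.exp(-dt / tau_s)
    return (v0 - a) * em + a * es, i0 * es

# a synaptic event at t=0 adds its weight to i; the state is then
# advanced in one step to the time of the next event (here 3 ms later)
v, i = 0.0, 1.0
v, i = advance(v, i, dt=3.0)
```

Because every state variable obeys a linear ODE, adding more variables (to mimic complex behavior) keeps the per-event cost at one closed-form update, which is the point the note makes.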
Neural Computation (2010) 22 (6): 1445–1467.
Published: 01 June 2010
Abstract
We present an integrative formalism of mutual information expansion, the general Poisson exact breakdown, which explicitly evaluates the informational contribution of correlations in the spike counts both between and within neurons. The formalism was validated on simulated data and applied to real neurons recorded from the rat somatosensory cortex. From the general Poisson exact breakdown, a considerable number of mutual information measures introduced in the neural computation literature can be directly derived, including the exact breakdown (Pola, Thiele, Hoffmann, & Panzeri, 2003), the Poisson exact breakdown (Scaglione, Foffani, Scannella, Cerutti, & Moxon, 2008), the synergy and redundancy between neurons (Schneidman, Bialek, & Berry, 2003), and the information lost by an optimal decoder that assumes the absence of correlations between neurons (Nirenberg & Latham, 2003; Pola et al., 2003). The general Poisson exact breakdown thus offers a convenient set of building blocks for studying the role of correlations in population codes.
Neural Computation (2007) 19 (12): 3216–3225.
Published: 01 December 2007
Abstract
Izhikevich (2003) proposed a new canonical neuron model of spike generation. The model was surprisingly simple yet able to accurately replicate the firing patterns of different types of cortical cell. Here, we derive a solution method that allows efficient simulation of the model.
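The model itself is two coupled ODEs with a reset. Below is a plain forward-Euler sketch with the regular-spiking parameter set from Izhikevich (2003); it is a reference simulation only, not the efficient solution method this note derives.

```python
def izhikevich(i_drive, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.25, t_max=500.0):
    """Forward-Euler sketch of the Izhikevich (2003) model
        v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    with the reset v <- c, u <- u + d when v reaches 30 mV.
    a, b, c, d are the regular-spiking values from the original paper;
    dt is an illustrative step size."""
    v, u, spikes, t = c, b * c, [], 0.0
    while t < t_max:
        if v >= 30.0:                 # spike: record time and reset
            spikes.append(t)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_drive)
        u += dt * a * (b * v - u)
        t += dt
    return spikes

spikes = izhikevich(i_drive=10.0)     # tonic firing with adaptation
```

Swapping (a, b, c, d) reproduces the other firing patterns (bursting, fast spiking, and so on), which is why an efficient solver for this one model is broadly useful.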
Neural Computation (2007) 19 (11): 2865–2870.
Published: 01 November 2007
Abstract
Compartmental models provide a major source of insight into the information processing functions of single neurons. Over the past 15 years, one of the most widely used neuronal morphologies has been the cell called "j4," a layer 5 pyramidal cell from cat visual cortex originally described in Douglas, Martin, and Whitteridge (1991). The cell has since appeared in at least 28 published compartmental modeling studies, including several in this journal. In recently examining why we could not reproduce certain in vitro data involving the attenuation of signals originating in distal basal dendrites, we discovered that pronounced fluctuations in the diameter measurements of j4 lead to a bottlenecking effect that increases distal input resistances and significantly reduces voltage transfer between distal sites and the cell body. Upon smoothing these diameter fluctuations, bringing j4 more in line with other reconstructions of layer 5 pyramidal neurons, we found that the attenuation of steady-state voltage signals traveling to the cell body (V_distal/V_soma) was reduced by 60% at some locations in some branches (corresponding to a 2.5-fold increase in the voltage response at the soma for the same distal depolarization) and by 30% on average (corresponding to a 45% increase in somatic response). Changes of this magnitude could lead to different outcomes in some types of compartmental modeling studies. A smoothed version of the j4 morphology is available online at http://lnc.usc.edu/j4-smooth/.
Neural Computation (2006) 18 (12): 2917–2922.
Published: 01 December 2006
Abstract
Different analytical expressions for the membrane potential distribution of membranes subject to synaptic noise have been proposed and can be very helpful in analyzing experimental data. However, all of these expressions are either approximations or limit cases, and it is not clear how they compare and which expression should be used in a given situation. In this note, we provide a comparison of the different approximations available, with the aim of delineating which expression is most suitable for analyzing experimental data.
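Any such analytical expression can be checked against an empirical membrane potential distribution. The sketch below simulates a point-conductance-style membrane driven by Ornstein-Uhlenbeck conductances (all parameter values are illustrative, and the specific analytical expressions compared in the note are not reproduced) and summarizes the resulting Vm distribution by its mean and standard deviation, i.e. a gaussian fit.

```python
import numpy as np

def simulate_vm(t_max=5000.0, dt=0.05, seed=1):
    """Membrane driven by fluctuating excitatory and inhibitory
    conductances (illustrative parameters, units arbitrary):
        C dV/dt = -gL (V - EL) - ge (V - Ee) - gi (V - Ei),
    with ge, gi modeled as Ornstein-Uhlenbeck processes.
    Returns the Vm trace after a discarded transient."""
    rng = np.random.default_rng(seed)
    C, gL, EL, Ee, Ei = 1.0, 0.05, -70.0, 0.0, -80.0
    ge0, gi0, se, si, te, ti = 0.01, 0.04, 0.003, 0.008, 2.7, 10.5
    V, ge, gi, trace = EL, ge0, gi0, []
    for _ in range(int(t_max / dt)):
        ge += dt * (ge0 - ge) / te + se * np.sqrt(2 * dt / te) * rng.standard_normal()
        gi += dt * (gi0 - gi) / ti + si * np.sqrt(2 * dt / ti) * rng.standard_normal()
        V += dt * (-gL * (V - EL) - max(ge, 0.0) * (V - Ee)
                   - max(gi, 0.0) * (V - Ei)) / C
        trace.append(V)
    return np.array(trace[2000:])        # drop the initial transient

vm = simulate_vm()
mu, sd = vm.mean(), vm.std()             # gaussian summary of the Vm distribution
```

Comparing such a histogram against each candidate expression is exactly the kind of exercise the note systematizes.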
Neural Computation (2005) 17 (9): 1903–1910.
Published: 01 September 2005
Abstract
We develop the general, multivariate case of the Edgeworth approximation of differential entropy and show that it can be more accurate than the nearest-neighbor method in the multivariate case and that it scales better with sample size. Furthermore, we introduce mutual information estimation as an application.
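The univariate special case gives the flavor of the method. In the sketch below, the correction to the gaussian entropy is built from standardized third and fourth cumulants; this simplified form comes from the standard Edgeworth/negentropy literature, not from the note itself, and the multivariate machinery the note develops is omitted.

```python
import numpy as np

def edgeworth_entropy_1d(x):
    """Univariate sketch of the Edgeworth approximation of
    differential entropy:
        H(x) ~= H_gauss(x) - (k3^2/12 + k4^2/48),
    where k3 and k4 are the standardized third and fourth cumulants
    (skewness and excess kurtosis) and H_gauss is the entropy of a
    gaussian with the same variance.  The note's contribution is the
    general multivariate version of this expansion."""
    s = x.std()
    z = (x - x.mean()) / s
    k3 = (z**3).mean()                    # skewness
    k4 = (z**4).mean() - 3.0              # excess kurtosis
    h_gauss = 0.5 * np.log(2.0 * np.pi * np.e) + np.log(s)
    return h_gauss - (k3**2 / 12.0 + k4**2 / 48.0)

rng = np.random.default_rng(0)
h = edgeworth_entropy_1d(rng.standard_normal(100_000))
# for gaussian data the correction vanishes, so h approaches
# 0.5 * log(2*pi*e) ~= 1.4189
```

Because the estimate only needs sample cumulants, it scales with sample size far better than nearest-neighbor searches, which is the comparison the note makes.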
Neural Computation (2005) 17 (8): 1700–1705.
Published: 01 August 2005
Abstract
Sensitivity to image motion contrast, that is, the relative motion between different parts of the visual field, is a common and computationally important property of many neurons in the visual pathways of vertebrates. Here we illustrate that, as a classification problem, motion contrast detection is linearly nonseparable. In order to do so, we prove a theorem stating a sufficient condition for linear nonseparability. We argue that nonlinear combinations of local measurements of velocity at different locations and times are needed in order to solve the motion contrast problem.
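XOR is the textbook instance of linear nonseparability, and motion contrast has the same structure: the output depends on whether two local signals differ. As a numerical companion to the theorem (an illustration, not the note's proof), the brute-force check below confirms that no linear threshold unit on two inputs exceeds 3/4 accuracy on XOR.

```python
import itertools
import numpy as np

# XOR as a toy stand-in for motion contrast: "contrast present" iff
# the two local motion signals differ.  An exhaustive grid over the
# weights and bias of a linear threshold unit shows that none of them
# classifies all four cases correctly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

best = 0.0
grid = np.linspace(-2.0, 2.0, 21)
for w1, w2, b in itertools.product(grid, grid, grid):
    pred = (X[:, 0] * w1 + X[:, 1] * w2 + b > 0).astype(int)
    best = max(best, float((pred == y).mean()))
print(best)   # 0.75: at most 3 of 4 cases, as linear nonseparability predicts
```

A unit that first computes a nonlinear combination of the two inputs (e.g. their product) separates the cases easily, mirroring the note's conclusion that nonlinear combinations of local velocity measurements are required.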
Neural Computation (2005) 17 (5): 991–995.
Published: 01 May 2005
Abstract
Stability of intrinsic electrical activity and modulation of input-output gain are both important for neuronal information processing. It is therefore of interest to define biologically plausible parameters that allow these two features to coexist. Recent experiments indicate that in some biological neurons, the stability of spontaneous firing can arise from coregulated expression of the electrophysiologically opposing I_A and I_H currents. Here, I show that such balanced changes in I_A and I_H dramatically alter the slope of the relationship between the firing rate and driving current in a Hodgkin-Huxley-type model neuron. Concerted changes in I_A and I_H can thus control neuronal gain while preserving intrinsic activity.
Neural Computation (2005) 17 (3): 503–513.
Published: 01 March 2005
Abstract
We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence.
Neural Computation (2005) 17 (2): 321–330.
Published: 01 February 2005
Abstract
Stone's method is a novel approach to the blind source separation (BSS) problem, based on Stone's conjecture. However, this conjecture has not been proved. We present a simple simulation demonstrating that Stone's conjecture is incorrect. We then modify Stone's conjecture and prove the modified conjecture as a theorem, which can be used as a basis for BSS algorithms.
Neural Computation (2005) 17 (1): 1–6.
Published: 01 January 2005
Abstract
In many areas of data modeling, observations at different locations (e.g., time frames or pixel locations) are augmented by differences of nearby observations (e.g., δ features in speech recognition, Gabor jets in image analysis). These augmented observations are then often modeled as being independent. How can this make sense? We provide two interpretations, showing (1) that the likelihood of data generated from an autoregressive process can be computed in terms of “independent” augmented observations and (2) that the augmented observations can be given a coherent treatment in terms of the products of experts model (Hinton, 1999).
Neural Computation (2004) 16 (9): 1763–1768.
Published: 01 September 2004
Abstract
An applied problem is discussed in which two nested psychological models of retention are compared using minimum description length (MDL). The standard Fisher information approximation to the normalized maximum likelihood is calculated for these two models, with the result that the full model is assigned a smaller complexity, even for moderately large samples. A geometric interpretation for this behavior is considered, along with its practical implications.
Neural Computation (2004) 16 (7): 1325–1343.
Published: 01 July 2004
Abstract
A Bayesian method is developed for estimating neural responses to stimuli, using likelihood functions incorporating the assumption that spike trains follow either pure Poisson statistics or Poisson statistics with a refractory period. The Bayesian and standard estimates of the mean and variance of responses are similar and asymptotically converge as the size of the data sample increases. However, the Bayesian estimate of the variance of the variance is much lower. This allows the Bayesian method to provide more precise interval estimates of responses. Sensitivity of the Bayesian method to the Poisson assumption was tested by conducting simulations perturbing the Poisson spike trains with noise. This did not affect Bayesian estimates of mean and variance to a significant degree, indicating that the Bayesian method is robust. The Bayesian estimates were less affected by the presence of noise than estimates provided by the standard method.
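Under the pure-Poisson assumption, the simplest fully Bayesian treatment uses a conjugate gamma prior. The sketch below is that textbook construction; the hyperparameters are illustrative choices, and the paper's own likelihood functions (including the refractory-period variant) are not reproduced here.

```python
import numpy as np

def bayes_poisson_rate(counts, a0=1.0, b0=0.01):
    """Conjugate Gamma-Poisson sketch: with a Gamma(a0, b0) prior on
    the firing rate (hyperparameters are an illustrative choice) and
    counts n_1..n_N observed over unit intervals, the posterior is
    Gamma(a0 + sum n_i, b0 + N).  Returns posterior mean and variance
    of the rate."""
    a = a0 + np.sum(counts)
    b = b0 + len(counts)
    return a / b, a / b**2

counts = [3, 5, 4, 6, 2, 4]          # spike counts on repeated trials
mean, var = bayes_poisson_rate(counts)
```

The posterior variance shrinks roughly like 1/N as trials accumulate, which is the source of the tighter interval estimates the abstract describes.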
Neural Computation (2004) 16 (4): 665–672.
Published: 01 April 2004
Abstract
The product-moment correlation coefficient is often viewed as a natural measure of dependence. However, this view is justified only in the context of elliptical distributions, most commonly the multivariate gaussian, where linear correlation indeed sufficiently describes the underlying dependence structure. Should the true probability distributions deviate from those with elliptical contours, linear correlation may convey misleading information on the actual underlying dependencies. It is often the case that probability distributions other than the gaussian are necessary to properly capture the stochastic nature of single neurons, which greatly complicates the construction of a flexible model of covariance. We show how arbitrary probability densities can be coupled to allow greater flexibility in the construction of multivariate neural population models.
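One standard way to couple arbitrary marginals is a gaussian copula, sketched below. This is one concrete construction chosen for illustration, not necessarily the coupling developed in the abstract above; dependence enters only through the copula, while each marginal (here exponential, a plausible stand-in for interspike intervals) is preserved exactly.

```python
import math
import numpy as np

def gaussian_copula_sample(n, rho, inv_cdf_x, inv_cdf_y, seed=0):
    """Sample pairs with arbitrary marginals coupled by a gaussian
    copula: correlated standard normals are mapped to uniforms with
    the normal CDF, then to the target marginals through their
    inverse CDFs."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    phi = lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))  # normal CDF
    u = np.array([[phi(za), phi(zb)] for za, zb in z])
    return inv_cdf_x(u[:, 0]), inv_cdf_y(u[:, 1])

# exponential marginals with dependence injected only via the copula
inv_exp = lambda u, lam=5.0: -np.log(1.0 - u) / lam
x, y = gaussian_copula_sample(20_000, 0.8, inv_exp, inv_exp)
# x and y are each Exponential(5) but strongly dependent
```

Because the marginals and the dependence structure are specified separately, heavy-tailed or skewed single-neuron statistics no longer constrain the population-level dependence model.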
Neural Computation (2004) 16 (3): 477–489.
Published: 01 March 2004
Abstract
Frequency coding is considered one of the most common coding strategies employed by neural systems. This fact leads, in experiments as well as in theoretical studies, to the construction of so-called transfer functions, in which the output firing frequency is plotted against the input intensity. The term firing frequency, however, can be understood differently in different contexts. Basically, it means that the number of spikes over an interval of preselected length is counted and then divided by the length of the interval; because of obvious limitations, though, the observation period cannot be arbitrarily long. Firing frequency can instead be defined as the reciprocal of the mean interspike interval. In parallel, an instantaneous firing frequency can be defined as the reciprocal of the length of the current interspike interval, and by taking the mean of these, the definition can be extended to the mean instantaneous firing frequency. All of these definitions of firing frequency are compared in an effort to contribute to a better understanding of the input-output properties of a neuron.
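The competing definitions are easy to state side by side in code. The sketch below computes the count-based frequency, the reciprocal of the mean interspike interval, and the mean instantaneous frequency for one simulated irregular train (the exponential-ISI train is an illustrative choice); by the arithmetic-harmonic mean inequality, the mean instantaneous frequency is always at least the reciprocal of the mean ISI.

```python
import numpy as np

def frequency_measures(spike_times, t_obs):
    """Three common definitions of firing frequency for one train:
    - count rate: spikes counted over the observation window
    - reciprocal of the mean interspike interval
    - mean instantaneous frequency: mean of 1/ISI
    These generally disagree for irregular spike trains."""
    isi = np.diff(spike_times)
    return {
        "count_rate": len(spike_times) / t_obs,
        "inv_mean_isi": 1.0 / isi.mean(),
        "mean_inst_rate": (1.0 / isi).mean(),
    }

rng = np.random.default_rng(0)
# irregular (Poisson-like) train: exponential ISIs with mean 0.1 s
spikes = np.cumsum(rng.exponential(0.1, size=1000))
f = frequency_measures(spikes, t_obs=spikes[-1])
```

For a perfectly regular train all three coincide; the more irregular the train, the further the mean instantaneous rate drifts above the other two, which is why transfer functions built from different definitions can disagree.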
Neural Computation (2003) 15 (3): 539–547.
Published: 01 March 2003
Abstract
We derive analytically the solution for the output rate of the ideal coincidence detector. The solution is for an arbitrary number of input spike trains with identical binomial count distributions (which includes Poisson statistics as a special case) and identical arbitrary pairwise cross-correlations, from zero correlation (independent processes) to complete correlation (identical processes).
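A Monte Carlo version of the setup is easy to write down. The sketch below drives an ideal coincidence detector (a spike in a bin iff all inputs spike in that bin) with inputs correlated through a shared "mother" train; this single-interaction construction is just one simple way to induce pairwise correlation and is not the general binomial-count model the analytic solution covers.

```python
import numpy as np

rng = np.random.default_rng(0)

def coincidence_rate(n_inputs=3, p=0.1, c=0.5, n_bins=200_000):
    """Monte Carlo sketch of an ideal coincidence detector.  Each
    input spikes in a bin with probability p; with probability c it
    copies a shared 'mother' train (inducing pairwise correlation),
    otherwise it spikes independently.  The detector fires iff all
    inputs spike in the same bin; returns the fraction of such bins."""
    shared = rng.random(n_bins) < p            # common source
    mix = rng.random((n_inputs, n_bins)) < c   # copy the source with prob c
    own = rng.random((n_inputs, n_bins)) < p   # independent source
    spikes = np.where(mix, shared, own)        # each input is Bernoulli(p)
    return spikes.all(axis=0).mean()           # coincidences per bin

r_corr = coincidence_rate(c=0.5)
r_ind = coincidence_rate(c=0.0)
# input correlation raises the output rate well above the
# independent-input value p**n_inputs
```

The analytic solution in the note replaces this sampling with a closed-form output rate as a function of the count distribution and the pairwise correlation.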
Neural Computation (2002) 14 (9): 2039–2041.
Published: 01 September 2002
Abstract
In response to Rodriguez's recent article (2001), we compare the performance of simple recurrent nets and long short-term memory recurrent nets on context-free and context-sensitive languages.