1–8 of 8 results for Barak A. Pearlmutter
Neural Computation (2009) 21 (6): 1622–1641.
Published: 01 June 2009
Abstract
We propose that the critical function of sleep is to prevent uncontrolled neuronal feedback while allowing rapid responses and prolonged retention of short-term memories. Through learning, the brain is tuned to react optimally to environmental challenges. Optimal behavior often requires rapid responses and the prolonged retention of short-term memories. At a neuronal level, these correspond to recurrent activity in local networks. Unfortunately, when a network exhibits recurrent activity, small changes in the parameters or conditions can lead to runaway oscillations. Thus, the very changes that improve the processing performance of the network can put it at risk of runaway oscillation. To prevent this, stimulus-dependent network changes should be permitted only when there is a margin of safety around the current network parameters. We propose that the essential role of sleep is to establish this margin by exposing the network to a variety of inputs, monitoring for erratic behavior, and adjusting the parameters. When sleep is not possible, an emergency mechanism must come into play, preventing runaway behavior at the expense of processing efficiency. This is tiredness.
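The instability argument in this abstract can be illustrated with a toy linear recurrent network (this is not the paper's model, just a sketch with arbitrarily chosen sizes and noise levels): when the effective feedback gain crosses 1, activity that was useful reverberation becomes runaway growth, and the distance of the gain from that critical point plays the role of the margin of safety.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_activity_norm(gain, n=50, steps=200):
    """Linear recurrent network x <- gain * W @ x + noise, with W rescaled to
    unit spectral radius so that `gain` is the effective feedback strength."""
    W = rng.normal(size=(n, n)) / np.sqrt(n)
    W /= np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius = 1
    x = rng.normal(size=n)
    for _ in range(steps):
        x = gain * W @ x + 0.1 * rng.normal(size=n)
    return np.linalg.norm(x)

# Just below the critical point, activity stays bounded (useful reverberation);
# just above it, the same kind of network runs away.  The distance of the gain
# from 1 plays the role of the "margin of safety" discussed in the abstract.
for gain in (0.90, 0.99, 1.05):
    print(f"feedback gain {gain:.2f}: final activity norm {final_activity_norm(gain):.2e}")
```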
Neural Computation (2007) 19 (5): 1295–1312.
Published: 01 May 2007
Abstract
Neuronal activity in response to a fixed stimulus has been shown to change as a function of attentional state, implying that the neural code also changes with attention. We propose an information-theoretic account of such modulation: that the nervous system adapts to optimally encode sensory stimuli while taking into account the changing relevance of different features. We show using computer simulation that such modulation emerges in a coding system informed about the uneven relevance of the input features. We present a simple feedforward model that learns a covert attention mechanism, given input patterns and coding fidelity requirements. After optimization, the system gains the ability to reorganize its computational resources (and coding strategy) depending on the incoming attentional signal, without the need for multiplicative interactions or explicit gating mechanisms between units. The modulation of activity for different attentional states matches that observed in a variety of selective attention experiments. This model predicts that the shape of the attentional modulation function can be strongly stimulus dependent. The general principle presented here accounts for attentional modulation of neural activity without relying on special-purpose architectural mechanisms dedicated to attention. This principle applies to different attentional goals, and its implications are relevant for all modalities in which attentional phenomena are observed.
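The coding principle can be caricatured with a back-of-the-envelope resource-allocation calculation (not the paper's learned feedforward model; the constraint, noise model, and relevance weights below are illustrative assumptions): if each feature is transmitted with gain g_i plus unit-variance channel noise under a fixed budget on Σ g_i², and the decoder's weighted error is Σ w_i / g_i² with w_i the relevance of feature i, then minimizing the weighted error gives g_i² ∝ √w_i, so responses to a fixed stimulus change with the attentional (relevance) signal.

```python
import numpy as np

def optimal_gains(relevance, budget=1.0):
    """Minimize the relevance-weighted decoding error sum_i relevance[i] / g[i]**2
    subject to the resource constraint sum_i g[i]**2 == budget.  The Lagrange
    condition gives g[i]**2 proportional to sqrt(relevance[i])."""
    g_squared = np.sqrt(relevance)
    g_squared = budget * g_squared / g_squared.sum()
    return np.sqrt(g_squared)

# The same two stimulus features under two attentional states: the encoding
# gains (and hence the responses to a fixed stimulus) shift toward whichever
# feature the attentional signal marks as relevant.
print(optimal_gains(np.array([0.9, 0.1])))   # attend feature 0
print(optimal_gains(np.array([0.1, 0.9])))   # attend feature 1
```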
Neural Computation (2002) 14 (8): 1827–1858.
Published: 01 August 2002
Abstract
We applied second-order blind identification (SOBI), an independent component analysis method, to MEG data collected during cognitive tasks. We explored SOBI's ability to help isolate underlying neuronal sources with relatively poor signal-to-noise ratios, allowing their identification and localization. We compare localization of the SOBI-separated components to localization from unprocessed sensor signals, using an equivalent current dipole modeling method. For visual and somatosensory modalities, SOBI preprocessing resulted in components that can be localized to physiologically and anatomically meaningful locations. Furthermore, this preprocessing allowed the detection of neuronal source activations that were otherwise undetectable. This increased probability of neuronal source detection and localization can be particularly beneficial for MEG studies of higher-level cognitive functions, which often
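A rough sense of second-order blind separation can be conveyed with a single-lag simplification (essentially the AMUSE algorithm) applied to simulated sensors. This is not the full SOBI procedure, which jointly diagonalizes time-lagged covariance matrices across many lags; the sources, channel count, noise level, and lag below are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy sources with different temporal structure, mixed into four simulated
# sensors with additive noise (real MEG has hundreds of channels).
t = np.arange(0.0, 10.0, 0.001)
S = np.vstack([np.sin(2 * np.pi * 10 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])
A = rng.normal(size=(4, 2))                      # unknown forward mixing
X = A @ S + 0.1 * rng.normal(size=(4, t.size))

# Whiten the sensor signals.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ X

# Diagonalize a single symmetrized time-lagged covariance of the whitened data
# (AMUSE).  SOBI jointly diagonalizes such matrices across many lags, which is
# what makes it robust at low signal-to-noise ratios.
lag = 50
C = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
_, V = np.linalg.eigh((C + C.T) / 2)
components = V.T @ Z                             # rows are recovered components

for s in S:
    best = np.abs(np.corrcoef(components, s)[-1, :-1]).max()
    print(f"best |correlation| between a recovered component and a source: {best:.3f}")
```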
Neural Computation (2001) 13 (4): 863–882.
Published: 01 April 2001
Abstract
The blind source separation problem is to extract the underlying source signals from a set of linear mixtures, where the mixing matrix is unknown. This situation is common in acoustics, radio, medical signal and image processing, hyperspectral imaging, and other areas. We suggest a two-stage separation process: a priori selection of a possibly overcomplete signal dictionary (for instance, a wavelet frame or a learned dictionary) in which the sources are assumed to be sparsely representable, followed by unmixing the sources by exploiting their sparse representability. We consider the general case of more sources than mixtures, but also derive a more efficient algorithm in the case of a nonovercomplete dictionary and an equal number of sources and mixtures. Experiments with artificial signals and musical sounds demonstrate significantly better separation than other known techniques.
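For the square, non-overcomplete case mentioned at the end of the abstract, a minimal quasi-maximum-likelihood sketch (not the paper's exact algorithm) is a natural-gradient unmixing update with a Laplacian sparsity prior on the dictionary-domain coefficients. Here the sparse coefficients are generated directly rather than obtained from a wavelet frame, and the mixing matrix is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse (Laplacian) coefficients stand in for the sources' coefficients in a
# sparsifying dictionary; the square mixing matrix A is treated as unknown.
n, T = 2, 20000
S = rng.laplace(size=(n, T))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S                                  # observed mixtures

# Natural-gradient quasi-ML unmixing with a Laplacian (sparse) prior:
#   W <- W + lr * (I - sign(Y) Y^T / T) W,   Y = W X.
W = np.eye(n)
lr = 0.02
for _ in range(300):
    Y = W @ X
    W += lr * (np.eye(n) - np.sign(Y) @ Y.T / T) @ W

# Up to permutation and scaling of the rows, W @ A should be close to a
# (scaled) permutation of the identity.
print(np.round(W @ A, 2))
```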
Neural Computation (1996) 8 (3): 611–624.
Published: 01 April 1996
Abstract
We compute the VC dimension of a leaky integrate-and-fire neuron model. The VC dimension quantifies the ability of a function class to partition an input pattern space and can be considered a measure of computational capacity. In this case, the function class is the class of integrate-and-fire models generated by varying the integration time constant T and the threshold θ, the input space they partition is the space of continuous-time signals, and the binary partition is specified by whether or not the model reaches threshold at some specified time. We show that the VC dimension diverges only logarithmically with the input signal bandwidth N. We also extend this approach to arbitrary passive dendritic trees. The main contributions of this work are that (1) it offers a novel treatment of the computational capacity of this class of dynamic system, and (2) it provides a framework for analyzing the computational capabilities of the dynamic systems defined by networks of spiking neurons.
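The function class in question is easy to simulate: a leaky integrator driven by a band-limited signal, labeled by whether it reaches threshold. The snippet below counts how many distinct labelings of a handful of random signals are realized as (T, θ) vary over a grid; this is only a crude finite illustration of the capacity the paper quantifies analytically with the VC dimension, and the signals, grid, and time step are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def reaches_threshold(signal, dt, tau, theta):
    """Leaky integration dV/dt = -V/tau + I(t) (no reset); return 1 if V
    crosses theta at any point during the signal, else 0."""
    v = 0.0
    for current in signal:
        v += dt * (-v / tau + current)
        if v >= theta:
            return 1
    return 0

# A handful of band-limited random inputs: sums of a few low-frequency sinusoids.
dt, duration, harmonics = 0.01, 2.0, 5
t = np.arange(0.0, duration, dt)
signals = [sum(rng.normal() * np.sin(2 * np.pi * (k + 1) * t / duration
                                     + rng.uniform(0, 2 * np.pi))
               for k in range(harmonics)) for _ in range(6)]

# Each (tau, theta) pair assigns a 0/1 label to every signal; count the distinct
# labelings ("dichotomies") realized over a parameter grid.
labelings = set()
for tau in np.geomspace(0.01, 10.0, 25):
    for theta in np.linspace(0.1, 3.0, 25):
        labelings.add(tuple(reaches_threshold(s, dt, tau, theta) for s in signals))
print(f"{len(labelings)} distinct labelings of {len(signals)} signals "
      f"(2^{len(signals)} = {2 ** len(signals)} possible)")
```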
Neural Computation (1995) 7 (4): 706–712.
Published: 01 July 1995
Abstract
In an isopotential neuron with rapid response, it has been shown that the receptive fields formed by Hebbian synaptic modulation depend on the principal eigenspace of Q(0), the input autocorrelation matrix, where Q_ij(τ) = 〈ξ_i(t) ξ_j(t − τ)〉 and ξ_i(t) is the input to synapse i at time t (Oja 1982). We relax the assumption of isopotentiality, introduce a time-skewed Hebb rule, and find that the dynamics of synaptic evolution are determined by the principal eigenspace of a generalized autocorrelation matrix. This matrix is defined in terms of K_ij(τ), the neuron's voltage response to a unit current injection at synapse j as measured τ seconds later at synapse i, and ψ_i(τ), the time course of the opportunity for modulation of synapse i following the arrival of a presynaptic action potential.
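The isopotential baseline cited in the abstract (Oja 1982) is straightforward to demonstrate numerically: Oja's Hebbian rule drives the weight vector to the principal eigenvector of the zero-lag autocorrelation Q(0). The sketch below shows only that baseline, not the paper's time-skewed, nonisopotential extension; the input covariance and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Zero-mean inputs xi(t) with a known zero-lag autocorrelation matrix Q(0).
n = 4
M = rng.normal(size=(n, n))
Q0 = M @ M.T / n
inputs = rng.multivariate_normal(np.zeros(n), Q0, size=20000)

# Oja's rule: dw ∝ y (x - y w) with y = w·x; the weight vector converges to the
# principal eigenvector of Q(0) (the isopotential, zero-lag result cited above).
w = rng.normal(size=n)
w /= np.linalg.norm(w)
eta = 0.005
for x in inputs:
    y = w @ x
    w += eta * y * (x - y * w)

principal = np.linalg.eigh(Q0)[1][:, -1]        # eigenvector of largest eigenvalue
print(f"alignment with principal eigenvector: {abs(w @ principal) / np.linalg.norm(w):.3f}")
```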
Neural Computation (1994) 6 (1): 147–160.
Published: 01 January 1994
Abstract
Just storing the Hessian H (the matrix of second derivatives ∂²E/∂w_i∂w_j of the error E with respect to each pair of weights) of a large neural network is difficult. Since a common use of a large matrix like H is to compute its product with various vectors, we derive a technique that directly calculates Hv, where v is an arbitrary vector. To calculate Hv, we first define a differential operator R_v{f(w)} = (∂/∂r) f(w + rv)|_{r=0}, note that R_v{∇w} = Hv and R_v{w} = v, and then apply R_v{·} to the equations used to compute ∇w. The result is an exact and numerically stable procedure for computing Hv, which takes about as much computation, and is about as local, as a gradient evaluation. We then apply the technique to a one-pass gradient calculation algorithm (backpropagation), a relaxation gradient calculation algorithm (recurrent backpropagation), and two stochastic gradient calculation algorithms (Boltzmann machines and weight perturbation). Finally, we show that this technique can be used at the heart of many iterative techniques for computing various properties of H, obviating any need to calculate the full Hessian.
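The procedure is concrete enough to write out by hand for a small network: apply R_v{·} to each line of the forward and backward passes, and the transformed backward quantities are exactly Hv. The sketch below does this for a tiny one-hidden-layer tanh network with squared error and checks the result against a central finite difference of the gradient; the network sizes and data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny one-hidden-layer network y = W2 tanh(W1 x), squared error E = 0.5 ||y - t||^2.
nx, nh, ny = 3, 4, 2
W1 = 0.5 * rng.normal(size=(nh, nx))
W2 = 0.5 * rng.normal(size=(ny, nh))
x = rng.normal(size=nx)
t = rng.normal(size=ny)

def gradient(W1, W2):
    """Ordinary forward and backward passes; returns (dE/dW1, dE/dW2)."""
    a = W1 @ x
    h = np.tanh(a)
    y = W2 @ h
    dy = y - t                        # dE/dy
    dW2 = np.outer(dy, h)
    dh = W2.T @ dy
    da = dh * (1 - h ** 2)
    dW1 = np.outer(da, x)
    return dW1, dW2

def hessian_vector(W1, W2, V1, V2):
    """Apply R_v{.} to every line of the forward and backward passes; the
    transformed gradients (R{dE/dW1}, R{dE/dW2}) equal H v exactly."""
    a = W1 @ x;              Ra = V1 @ x
    h = np.tanh(a);          Rh = (1 - h ** 2) * Ra
    y = W2 @ h;              Ry = V2 @ h + W2 @ Rh
    dy = y - t;              Rdy = Ry
    dW2 = np.outer(dy, h);   RdW2 = np.outer(Rdy, h) + np.outer(dy, Rh)
    dh = W2.T @ dy;          Rdh = V2.T @ dy + W2.T @ Rdy
    da = dh * (1 - h ** 2);  Rda = Rdh * (1 - h ** 2) - dh * 2 * h * Rh
    dW1 = np.outer(da, x);   RdW1 = np.outer(Rda, x)
    return RdW1, RdW2

# Check against a central finite difference of the gradient along (V1, V2).
V1 = rng.normal(size=W1.shape)
V2 = rng.normal(size=W2.shape)
HV1, HV2 = hessian_vector(W1, W2, V1, V2)

eps = 1e-5
g1p, g2p = gradient(W1 + eps * V1, W2 + eps * V2)
g1m, g2m = gradient(W1 - eps * V1, W2 - eps * V2)
print(np.allclose(HV1, (g1p - g1m) / (2 * eps)),
      np.allclose(HV2, (g2p - g2m) / (2 * eps)))
```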
Neural Computation (1989) 1 (2): 263–269.
Published: 01 June 1989
Abstract
Many neural network learning procedures compute gradients of the errors on the output layer of units after they have settled to their final values. We describe a procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize E. Simulations in which networks are taught to move through limit cycles are shown. This type of recurrent network seems particularly suited for temporally continuous domains, such as signal processing, control, and speech.
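One hedged way to make this concrete is to discretize the continuous-time dynamics with an Euler step and backpropagate through the resulting trajectory. The paper derives the gradient for the continuous system itself, so the sketch below is a discretize-then-differentiate stand-in, with network size, time step, inputs, and target trajectory all chosen arbitrarily; a finite-difference check confirms the gradient of the trajectory error functional.

```python
import numpy as np

rng = np.random.default_rng(6)

# Euler-discretized continuous-time recurrent network
#   dy/dt = -y + tanh(W y) + I(t),
# with error functional E = 0.5 * sum_k ||y_k - d_k||^2 over the trajectory.
n, K, dt = 3, 200, 0.05
W = 0.5 * rng.normal(size=(n, n))
I_ext = 0.2 * rng.normal(size=(K, n))
phase = 0.1 * np.arange(K)
target = np.stack([np.sin(phase), np.cos(phase), np.zeros(K)], axis=1)

def error_and_gradient(W):
    # Forward pass: integrate and store the trajectory.
    y = np.zeros((K + 1, n))
    for k in range(K):
        y[k + 1] = y[k] + dt * (-y[k] + np.tanh(W @ y[k]) + I_ext[k])
    e = y[1:] - target
    E = 0.5 * np.sum(e ** 2)
    # Backward pass: backpropagation through time on the discretized dynamics.
    dW = np.zeros_like(W)
    g = np.zeros(n)                              # accumulates dE/dy_{k+1}
    for k in range(K - 1, -1, -1):
        g = g + e[k]                             # direct error term at y_{k+1}
        s = 1 - np.tanh(W @ y[k]) ** 2
        dW += dt * np.outer(s * g, y[k])         # explicit dependence of the step on W
        g = (1 - dt) * g + dt * W.T @ (s * g)    # chain rule back to y_k
    return E, dW

E, dW = error_and_gradient(W)

# Finite-difference check of a single weight.
i, j, eps = 0, 1, 1e-6
Wp, Wm = W.copy(), W.copy()
Wp[i, j] += eps
Wm[i, j] -= eps
numerical = (error_and_gradient(Wp)[0] - error_and_gradient(Wm)[0]) / (2 * eps)
print(f"analytic {dW[i, j]:.6f}  numerical {numerical:.6f}")
```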