Anthony J. Bell
Journal Articles
Publisher: Journals Gateway
Neural Computation (2009) 21 (11): 2991–3009.
Published: 01 November 2009
Abstract
A feedforward spiking network represents a nonlinear transformation that maps a set of input spikes to a set of output spikes. This mapping transforms the joint probability distribution of incoming spikes into a joint distribution of output spikes. We present an algorithm for synaptic adaptation that aims to maximize the entropy of this output distribution, thereby creating a model for the joint distribution of the incoming point processes. The derived learning rule depends on the precise pre- and postsynaptic spike timings. When trained on correlated spike trains, the network learns to extract independent spike trains, uncovering the underlying statistical structure and yielding a more efficient representation of its input.
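The spike-timing-dependent rule itself cannot be reconstructed from the abstract alone. As a hedged illustration of the underlying principle only — maximizing output entropy turns a unit into a density model of its input — here is a minimal rate-based (non-spiking) sketch with a single sigmoid unit. All numerical values (the logistic input density, learning rate, step count) are assumptions for the toy example, and the gradient is the classic single-unit infomax rule rather than the paper's spike-based one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Inputs drawn from a logistic density with scale 0.5, so the
# entropy-maximizing sigmoid is exactly y = sigmoid(2x) (w = 2, b = 0):
# the unit's transfer function becomes the input's own CDF.
x = rng.logistic(loc=0.0, scale=0.5, size=5000)

# Single-unit infomax: ascend E[log |dy/dx|] for y = sigmoid(w x + b).
# Gradients: dw = 1/w + x (1 - 2y),  db = 1 - 2y.
w, b = 1.0, 0.0
lr = 0.05
for step in range(2000):
    y = 0.5 * (1.0 + np.tanh(0.5 * (w * x + b)))  # overflow-safe sigmoid
    w += lr * (1.0 / w + np.mean(x * (1.0 - 2.0 * y)))
    b += lr * np.mean(1.0 - 2.0 * y)

# At maximal output entropy, y is uniform on (0, 1): the unit has
# matched its slope to the input density.
```

After training, `w` approaches 2 and `b` stays near 0, so the output histogram is approximately flat — the one-dimensional analogue of the entropy maximization the abstract describes for joint spike distributions.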
Neural Computation (1995) 7 (6): 1129–1159.
Published: 01 November 1995
Abstract
We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in "blind" signal processing.
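The source-separation application described above can be sketched in a few lines. This is a toy demonstration under stated assumptions — two Laplacian sources, a hypothetical 2×2 mixing matrix, and the natural-gradient form of the update (a later refinement due to Amari and colleagues; the paper itself uses the plain gradient) — not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two statistically independent super-Gaussian (Laplacian) sources.
n = 10000
S = rng.laplace(size=(2, n))

# Hypothetical unknown mixing matrix; X is what the network observes.
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])
X = A @ S

# Infomax ICA: ascend the entropy of y = sigmoid(W x) in W.
# Natural-gradient update: dW = (I + (1 - 2y) u^T) W, with u = W x.
W = np.eye(2)
lr = 0.01
batch = 100
for epoch in range(100):
    perm = rng.permutation(n)
    for i in range(0, n, batch):
        u = W @ X[:, perm[i:i + batch]]
        y = 0.5 * (1.0 + np.tanh(u / 2.0))  # overflow-safe sigmoid
        dW = (np.eye(2) + (1.0 - 2.0 * y) @ u.T / batch) @ W
        W += lr * dW

# If separation succeeded, W A is close to a scaled permutation:
# each output recovers one source up to sign and amplitude.
P = W @ A
```

The scaled-permutation check on `P` is the standard way to verify blind separation, since ICA can recover sources only up to ordering and scale.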