Search results for Ralph Linsker (1–3 of 3)
Journal Articles
A Local Learning Rule That Enables Information Maximization for Arbitrary Input Distributions
Neural Computation (1997) 9 (8): 1661–1665.
Published: 15 November 1997
Abstract
This note presents a local learning rule that enables a network to maximize the mutual information between input and output vectors. The network's output units may be nonlinear, and the distribution of input vectors is arbitrary. The local algorithm also serves to compute the inverse C⁻¹ of an arbitrary square connection weight matrix.
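The fixed-point structure behind the matrix-inversion claim can be sketched in a few lines. The toy script below is my own illustration, not the paper's rule: it iterates ΔM = η(I − MC), whose unique fixed point is M = C⁻¹. Replacing the averaged term MC with the single-sample outer product y xᵀ (where y = Mx and x has correlation matrix C) would make each weight update depend only on locally available pre- and postsynaptic activity, which is the sense in which such a scheme can be called local.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical positive-definite matrix C to invert.
n = 4
A = rng.normal(size=(n, n))
C = A @ A.T + 0.1 * np.eye(n)

# Iterate dM = eta * (I - M C). The unique fixed point is M = C^{-1},
# and the error contracts by a factor (I - eta*C) each step, so the
# iteration converges for any eta < 2 / lambda_max(C).
M = np.zeros((n, n))
eta = 0.01
for _ in range(20000):
    M += eta * (np.eye(n) - M @ C)

print(np.round(M @ C, 3))  # approximately the identity, i.e. M ~ C^{-1}
```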
Journal Articles
Local Synaptic Learning Rules Suffice to Maximize Mutual Information in a Linear Network
Neural Computation (1992) 4 (5): 691–702.
Published: 01 September 1992
Abstract
A network that develops to maximize the mutual information between its output and the signal portion of its input (which is admixed with noise) is useful for extracting salient input features, and may provide a model for aspects of biological neural network function. I describe a local synaptic learning rule that performs stochastic gradient ascent in this information-theoretic quantity, for the case in which the input-output mapping is linear and the input signal and noise are multivariate gaussian. Feedforward connection strengths are modified by a Hebbian rule during a "learning" phase in which examples of input signal plus noise are presented to the network, and by an anti-Hebbian rule during an "unlearning" phase in which examples of noise alone are presented. Each recurrent lateral connection has two values of connection strength, one for each phase; these values are updated by an anti-Hebbian rule.
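As a rough illustration of the two-phase structure (not the paper's full rule, which also uses lateral connections with per-phase strengths), here is a toy script with hypothetical gaussian signal and noise covariances: a Hebbian feedforward update when signal plus noise is presented, an anti-Hebbian update when noise alone is presented. Averaged over the two phases, the noise correlations cancel and the weights grow along the high-variance signal directions; a simple norm clamp stands in for the stabilizing role the lateral network plays in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 8, 2
eta, noise_std = 1e-3, 0.5

# Hypothetical signal covariance: variance concentrated in two input directions.
Ls = np.diag([2.0, 1.5] + [0.1] * (n_in - 2))
W = rng.normal(scale=0.1, size=(n_out, n_in))

for _ in range(30000):
    # "Learning" phase: signal plus noise, Hebbian update.
    x = Ls @ rng.normal(size=n_in) + noise_std * rng.normal(size=n_in)
    y = W @ x
    W += eta * np.outer(y, x)

    # "Unlearning" phase: noise alone, anti-Hebbian update.
    x = noise_std * rng.normal(size=n_in)
    y = W @ x
    W -= eta * np.outer(y, x)

    # Hypothetical stabilizer (the paper's lateral connections serve this role).
    W /= max(1.0, np.linalg.norm(W))

print(np.round(W, 2))  # rows concentrate on the high-signal-variance inputs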
Journal Articles
How to Generate Ordered Maps by Maximizing the Mutual Information between Input and Output Signals
Neural Computation (1989) 1 (3): 402–411.
Published: 01 September 1989
Abstract
A learning rule that performs gradient ascent in the average mutual information between input and an output signal is derived for a system having feedforward and lateral interactions. Several processes emerge as components of this learning rule: Hebb-like modification, and cooperation and competition among processing nodes. Topographic map formation is demonstrated using the learning rule. An analytic expression relating the average mutual information to the response properties of nodes and their geometric arrangement is derived in certain cases. This yields a relation between the local map magnification factor and the probability distribution in the input space. The results provide new links between unsupervised learning and information-theoretic optimization in a system whose properties are biologically motivated.
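The abstract's ingredients, Hebb-like modification together with cooperation and competition among nodes, are also the ingredients of self-organizing map algorithms. A quick way to see ordered-map formation and the density/magnification link is therefore a toy Kohonen-style simulation; note this illustrates the phenomenon, not Linsker's information-theoretic derivation, and the density, learning rate, and annealing schedule below are my own choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D toy map: 20 nodes learning to tile a nonuniform input density.
n_nodes = 20
w = rng.uniform(0, 1, n_nodes)   # each node's preferred input value
T = 20000

for t in range(T):
    # Nonuniform input density: most probability mass near small x.
    x = rng.beta(2, 5)
    # Annealed learning rate and neighborhood width.
    eta = 0.1 * (1 - t / T) + 1e-3
    sigma = 2.0 * (1 - t / T) + 0.5
    # Competition: best-matching node wins; cooperation: neighbors move too.
    win = np.argmin(np.abs(w - x))
    g = np.exp(-0.5 * ((np.arange(n_nodes) - win) / sigma) ** 2)
    w += eta * g * (x - w)

# The preferred values end up approximately monotone in node index (an
# ordered map), spaced densely where p(x) is large: the magnification effect.
print(np.round(w, 2))
```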