Cengiz Pehlevan
Journal Articles (1–5 of 5)
On Neural Network Kernels and the Storage Capacity Problem
Neural Computation (2022) 34 (5): 1136–1142. Published 15 April 2022.
Abstract
In this short note, we reify the connection between work on the storage capacity problem in wide two-layer treelike neural networks and the rapidly growing body of literature on kernel limits of wide neural networks. Concretely, we observe that the “effective order parameter” studied in the statistical mechanics literature is exactly equivalent to the infinite-width neural network gaussian process kernel. This correspondence connects the expressivity and trainability of wide two-layer neural networks.
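As a rough, self-contained illustration of the kernel referred to in this abstract (not code from the article; the tanh nonlinearity, the weight scaling, and all names are assumptions), the infinite-width neural network gaussian process kernel of a two-layer network can be approximated by averaging over many random hidden units:

```python
import numpy as np

def nngp_kernel_mc(x1, x2, n_hidden=200_000, phi=np.tanh, seed=0):
    """Monte Carlo estimate of the two-layer NNGP kernel
    K(x1, x2) = E_w[ phi(w . x1) * phi(w . x2) ],  w ~ N(0, I/d).
    Illustrative sketch only; scalings differ across conventions."""
    d = x1.shape[0]
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n_hidden, d))  # random first-layer weights
    h1, h2 = phi(W @ x1), phi(W @ x2)                          # hidden activations for each input
    return float(h1 @ h2) / n_hidden                           # average over hidden units

# example usage: kernel value between two inputs
x, y = np.ones(10), np.arange(10.0) / 10.0
print(nngp_kernel_mc(x, y))
```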
Contrastive Similarity Matching for Supervised Learning
Neural Computation (2021) 33 (5): 1300–1328. Published 13 April 2021.
Abstract
We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections and neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but with significant differences from others in how a contrastive function is constructed.
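To make the layer-wise goal concrete, one illustrative way to write such a target (our notation, not necessarily the paper's exact contrastive objective) is to ask layer l's batch representations Y_l to match a similarity matrix interpolated between its neighbors,

\[
\min_{Y_{l}}\;\Bigl\lVert\,Y_{l}^{\top}Y_{l}\;-\;\bigl[(1-\gamma)\,Y_{l-1}^{\top}Y_{l-1}+\gamma\,Y_{l+1}^{\top}Y_{l+1}\bigr]\Bigr\rVert_F^{2},\qquad \gamma\in[0,1],
\]

where each column of Y_l is the layer-l response to one input and the assumed parameter γ sets where the target sits between the previous and later layers' similarity matrices.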
Why Do Similarity Matching Objectives Lead to Hebbian/Anti-Hebbian Networks?
Neural Computation (2018) 30 (1): 84–124. Published 01 January 2018.
Abstract
Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
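For orientation, the similarity matching objective and the flavor of min-max rewriting the abstract refers to can be sketched as follows (up to scaling and an additive constant; normalizations here are illustrative, and the article derives the exact variable substitutions):

\[
\min_{Y}\,\bigl\lVert X^{\top}X - Y^{\top}Y \bigr\rVert_F^{2}
\;\;\longrightarrow\;\;
\min_{W}\max_{M}\;\frac{1}{T}\sum_{t=1}^{T}\bigl(-4\,x_t^{\top}W^{\top}y_t + 2\,y_t^{\top}M\,y_t\bigr)
\;+\;2\,\operatorname{Tr}\bigl(W^{\top}W\bigr)\;-\;\operatorname{Tr}\bigl(M^{\top}M\bigr),
\]

where W plays the role of feedforward and M of lateral synaptic weights. At the optimum, W and M equal the input-output and output-output correlations, so stochastic updates of the form \(\Delta W \propto y_t x_t^{\top} - W\) (Hebbian) and \(\Delta M \propto y_t y_t^{\top} - M\) (anti-Hebbian) act on each synapse using only locally available quantities.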
Blind Nonnegative Source Separation Using Biological Neural Networks
Neural Computation (2017) 29 (11): 2925–2954. Published 01 November 2017.
Abstract
Blind source separation—the extraction of independent sources from a mixture—is an important problem for both artificial and natural signal processing. Here, we address a special case of this problem when sources (but not the mixing matrix) are known to be nonnegative—for example, due to the physical nature of the sources. We search for the solution to this problem that can be implemented using biologically plausible neural networks. Specifically, we consider the online setting where the data set is streamed to a neural network. The novelty of our approach is that we formulate blind nonnegative source separation as a similarity matching problem and derive neural networks from the similarity matching objective. Importantly, synaptic weights in our networks are updated according to biologically plausible local learning rules.
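A minimal sketch of the online, biologically plausible flavor described above (an illustrative nonnegative similarity-matching-style network with placeholder names, learning rates, and dynamics; not the article's exact algorithm):

```python
import numpy as np

def nsm_online(X, k, lr=0.02, n_dyn=100, seed=0):
    """Illustrative online network for nonnegative source recovery.
    X is a (d, T) array streamed one column at a time; k is the number
    of output units. For each sample, recurrent dynamics with a ReLU
    keep the output nonnegative; feedforward weights W then receive a
    Hebbian update and lateral weights M an anti-Hebbian one."""
    d, T = X.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(k, d))  # feedforward weights
    M = np.zeros((k, k))                                  # lateral weights (zero diagonal)
    Y = np.zeros((k, T))
    for t in range(T):
        x = X[:, t]
        y = np.zeros(k)
        for _ in range(n_dyn):                            # relax toward a fixed point
            y = 0.5 * y + 0.5 * np.maximum(W @ x - M @ y, 0.0)
        W += lr * (np.outer(y, x) - W)                    # Hebbian: track input-output correlation
        M += lr * (np.outer(y, y) - np.diag(y * y) - M)   # anti-Hebbian lateral inhibition
        Y[:, t] = y
    return W, M, Y
```

With mixtures of nonnegative sources as X, the article's analysis concerns when such outputs recover the sources; the sketch above only illustrates the network structure and local updates, not those guarantees.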
A Hebbian/Anti-Hebbian Neural Network for Linear Subspace Learning: A Derivation from Multidimensional Scaling of Streaming Data
Neural Computation (2015) 27 (7): 1461–1495. Published 01 July 2015.
Abstract
Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis, by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state, which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step toward an algorithmic theory of neural computation.
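For concreteness, a standard way to write the kind of similarity-preserving, MDS-style cost the abstract refers to, for streamed inputs x_1, ..., x_T and outputs y_1, ..., y_T (our notation; the article's streaming formulation and normalization may differ), is

\[
\min_{y_1,\dots,y_T}\;\sum_{t=1}^{T}\sum_{t'=1}^{T}\bigl(x_t^{\top}x_{t'}-y_t^{\top}y_{t'}\bigr)^{2},
\]

that is, the low-dimensional outputs are asked to preserve the pairwise inner products of the inputs. Per the abstract, optimizing such a cost online yields Hebbian updates for feedforward synapses and anti-Hebbian updates for lateral ones, with the converged network projecting the input onto its principal subspace.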