Vladimir Itskov
1–4 of 4 results
A No-Go Theorem for One-Layer Feedforward Networks
Neural Computation (2014) 26 (11): 2527–2540. Published: 01 November 2014
Abstract
It is often hypothesized that a crucial role for recurrent connections in the brain is to constrain the set of possible response patterns, thereby shaping the neural code. This implies the existence of neural codes that cannot arise solely from feedforward processing. We set out to find such codes in the context of one-layer feedforward networks and identified a large class of combinatorial codes that indeed cannot be shaped by the feedforward architecture alone. However, these codes are difficult to distinguish from codes that share the same sets of maximal activity patterns in the presence of subtractive noise. When we coarsened the notion of combinatorial neural code to keep track of only maximal patterns, we found the surprising result that all such codes can in fact be realized by one-layer feedforward networks. This suggests that recurrent or many-layer feedforward architectures are not necessary for shaping the (coarse) combinatorial features of neural codes. In particular, it is not possible to infer a computational role for recurrent connections from the combinatorics of neural response patterns alone. Our proofs use mathematical tools from classical combinatorial topology, such as the nerve lemma and the existence of an inverse nerve. An unexpected corollary of our main result is that any prescribed (finite) homotopy type can be realized by a subset of the form , where is a polyhedron.
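The coarsening step described above, from a full combinatorial code to its set of maximal activity patterns, is straightforward to state computationally. Below is a minimal sketch of that extraction for a code represented as a collection of subsets of active neurons; the toy code and the helper name are illustrative and not taken from the paper.

```python
# Minimal sketch (illustrative, not from the paper): "coarsening" a
# combinatorial code by keeping only its maximal activity patterns.

def maximal_patterns(code):
    """Return the codewords (as frozensets of active neurons) that are not
    properly contained in any other codeword of the code."""
    code = {frozenset(c) for c in code}
    return {c for c in code
            if not any(c < other for other in code)}

# Toy example: a code on neurons {1, 2, 3}.
toy_code = [{1}, {2}, {1, 2}, {2, 3}]
print(maximal_patterns(toy_code))   # {frozenset({1, 2}), frozenset({2, 3})}
```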
Encoding Binary Neural Codes in Networks of Threshold-Linear Neurons
Neural Computation (2013) 25 (11): 2858–2903. Published: 01 November 2013
Abstract
Networks of neurons in the brain encode preferred patterns of neural activity via their synaptic connections. Despite receiving considerable attention, the precise relationship between network connectivity and encoded patterns is still poorly understood. Here we consider this problem for networks of threshold-linear neurons whose computational function is to learn and store a set of binary patterns (e.g., a neural code) as “permitted sets” of the network. We introduce a simple encoding rule that selectively turns “on” synapses between neurons that coappear in one or more patterns. The rule uses synapses that are binary, in the sense of having only two states (“on” or “off”), but also heterogeneous, with weights drawn from an underlying synaptic strength matrix S. Our main results precisely describe the stored patterns that result from the encoding rule, including unintended “spurious” states, and give an explicit characterization of the dependence on S. In particular, we find that binary patterns are successfully stored in these networks when the excitatory connections between neurons are geometrically balanced, that is, they satisfy a set of geometric constraints. Furthermore, we find that certain types of neural codes are natural in the context of these networks, meaning that the full code can be accurately learned from a highly undersampled set of patterns. Interestingly, many commonly observed neural codes in cortical and hippocampal areas are natural in this sense. As an application, we construct networks that encode hippocampal place field codes nearly exactly, following presentation of only a small fraction of patterns. To obtain our results, we prove new theorems using classical ideas from convex and distance geometry, such as Cayley-Menger determinants, revealing a novel connection between these areas of mathematics and coding properties of neural networks.
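The following is a minimal sketch of the kind of encoding rule the abstract describes: the synapse between two neurons is turned “on”, with a weight taken from the synaptic strength matrix S, whenever the two neurons co-appear in at least one stored pattern, and is left “off” otherwise. The particular “off” value and the toy patterns are illustrative assumptions; the paper's exact construction and parameter choices may differ.

```python
import numpy as np

def encode_patterns(patterns, S, w_off=0.0):
    """Sketch of a binary-but-heterogeneous encoding rule (illustrative, not
    the paper's exact construction): synapse i -> j is 'on' with weight
    S[i, j] if neurons i and j co-appear in at least one stored pattern,
    and 'off' (weight w_off, an assumed placeholder) otherwise.

    patterns : iterable of sets of neuron indices (the binary patterns)
    S        : (n, n) synaptic strength matrix
    """
    n = S.shape[0]
    W = np.full((n, n), w_off)
    for p in patterns:
        for i in p:
            for j in p:
                if i != j:
                    W[i, j] = S[i, j]       # turn the synapse 'on'
    np.fill_diagonal(W, 0.0)                # no self-connections
    return W

# Toy usage: 4 neurons, two stored patterns.
rng = np.random.default_rng(0)
S = rng.uniform(0.1, 1.0, size=(4, 4))
W = encode_patterns([{0, 1, 2}, {2, 3}], S)
```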
Combinatorial Neural Codes from a Mathematical Coding Theory Perspective
Neural Computation (2013) 25 (7): 1891–1925. Published: 01 July 2013
Abstract
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error-correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of RF codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, RF codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure must not only support error correction but also reflect relationships between stimuli.
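As an illustration of what “error correction” means for a combinatorial code, the sketch below decodes a noisy binary response pattern to the nearest codeword in Hamming distance (the standard maximum-likelihood decoder for a binary symmetric channel). The toy code, the noise model, and the decoder here are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

def nearest_codeword(received, codewords):
    """Decode a noisy binary vector to the closest codeword in Hamming
    distance (ties broken arbitrarily). Illustrative decoder only; the
    paper's exact decoding procedure may differ."""
    codewords = np.asarray(codewords)
    dists = np.sum(codewords != received, axis=1)
    return codewords[np.argmin(dists)]

# Toy code on 5 neurons and a received pattern with one bit flipped.
code = [[1, 1, 0, 0, 0],
        [0, 0, 1, 1, 0],
        [0, 1, 0, 1, 1]]
received = np.array([1, 1, 1, 0, 0])
print(nearest_codeword(received, code))   # [1 1 0 0 0]
```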
Valuations for Spike Train Prediction
Neural Computation (2008) 20 (3): 644–667. Published: 01 March 2008
Abstract
The ultimate product of an electrophysiology experiment is often a decision on which biological hypothesis or model best explains the observed data. We outline a paradigm designed for comparison of different models, which we refer to as spike train prediction. A key ingredient of this paradigm is a prediction quality valuation that estimates how close a predicted conditional intensity function is to an actual observed spike train. Although a valuation based on log likelihood (L) is most natural, it has various complications in this context. We propose that a quadratic valuation (Q) can be used as an alternative to L. Q shares some important theoretical properties with L, including consistency, and the two valuations perform similarly on simulated and experimental data. Moreover, Q is more robust than L, and optimization with Q can dramatically improve computational efficiency. We illustrate the utility of Q for comparing models of peer prediction, where it can be computed directly from cross-correlograms. Although Q does not have a straightforward probabilistic interpretation, Q is essentially given by Euclidean distance.
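The sketch below contrasts, in discretized form, the two valuations mentioned in the abstract: L is taken to be the standard discrete-time point-process (Poisson) log likelihood of binned spike counts given a predicted conditional intensity, and Q is scored, following the abstract's remark that it is “essentially given by Euclidean distance”, as the negative squared distance between the binned spike train and the predicted intensity. The exact definitions and normalizations used in the paper may differ.

```python
import numpy as np

def log_likelihood_valuation(lam, spikes, dt):
    """Discrete-time Poisson log likelihood of binned spike counts `spikes`
    under a predicted conditional intensity `lam` (both length-T arrays);
    terms that depend only on the counts are dropped."""
    lam = np.clip(lam, 1e-12, None)              # avoid log(0)
    return np.sum(spikes * np.log(lam * dt) - lam * dt)

def quadratic_valuation(lam, spikes, dt):
    """Sketch of a quadratic valuation: following the abstract, Q is scored
    as the negative squared Euclidean distance between the binned spike
    train (expressed as a rate) and the predicted intensity. The paper's
    exact normalization is an assumption here and may differ."""
    return -np.sum((spikes / dt - lam) ** 2) * dt

# Toy comparison of two candidate intensity predictions on simulated spikes.
rng = np.random.default_rng(1)
dt, T = 0.001, 5000
true_rate = 20.0 * (1 + np.sin(np.linspace(0, 10, T)))   # time-varying rate
spikes = rng.poisson(true_rate * dt)                      # binned spike counts
flat_rate = np.full(T, true_rate.mean())                  # constant-rate model
for name, lam in [("true", true_rate), ("flat", flat_rate)]:
    print(name,
          round(log_likelihood_valuation(lam, spikes, dt), 1),
          round(quadratic_valuation(lam, spikes, dt), 1))
```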