Carina Curto
Journal Articles
Publisher: Journals Gateway
Neural Computation (2019) 31 (1): 94–155.
Published: 01 January 2019
Abstract
Threshold-linear networks (TLNs) are models of neural networks that consist of simple, perceptron-like neurons and exhibit nonlinear dynamics determined by the network's connectivity. The fixed points of a TLN, including both stable and unstable equilibria, play a critical role in shaping its emergent dynamics. In this work, we provide two novel characterizations for the set of fixed points of a competitive TLN: the first is in terms of a simple sign condition, while the second relies on the concept of domination. We apply these results to a special family of TLNs, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. This leads us to prove a series of graph rules that enable one to determine fixed points of a CTLN by analyzing the underlying graph. In addition, we study larger networks composed of smaller building block subnetworks and prove several theorems relating the fixed points of the full network to those of its components. Our results provide the foundation for a kind of graphical calculus to infer features of the dynamics from a network's connectivity.
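The dynamics described above can be made concrete with a short numerical sketch. The code below assumes the standard threshold-linear equation dx/dt = -x + [Wx + b]_+ and a common CTLN parameterization in which graph edges give off-diagonal weight -1 + eps and non-edges give -1 - delta; the parameter values, the adjacency convention (G[i, j] = 1 meaning j -> i), and the 3-cycle example are illustrative choices rather than details taken from the article.

```python
import numpy as np

def ctln_weights(G, eps=0.25, delta=0.5):
    """Build a CTLN weight matrix from a directed graph.

    G[i, j] = 1 is taken to mean there is an edge j -> i. Off-diagonal weights
    are -1 + eps where there is an edge and -1 - delta where there is not;
    the diagonal is zero (no self-connections)."""
    W = np.where(G == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_tln(W, b, x0, dt=0.01, T=50.0):
    """Forward-Euler integration of the threshold-linear dynamics
    dx/dt = -x + [Wx + b]_+, where [.]_+ denotes rectification max(0, .)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(0.0, W @ x + b))
        traj.append(x.copy())
    return np.array(traj)

# Example: a directed 3-cycle (0 -> 1 -> 2 -> 0), encoded with G[i, j] = 1 meaning j -> i.
G = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
W = ctln_weights(G)
b = np.ones(3)                          # constant external drive
traj = simulate_tln(W, b, x0=[0.2, 0.1, 0.0])
print(traj[-5:])                        # late-time activity of the three neurons
```

Changing the directed graph G changes W, and with it the network's fixed points and attractors; the graph rules in the article concern reading off that dependence directly from G.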
Journal Articles
Publisher: Journals Gateway
Neural Computation (2016) 28 (12): 2825–2852.
Published: 01 December 2016
Abstract
Threshold-linear networks are a common class of firing rate models that describe recurrent interactions among neurons. Unlike their linear counterparts, these networks generically possess multiple stable fixed points (steady states), making them viable candidates for memory encoding and retrieval. In this work, we characterize stable fixed points of general threshold-linear networks with constant external drive and discover constraints on the coexistence of fixed points involving different subsets of active neurons. In the case of symmetric networks, we prove the following antichain property: if a set of neurons is the support of a stable fixed point, then no proper subset or superset of that set can support a stable fixed point. Symmetric threshold-linear networks thus appear to be well suited for pattern completion, since the dynamics are guaranteed not to get stuck in a subset or superset of a stored pattern. We also show that for any graph G, we can construct a network whose stable fixed points correspond precisely to the maximal cliques of G. As an application, we design network decoders for place field codes and demonstrate their efficacy for error correction and pattern completion. The proofs of our main results build on the theory of permitted sets in threshold-linear networks, including recently developed connections to classical distance geometry.
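A brute-force numerical check of stable fixed-point supports makes the antichain statement easy to explore on small examples. The sketch below uses the standard support conditions for fixed points of dx/dt = -x + [Wx + b]_+ (a positive solution on the support, below-threshold input to every neuron off the support, and a stable linearization on the support); these conditions and the random symmetric example network are stand-ins, not the precise characterization or constructions from the article.

```python
import itertools
import numpy as np

def stable_fixed_point_supports(W, b, tol=1e-9):
    """Enumerate supports of stable fixed points of dx/dt = -x + [Wx + b]_+.

    For each subset sigma of neurons, solve (I - W[sigma, sigma]) x_sigma = b[sigma]
    and keep sigma if (i) x_sigma is strictly positive, (ii) every neuron outside
    sigma receives at- or below-threshold input, and (iii) -I + W[sigma, sigma]
    has eigenvalues with negative real part (stable linearization on the support)."""
    n = W.shape[0]
    supports = []
    for k in range(1, n + 1):
        for sigma in itertools.combinations(range(n), k):
            s = list(sigma)
            try:
                x_s = np.linalg.solve(np.eye(k) - W[np.ix_(s, s)], b[s])
            except np.linalg.LinAlgError:
                continue
            if np.any(x_s <= tol):
                continue                      # some "active" neuron would be at or below zero
            off = [i for i in range(n) if i not in sigma]
            if off and np.any(W[np.ix_(off, s)] @ x_s + b[off] > tol):
                continue                      # some silent neuron would turn on
            if np.all(np.linalg.eigvals(-np.eye(k) + W[np.ix_(s, s)]).real < -tol):
                supports.append(set(sigma))
    return supports

def is_antichain(supports):
    """True if no support is a proper subset of another (the antichain property)."""
    return all(not (s1 < s2 or s2 < s1)
               for s1, s2 in itertools.combinations(supports, 2))

# Example: a small random symmetric inhibitory network (illustrative values only).
rng = np.random.default_rng(0)
A = -0.5 * rng.random((4, 4))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
b = np.ones(4)
supports = stable_fixed_point_supports(W, b)
print(supports, is_antichain(supports))
```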
Journal Articles
Publisher: Journals Gateway
Neural Computation (2013) 25 (11): 2858–2903.
Published: 01 November 2013
Abstract
Networks of neurons in the brain encode preferred patterns of neural activity via their synaptic connections. Despite receiving considerable attention, the precise relationship between network connectivity and encoded patterns is still poorly understood. Here we consider this problem for networks of threshold-linear neurons whose computational function is to learn and store a set of binary patterns (e.g., a neural code) as “permitted sets” of the network. We introduce a simple encoding rule that selectively turns “on” synapses between neurons that coappear in one or more patterns. The rule uses synapses that are binary, in the sense of having only two states (“on” or “off”), but also heterogeneous, with weights drawn from an underlying synaptic strength matrix S. Our main results precisely describe the stored patterns that result from the encoding rule, including unintended “spurious” states, and give an explicit characterization of the dependence on S. In particular, we find that binary patterns are successfully stored in these networks when the excitatory connections between neurons are geometrically balanced, i.e., they satisfy a set of geometric constraints. Furthermore, we find that certain types of neural codes are natural in the context of these networks, meaning that the full code can be accurately learned from a highly undersampled set of patterns. Interestingly, many commonly observed neural codes in cortical and hippocampal areas are natural in this sense. As an application, we construct networks that encode hippocampal place field codes nearly exactly, following presentation of only a small fraction of patterns. To obtain our results, we prove new theorems using classical ideas from convex and distance geometry, such as Cayley-Menger determinants, revealing a novel connection between these areas of mathematics and coding properties of neural networks.
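The coappearance rule described above lends itself to a compact sketch. In the version below, a synapse (i, j) is turned “on” with strength S[i, j] whenever neurons i and j are jointly active in at least one stored pattern and is left “off” (zero) otherwise; the exact on/off values, any normalization, and the structure required of S in the article may differ, and the example patterns and strength matrix are invented for illustration.

```python
import numpy as np

def encode_patterns(patterns, S):
    """Sketch of the coappearance encoding rule: a synapse (i, j) is "on" with
    weight S[i, j] if neurons i and j are both active in at least one stored
    pattern, and "off" (weight 0) otherwise. Diagonal entries are left at zero."""
    n = S.shape[0]
    coactive = np.zeros((n, n), dtype=bool)
    for p in patterns:
        active = np.flatnonzero(p)
        coactive[np.ix_(active, active)] = True
    np.fill_diagonal(coactive, False)
    return np.where(coactive, S, 0.0)

# Example: three binary patterns on five neurons and an arbitrary symmetric
# strength matrix S (both invented for illustration).
patterns = [np.array([1, 1, 0, 0, 0]),
            np.array([0, 1, 1, 1, 0]),
            np.array([0, 0, 0, 1, 1])]
rng = np.random.default_rng(0)
S = rng.uniform(0.1, 1.0, size=(5, 5))
S = (S + S.T) / 2
W = encode_patterns(patterns, S)
print(W)
```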
Journal Articles
Publisher: Journals Gateway
Neural Computation (2013) 25 (7): 1891–1925.
Published: 01 July 2013
Abstract
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of RF codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, RF codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure must not only support error correction but also reflect relationships between stimuli.
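The kind of comparison described above can be sketched with a toy experiment: build a one-dimensional receptive field code, corrupt codewords with independent bit flips, decode by nearest codeword in Hamming distance (maximum likelihood for a binary symmetric channel), and repeat with a random code of matched size and sparsity. The code construction, noise level, and performance measure below are simplified stand-ins for the ones used in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def rf_code(n_neurons=10, n_stimuli=60, width=0.25):
    """Toy 1D receptive field code: each neuron is active on an interval of stimulus space."""
    centers = rng.uniform(0, 1, n_neurons)
    stimuli = np.linspace(0, 1, n_stimuli)
    words = (np.abs(stimuli[:, None] - centers[None, :]) < width / 2).astype(int)
    return np.unique(words, axis=0)

def fraction_corrected(codebook, p_flip=0.1, n_trials=2000):
    """Transmit random codewords through a binary symmetric channel and decode
    by nearest codeword in Hamming distance (maximum likelihood for this channel)."""
    correct = 0
    for _ in range(n_trials):
        c = codebook[rng.integers(len(codebook))]
        noisy = c ^ (rng.random(c.shape) < p_flip)
        decoded = codebook[np.argmin(np.sum(codebook != noisy, axis=1))]
        correct += np.array_equal(decoded, c)
    return correct / n_trials

rf = rf_code()
random_code = (rng.random(rf.shape) < rf.mean()).astype(int)   # matched size and sparsity
print("RF code:    ", fraction_corrected(rf))
print("random code:", fraction_corrected(random_code))
```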
Journal Articles
Publisher: Journals Gateway
Neural Computation (2008) 20 (3): 644–667.
Published: 01 March 2008
Abstract
The ultimate product of an electrophysiology experiment is often a decision on which biological hypothesis or model best explains the observed data. We outline a paradigm designed for comparison of different models, which we refer to as spike train prediction. A key ingredient of this paradigm is a prediction quality valuation that estimates how close a predicted conditional intensity function is to an actual observed spike train. Although a valuation based on log likelihood (L) is most natural, it has various complications in this context. We propose that a quadratic valuation (Q) can be used as an alternative to L. Q shares some important theoretical properties with L, including consistency, and the two valuations perform similarly on simulated and experimental data. Moreover, Q is more robust than L, and optimization with Q can dramatically improve computational efficiency. We illustrate the utility of Q for comparing models of peer prediction, where it can be computed directly from cross-correlograms. Although Q does not have a straightforward probabilistic interpretation, it is essentially given by Euclidean distance.
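To make the two valuations concrete, the sketch below evaluates a discretized point-process log likelihood L and one plausible quadratic valuation Q for candidate intensity models against a simulated spike train. The particular form of Q used here (predicted intensity summed over spike bins minus half the integral of the squared intensity, which equals a negative squared Euclidean distance between intensity and spike train up to a constant) is an assumption consistent with the abstract's closing remark, not necessarily the article's exact definition; the toy rate models are likewise invented.

```python
import numpy as np

def log_likelihood(intensity, spikes, dt):
    """Discretized point-process log likelihood L of a predicted intensity (Hz)
    given a binned 0/1 spike train: sum of log(intensity*dt) over spike bins
    minus the integral of the intensity."""
    intensity = np.clip(intensity, 1e-12, None)      # avoid log(0)
    return np.sum(spikes * np.log(intensity * dt)) - np.sum(intensity) * dt

def quadratic_valuation(intensity, spikes, dt):
    """One plausible quadratic valuation Q (an assumption, not the article's
    definition): intensity summed over spike bins minus half the integral of the
    squared intensity, i.e. a negative squared Euclidean distance up to a constant."""
    return np.sum(spikes * intensity) - 0.5 * np.sum(intensity ** 2) * dt

# Toy comparison: score two candidate rate models against a simulated spike train.
rng = np.random.default_rng(2)
dt, T = 0.001, 10.0
t = np.arange(0.0, T, dt)
true_rate = 20 + 15 * np.sin(2 * np.pi * t)          # Hz, invented for illustration
spikes = (rng.random(t.shape) < true_rate * dt).astype(int)
candidates = {"true-rate model": true_rate,
              "flat-rate model": np.full_like(t, true_rate.mean())}
for name, model in candidates.items():
    print(name, log_likelihood(model, spikes, dt), quadratic_valuation(model, spikes, dt))
```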