Katherine Morrison (3 journal articles)
Neural Computation (2019) 31 (1): 94–155.
Published: 01 January 2019
Abstract
Threshold-linear networks (TLNs) are models of neural networks that consist of simple, perceptron-like neurons and exhibit nonlinear dynamics determined by the network's connectivity. The fixed points of a TLN, including both stable and unstable equilibria, play a critical role in shaping its emergent dynamics. In this work, we provide two novel characterizations for the set of fixed points of a competitive TLN: the first is in terms of a simple sign condition, while the second relies on the concept of domination. We apply these results to a special family of TLNs, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. This leads us to prove a series of graph rules that enable one to determine fixed points of a CTLN by analyzing the underlying graph. In addition, we study larger networks composed of smaller building block subnetworks and prove several theorems relating the fixed points of the full network to those of its components. Our results provide the foundation for a kind of graphical calculus to infer features of the dynamics from a network's connectivity.
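To make the setup concrete, the TLN dynamics in question take the standard form dx/dt = -x + [Wx + θ]₊, where [·]₊ is the threshold-linearity. Below is a minimal sketch, assuming the usual CTLN parameterization from this line of work (W_ii = 0, W_ij = -1 + ε when the graph has edge j → i, and -1 - δ otherwise, with common illustrative values ε = 0.25, δ = 0.5, θ = 1); the helper names and the forward-Euler integrator are our own illustrative choices, not the paper's code.

```python
import numpy as np

def ctln_matrix(edges, n, eps=0.25, delta=0.5):
    """CTLN connectivity from a directed graph on n nodes.

    W[i, j] = -1 + eps if the graph has edge j -> i,
    W[i, j] = -1 - delta otherwise, and W[i, i] = 0.
    """
    W = np.full((n, n), -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    for (j, i) in edges:            # edge j -> i
        W[i, j] = -1.0 + eps
    return W

def simulate_tln(W, theta=1.0, dt=0.01, T=50.0, seed=0):
    """Forward-Euler integration of dx/dt = -x + [W x + theta]_+."""
    rng = np.random.default_rng(seed)
    x = 0.1 * rng.random(W.shape[0])
    for _ in range(int(T / dt)):
        x += dt * (-x + np.maximum(0.0, W @ x + theta))
    return x

# A 3-clique (all edges in both directions). A clique supports a stable
# fixed point; here the predicted value on the support is
# x_i = theta / (1 + 2*(1 - eps)) = 0.4 for every neuron.
# (A directed 3-cycle, by contrast, yields a limit cycle, not a stable
# fixed point.)
edges = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1)]
W = ctln_matrix(edges, n=3)
print("state after T=50:", np.round(simulate_tln(W), 3))  # ~[0.4 0.4 0.4]
```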
Neural Computation (2016) 28 (12): 2825–2852.
Published: 01 December 2016
Abstract
Threshold-linear networks are a common class of firing rate models that describe recurrent interactions among neurons. Unlike their linear counterparts, these networks generically possess multiple stable fixed points (steady states), making them viable candidates for memory encoding and retrieval. In this work, we characterize stable fixed points of general threshold-linear networks with constant external drive and discover constraints on the coexistence of fixed points involving different subsets of active neurons. In the case of symmetric networks, we prove the following antichain property: if a set of neurons is the support of a stable fixed point, then no proper subset or superset of that set can support a stable fixed point. Symmetric threshold-linear networks thus appear to be well suited for pattern completion, since the dynamics are guaranteed not to get stuck in a subset or superset of a stored pattern. We also show that for any graph G, we can construct a network whose stable fixed points correspond precisely to the maximal cliques of G. As an application, we design network decoders for place field codes and demonstrate their efficacy for error correction and pattern completion. The proofs of our main results build on the theory of permitted sets in threshold-linear networks, including recently developed connections to classical distance geometry.
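The clique correspondence can be checked numerically. The sketch below builds a symmetric weight matrix from an undirected graph (edges get -1 + ε, non-edges -1 - δ) and compares the supports of attractors reached from random initial conditions against the maximal cliques enumerated by networkx. The precise conditions on ε and δ under which the correspondence holds are derived in the paper; ε = 0.25 and δ = 0.5 are merely common illustrative values, and the function names are our own.

```python
import numpy as np
import networkx as nx

def clique_network(G, eps=0.25, delta=0.5):
    """Symmetric TLN weights encoding an undirected graph G:
    -1 + eps on edges, -1 - delta on non-edges, 0 on the diagonal."""
    n = G.number_of_nodes()
    W = np.full((n, n), -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    for (i, j) in G.edges():
        W[i, j] = W[j, i] = -1.0 + eps
    return W

def fixed_point_support(W, x0, theta=1.0, dt=0.01, T=100.0, tol=1e-3):
    """Integrate dx/dt = -x + [W x + theta]_+ and return the active set."""
    x = np.array(x0, float)
    for _ in range(int(T / dt)):
        x += dt * (-x + np.maximum(0.0, W @ x + theta))
    return frozenset(np.flatnonzero(x > tol))

# A graph with two maximal cliques: {0, 1, 2} and {2, 3}.
G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3)])
W = clique_network(G)

rng = np.random.default_rng(0)
supports = {fixed_point_support(W, x0) for x0 in rng.random((20, 4))}
print("attractor supports:", sorted(map(sorted, supports)))
print("maximal cliques:  ", sorted(map(sorted, nx.find_cliques(G))))
```

Run from enough random initial conditions, the recovered supports should match the maximal-clique list, which is the pattern-completion behavior the abstract describes: partial activation of a clique is completed to the whole clique and cannot overshoot into a superset.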
Neural Computation (2013) 25 (7): 1891–1925.
Published: 01 July 2013
Abstract
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure must serve not only error correction but also reflect the relationships between stimuli.
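As a concrete illustration of the RF-code setup (not the paper's exact experiments), the sketch below builds a 1D place field code, corrupts a codeword with independent bit flips, and decodes by minimum Hamming distance, which is maximum-likelihood decoding for a binary symmetric channel. The field width, stimulus grid, and noise level are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Place field code on the circle [0, 1): neuron i fires when the
# stimulus falls inside its receptive field, an arc of width `width`.
n_neurons, width = 10, 0.22
centers = np.sort(rng.random(n_neurons))

def codeword(s):
    """Binary activity pattern evoked by stimulus s."""
    d = np.abs((centers - s + 0.5) % 1.0 - 0.5)   # circular distance
    return (d < width / 2).astype(int)

# Sample the codebook on a fine grid of stimuli.
stimuli = np.linspace(0.0, 1.0, 200, endpoint=False)
codebook = np.unique([codeword(s) for s in stimuli], axis=0)

# Corrupt a codeword with i.i.d. bit flips, then decode to the
# nearest codeword in Hamming distance.
sent = codeword(0.5)
noisy = sent ^ (rng.random(n_neurons) < 0.1).astype(int)
decoded = codebook[np.argmin((codebook != noisy).sum(axis=1))]
print("sent:   ", sent)
print("noisy:  ", noisy)
print("decoded:", decoded, " recovered:", np.array_equal(decoded, sent))
```

Because neighboring stimuli evoke overlapping codewords, nearby codewords in Hamming distance correspond to nearby stimuli; this is the distance-reflecting structure that the abstract contrasts with random comparison codes.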