E. Paxon Frady
Neural Computation (2025) 37 (1): 1–37.
Published: 12 December 2024
Abstract
We introduce residue hyperdimensional computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using resources that scale only logarithmically with the range, a vast improvement over previous methods. It also exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
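A minimal sketch of the encoding idea described above, assuming phasor (complex unit-magnitude) vectors and small illustrative moduli; the dimensionality, moduli, and brute-force decoder below are not from the article, which pairs the representation with an efficient factorization method instead of exhaustive search:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000                     # vector dimensionality (illustrative)
moduli = [3, 5, 7]           # pairwise coprime; combined range 3*5*7 = 105

# One random phasor base vector per modulus m: every component is an m-th
# root of unity, so raising the base to the m-th power gives all ones.
bases = {m: np.exp(2j * np.pi * rng.integers(0, m, D) / m) for m in moduli}

def encode(x):
    """Bind the residue encodings of x: a Hadamard product of phasor vectors."""
    v = np.ones(D, dtype=complex)
    for m in moduli:
        v *= bases[m] ** (x % m)       # exponentiation is component-wise
    return v

def decode(v):
    """Brute-force decode over the full range; the article instead factorizes
    the vector efficiently (with a resonator network) for this step."""
    full_range = int(np.prod(moduli))
    sims = [np.real(np.vdot(encode(k), v)) / D for k in range(full_range)]
    return int(np.argmax(sims))

# Component-wise multiplication of two encodings adds the underlying numbers
# modulo the combined range: 17 + 42 = 59.
print(decode(encode(17) * encode(42)))
```

The resources used here grow with the number of moduli times the vector dimension, which is how the representation covers a large dynamic range at logarithmic cost.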
Neural Computation (2023) 35 (7): 1159–1186.
Published: 12 June 2023
Abstract
We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The techniques are categorized into four groups. We then evaluate the considered techniques in several settings that involve, for example, inclusion of external noise and storage elements with reduced precision. In particular, we find that the decoding techniques from the sparse coding and compressed sensing literature (rarely used for hyperdimensional computing/vector symbolic architectures) are also well suited for decoding information from the compositional distributed representations. Combining these decoding techniques with interference cancellation ideas from communications improves previously reported bounds (Hersche et al., 2021) on the information rate of the distributed representations from 1.20 to 1.40 bits per dimension for smaller codebooks and from 0.60 to 1.26 bits per dimension for larger codebooks.
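A toy illustration of two of the decoding regimes discussed (a plain matched-filter readout versus readout with successive interference cancellation), using a random bipolar codebook and permutation-based slot binding; all sizes and the binding scheme are assumptions, not the article's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
D, M, K = 1000, 64, 10        # dimension, codebook size, stored items (assumed)
codebook = rng.choice([-1.0, 1.0], size=(M, D))

# Each of the K slots gets its own random permutation so the superposition
# keeps the slots distinguishable.
perms = [rng.permutation(D) for _ in range(K)]
symbols = rng.integers(0, M, K)
trace = sum(codebook[s][p] for s, p in zip(symbols, perms))

def decode_slot(vec, perm):
    unbound = vec[np.argsort(perm)]            # undo the slot permutation
    return int(np.argmax(codebook @ unbound))  # nearest codevector (matched filter)

# 1) Matched filter: every slot is decoded against the full, noisy trace.
mf = [decode_slot(trace, p) for p in perms]

# 2) Successive interference cancellation: subtract each decoded codevector
#    before decoding the next slot, so crosstalk noise shrinks as we go.
residual, sic = trace.copy(), []
for p in perms:
    s_hat = decode_slot(residual, p)
    sic.append(s_hat)
    residual -= codebook[s_hat][p]

print(np.mean(np.array(mf) == symbols), np.mean(np.array(sic) == symbols))
```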
Neural Computation (2020) 32 (12): 2311–2331.
Published: 01 December 2020
Abstract
The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of Vector Symbolic Architectures (VSAs) (Plate, 1991; Gayler, 1998; Kanerva, 1996), whereby data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations. In particular, we propose an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure: the factorization of products of multiple codevectors. Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion. We show in two examples—parsing of a tree-like data structure and parsing of a visual scene—how the factorization problem arises and how the resonator network can solve it. More broadly, resonator networks open the possibility of applying VSAs to myriad artificial intelligence problems in real-world domains. The companion article in this issue (Kent, Frady, Sommer, & Olshausen, 2020) presents a rigorous analysis and evaluation of the performance of resonator networks, showing it outperforms alternative approaches.
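A minimal bipolar sketch of the resonator iteration described above: each factor estimate is refined by unbinding the other current estimates from the composite (element-wise multiplication, since bipolar vectors are self-inverse) and cleaning up through its own codebook. The dimensionality, codebook sizes, and iteration count are arbitrary choices, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(2)
D, M = 1024, 20                 # dimension and per-factor codebook size (assumed)
A, B, C = (rng.choice([-1.0, 1.0], size=(M, D)) for _ in range(3))

# Composite: Hadamard product of one (unknown) codevector from each codebook.
ia, ib, ic = rng.integers(0, M, 3)
s = A[ia] * B[ib] * C[ic]

sgn = lambda x: np.where(x >= 0, 1.0, -1.0)   # sign with ties broken to +1

# Initialize each estimate to the superposition of its whole codebook, then
# iterate: unbind the other two estimates and clean up via the codebook.
a_hat, b_hat, c_hat = sgn(A.sum(0)), sgn(B.sum(0)), sgn(C.sum(0))
for _ in range(200):
    a_hat = sgn(A.T @ (A @ (s * b_hat * c_hat)))
    b_hat = sgn(B.T @ (B @ (s * a_hat * c_hat)))
    c_hat = sgn(C.T @ (C @ (s * a_hat * b_hat)))

print(np.argmax(A @ a_hat) == ia, np.argmax(B @ b_hat) == ib, np.argmax(C @ c_hat) == ic)
```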
Resonator Networks, 2: Factorization Performance and Capacity Compared to Optimization-Based Methods
Neural Computation (2020) 32 (12): 2332–2388.
Published: 01 December 2020
Abstract
We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed by the Hadamard product between a discrete set of high-dimensional vectors, a resonator network can efficiently decompose the composite into these factors. We compare the performance of resonator networks against optimization-based methods, including Alternating Least Squares and several gradient-based algorithms, showing that resonator networks are superior in several important ways. This advantage is achieved by leveraging a combination of nonlinear dynamics and searching in superposition, by which estimates of the correct solution are formed from a weighted superposition of all possible solutions. While the alternative methods also search in superposition, the dynamics of resonator networks allow them to strike a more effective balance between exploring the solution space and exploiting local information to drive the network toward probable solutions. Resonator networks are not guaranteed to converge, but within a particular regime they almost always do. In exchange for relaxing the guarantee of global convergence, resonator networks are dramatically more effective at finding factorizations than all alternative approaches considered.
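To contrast with the resonator's nonlinear cleanup shown above, here is an alternating linear-projection variant in the spirit of the Alternating Least Squares baseline: each estimate remains a weighted superposition over its whole codebook. This is an illustrative reconstruction, not the benchmark implementation from the article, and, consistent with the comparison reported there, such linear updates can fail to recover the factors as the problem grows:

```python
import numpy as np

rng = np.random.default_rng(3)
D, M = 1024, 20
A, B, C = (rng.choice([-1.0, 1.0], size=(M, D)) for _ in range(3))
ia, ib, ic = rng.integers(0, M, 3)
s = A[ia] * B[ib] * C[ic]

def project(codebook, v):
    """Least-squares projection of v onto the span of the codebook rows,
    so the estimate stays a weighted superposition of all codevectors."""
    coeff, *_ = np.linalg.lstsq(codebook.T, v, rcond=None)
    out = codebook.T @ coeff
    return out / (np.linalg.norm(out) + 1e-12)   # keep the scale bounded

a_hat, b_hat, c_hat = A.mean(0), B.mean(0), C.mean(0)
for _ in range(100):
    a_hat = project(A, s * b_hat * c_hat)
    b_hat = project(B, s * a_hat * c_hat)
    c_hat = project(C, s * a_hat * b_hat)

print(np.argmax(A @ a_hat) == ia, np.argmax(B @ b_hat) == ib, np.argmax(C @ c_hat) == ic)
```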
Neural Computation (2018) 30 (6): 1449–1513.
Published: 01 June 2018
Abstract
To accommodate structured approaches to neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
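A sketch of the sequence-indexing idea under simple assumptions: a random permutation stands in for the orthogonal recurrent weights, random bipolar vectors encode the symbols, and recall uses the winner-take-all readout (the Wiener-filter readout discussed in the article is not shown). All sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
D, M, T = 2000, 32, 15          # dimension, alphabet size, sequence length (assumed)
codebook = rng.choice([-1.0, 1.0], size=(M, D))
perm = rng.permutation(D)       # a permutation acts as the orthogonal recurrent weights

seq = rng.integers(0, M, T)
x = np.zeros(D)
for s in seq:                   # each step: rotate the trace, then add the new symbol
    x = x[perm] + codebook[s]

def recall(x, d):
    """Winner-take-all readout of the symbol presented d steps in the past."""
    probes = codebook
    for _ in range(d):          # advance each codevector d steps, as the trace did
        probes = probes[:, perm]
    return int(np.argmax(probes @ x))

# Recovers the whole sequence here; capacity is limited by crosstalk noise.
print([recall(x, d) for d in range(T)] == list(seq[::-1]))
```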
Scalable Semisupervised Functional Neurocartography Reveals Canonical Neurons in Behavioral Networks
Neural Computation (2016) 28 (8): 1453–1497.
Published: 01 August 2016
Abstract
Large-scale data collection efforts to map the brain are underway at multiple spatial and temporal scales, but all face fundamental problems posed by high-dimensional data and intersubject variability. Even seemingly simple problems, such as identifying a neuron or brain region across animals or subjects, become exponentially more difficult in high dimensions, for example when dozens of neurons or brain regions must be recognized simultaneously. We present a framework and tools for functional neurocartography—the large-scale mapping of neural activity during behavioral states. Using a voltage-sensitive dye (VSD), we imaged the multifunctional responses of hundreds of leech neurons during several behaviors to identify and functionally map homologous neurons. We extracted simple features from each of these behaviors and combined them with anatomical features to create a rich medium-dimensional feature space. This enabled us to use machine learning techniques and visualizations to characterize and account for intersubject variability, piece together a canonical atlas of neural activity, and identify two behavioral networks. We identified 39 neurons (18 pairs, 3 unpaired) as part of a canonical swim network and 17 neurons (8 pairs, 1 unpaired) involved in a partially overlapping preparatory network. All neurons in the preparatory network rapidly depolarized at the onsets of each behavior, suggesting that it is part of a dedicated rapid-response network. This network is likely mediated by the S cell, and we referenced VSD recordings to an activity atlas to identify multiple cells of interest simultaneously in real time for further experiments. We targeted and electrophysiologically verified several neurons in the swim network and further showed that the S cell is presynaptic to multiple neurons in the preparatory network. This study illustrates the basic framework to map neural activity in high dimensions with large-scale recordings and how to extract the rich information necessary to perform analyses in light of intersubject variability.
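The matching step at the core of the pipeline (aligning putatively homologous neurons across subjects in a shared feature space) can be sketched in toy form. The article's semisupervised procedure involves VSD imaging, manual anchoring, and visualization tools; the snippet below only illustrates cross-subject matching on synthetic features, with all names, sizes, and the use of the Hungarian algorithm being illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
n_neurons, n_features = 40, 12   # e.g. per-behavior activity plus anatomical features (assumed)

# Hypothetical feature matrices for the same neurons in two animals: the second
# animal's features are a shuffled, noisy copy of the first's.
subject_a = rng.normal(size=(n_neurons, n_features))
order = rng.permutation(n_neurons)
subject_b = subject_a[order] + 0.2 * rng.normal(size=(n_neurons, n_features))

# Normalize within subject, then find the one-to-one matching that minimizes
# total feature distance (Hungarian algorithm).
za = (subject_a - subject_a.mean(0)) / subject_a.std(0)
zb = (subject_b - subject_b.mean(0)) / subject_b.std(0)
row, col = linear_sum_assignment(cdist(za, zb))

print(np.mean(order[col] == row))   # fraction of neurons matched to their true homolog
```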