Stephen I. Gallant
Representing Objects, Relations, and Sequences
Neural Computation (2013) 25 (8): 2038–2078.
Published: 01 August 2013
Abstract
Vector symbolic architectures (VSAs) are high-dimensional vector representations of objects (e.g., words, image parts), relations (e.g., sentence structures), and sequences for use with machine learning algorithms. They consist of a vector addition operator for representing a collection of unordered objects, a binding operator for associating groups of objects, and a methodology for encoding complex structures. We first develop constraints that machine learning imposes on VSAs; for example, similar structures must be represented by similar vectors. The constraints suggest that current VSAs should represent phrases (“The smart Brazilian girl”) by binding sums of terms, in addition to simply binding the terms directly. We show that matrix multiplication can be used as the binding operator for a VSA, and that matrix elements can be chosen at random. A consequence for living systems is that binding is mathematically possible without the need to specify, in advance, precise neuron-to-neuron connection properties for large numbers of synapses. We describe a VSA that incorporates these ideas, Matrix Binding of Additive Terms (MBAT), which satisfies all the constraints. With respect to machine learning, for some types of problems appropriate VSA representations permit us to prove learnability rather than relying on simulations. We also propose dividing machine (and neural) learning and representation into three stages, with differing roles for learning in each stage. For neural modeling, we give representational reasons for nervous systems to have many recurrent connections, as well as for the importance of phrases in language processing. Sizing simulations and analyses suggest that VSAs in general, and MBAT in particular, are ready for real-world applications.
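As a concrete illustration of the abstract's central idea, here is a minimal sketch of binding-by-random-matrix in the MBAT style: term vectors are summed into phrase vectors, and a single random matrix M binds sequence order through repeated multiplication. The dimensionality, vocabulary, scaling, and the exact sequence-encoding rule are illustrative assumptions, not the paper's precise construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000  # vector dimensionality (an illustrative choice)

# Random vectors for individual terms (assumed here; the paper's
# exact vector construction may differ).
vocab = {w: rng.standard_normal(d) for w in
         ["the", "smart", "Brazilian", "girl", "saw", "a", "movie"]}

# One random binding matrix M, elements chosen at random as the
# abstract describes; 1/sqrt(d) scaling keeps vector norms stable.
M = rng.standard_normal((d, d)) / np.sqrt(d)

def encode_phrase(words):
    """Additive bundle: the sum of a phrase's term vectors."""
    return sum(vocab[w] for w in words)

def encode_sequence(phrases):
    """Bind order by repeated matrix multiplication: earlier phrases
    end up multiplied by M more times than later ones."""
    v = np.zeros(d)
    for phrase in phrases:
        v = M @ v + encode_phrase(phrase)
    return v

s1 = encode_sequence([["the", "smart", "Brazilian", "girl"],
                      ["saw"], ["a", "movie"]])
s2 = encode_sequence([["the", "smart", "Brazilian", "girl"],
                      ["saw"], ["the", "movie"]])

# Similar structures map to similar vectors: changing one word
# perturbs the encoding only slightly.
cos = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
print(f"cosine similarity of near-identical sentences: {cos:.3f}")
```

Because random high-dimensional vectors are nearly orthogonal, the two sentences share most of their summed structure and the printed similarity is high, which is the "similar structures must be represented by similar vectors" constraint in action.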
A Practical Approach for Representing Context and for Performing Word Sense Disambiguation Using Neural Networks
Neural Computation (1991) 3 (3): 293–309.
Published: 01 September 1991
Abstract
Representing and manipulating context information is one of the hardest problems in natural language processing. This paper proposes a method for representing some context information so that the correct meaning for a word in a sentence can be selected. The approach is primarily based on work by Waltz and Pollack (1985, 1984), who emphasized neurally plausible systems. By contrast, this paper focuses on computationally feasible methods applicable to full-scale natural language processing systems. There are two key elements: a collection of context vectors, defined for every word used by a natural language processing system, and a context algorithm that computes a dynamic context vector at any position in a body of text. Once the dynamic context vector has been computed, it is easy to choose among competing meanings for a word. This choice of definitions is essentially a neural network computation, and neural network learning algorithms should be able to improve such choices. Although context vectors do not represent all context information, their use should improve those full-scale systems that have avoided context as being too difficult to deal with. Good candidates for full-scale context vector implementations are machine translation systems and Japanese word processors. A main goal of this paper is to encourage such large-scale implementations and tests of context vector approaches. A variety of interesting directions for research in natural language processing and machine learning will be possible once a full set of context vectors has been created. In particular, the development of more powerful context algorithms will be an important topic for future research.
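To make the two key elements concrete, the following sketch pairs hypothetical per-word context vectors with one simple context algorithm (a decayed running sum over the preceding words) and disambiguates a word by cosine similarity against per-sense vectors. Every vector, name, and the decay rule is invented for illustration; the paper's context algorithm and vector construction differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 300  # dimensionality of context vectors (illustrative)

# Hypothetical context vectors; the paper assumes one per word in the
# NLP system's lexicon, built by hand or improved by learning.
context_vec = {w: rng.standard_normal(d) for w in
               ["money", "deposit", "river", "fish", "the", "in", "a"]}

# Competing meanings of an ambiguous word, each with its own context
# vector (here built from related words, purely for illustration).
sense_vecs = {
    "bank(financial)": context_vec["money"] + context_vec["deposit"],
    "bank(river)":     context_vec["river"] + context_vec["fish"],
}

def dynamic_context(words, pos, decay=0.7):
    """One simple context algorithm: a decayed sum of the context
    vectors of the words before position `pos` (an assumption; the
    paper discusses a family of such algorithms)."""
    v = np.zeros(d)
    for w in words[:pos]:
        v = decay * v + context_vec.get(w, 0)
    return v

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def disambiguate(words, pos):
    """Pick the sense whose vector best matches the dynamic context
    vector at this position."""
    ctx = dynamic_context(words, pos)
    return max(sense_vecs, key=lambda s: cosine(sense_vecs[s], ctx))

sentence = ["deposit", "the", "money", "in", "the", "bank"]
print(disambiguate(sentence, sentence.index("bank")))
# expected: bank(financial), since "deposit" and "money" dominate
# the dynamic context vector
```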