Eric Mjolsness
Journal Articles
Neural Computation (1999) 11 (6): 1455–1474.
Published: 15 August 1999
Abstract
The softassign quadratic assignment algorithm is a discrete-time, continuous-state, synchronous-updating optimizing neural network. While its effectiveness has been demonstrated on the traveling salesman problem, graph matching, and graph partitioning in thousands of simulations, its convergence properties have not been studied. Here, we construct discrete-time Lyapunov functions for the cases of exact and approximate doubly stochastic constraint satisfaction, which show convergence to a fixed point. The combination of good convergence properties and experimental success makes the softassign algorithm an excellent choice for neural quadratic assignment optimization.
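For readers unfamiliar with the algorithm, a minimal sketch of the softassign iteration follows, assuming a QAP benefit tensor B[i,a,j,b]; the parameter names (beta0, beta_rate, and so on) are illustrative choices, not the paper's notation. Each step takes a softmax of the gradient of the quadratic objective and then applies Sinkhorn row/column balancing to (approximately) satisfy the doubly stochastic constraint, with the inverse temperature annealed over time.

```python
# Minimal softassign sketch for quadratic assignment (illustrative only).
import numpy as np

def sinkhorn(M, n_iters=50, tol=1e-6):
    """Alternate row and column normalization until M is
    approximately doubly stochastic (Sinkhorn balancing)."""
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
        if np.abs(M.sum(axis=1) - 1.0).max() < tol:
            break
    return M

def softassign_qap(B, beta0=1.0, beta_rate=1.05, beta_max=50.0, n_relax=10):
    """Deterministic-annealing softassign loop for a benefit tensor
    B[i,a,j,b]: maximize sum_{iajb} B[i,a,j,b] * M[i,a] * M[j,b]."""
    n = B.shape[0]
    M = np.ones((n, n)) / n            # start at the barycenter
    beta = beta0
    while beta < beta_max:
        for _ in range(n_relax):       # relaxation at fixed temperature
            Q = np.einsum('iajb,jb->ia', B, M)     # gradient wrt M
            M = sinkhorn(np.exp(beta * (Q - Q.max())))  # softmax + balance
        beta *= beta_rate              # anneal the inverse temperature
    return M

# Example: a small QAP with flow matrix F and distance matrix D; the
# benefit tensor below is a standard construction, not from the paper.
rng = np.random.default_rng(0)
F, D = rng.random((4, 4)), rng.random((4, 4))
B = -np.einsum('ij,ab->iajb', F, D)
M = softassign_qap(B)   # near-doubly-stochastic, near-permutation matrix
```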
Neural Computation (1996) 8 (5): 1041–1060.
Published: 01 July 1996
Abstract
We present a novel optimizing network architecture with applications in vision, learning, pattern recognition, and combinatorial optimization. This architecture is constructed by combining the following techniques: (1) deterministic annealing, (2) self-amplification, (3) algebraic transformations, (4) clocked objectives, and (5) softassign. Deterministic annealing in conjunction with self-amplification avoids poor local minima and ensures that a vertex of the hypercube is reached. Algebraic transformations and clocked objectives help partition the relaxation into distinct phases. The problems considered have doubly stochastic matrix constraints or minor variations thereof. We introduce a new technique, softassign, which is used to satisfy this constraint. Experimental results on different problems are presented and discussed.
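As a rough illustration of how deterministic annealing (1), self-amplification (2), and softassign (5) fit together, a generic free-energy objective for an assignment benefit matrix A might take the following form; this is a reconstruction under standard conventions, not the paper's exact objective:

$$
E_\beta(M) = -\sum_{ia} A_{ia} M_{ia} \;-\; \frac{\gamma}{2}\sum_{ia} M_{ia}^2 \;+\; \frac{1}{\beta}\sum_{ia} M_{ia}\log M_{ia} \;+\; \sum_i \mu_i\Big(\sum_a M_{ia}-1\Big) \;+\; \sum_a \nu_a\Big(\sum_i M_{ia}-1\Big).
$$

Here the entropy barrier weighted by 1/β implements deterministic annealing (β is slowly increased), the negative γ/2 quadratic term is the self-amplification that drives M toward a vertex of the hypercube, and the Lagrange multipliers μ and ν enforce the doubly stochastic constraints, which softassign satisfies by Sinkhorn balancing.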
Neural Computation (1996) 8 (4): 787–804.
Published: 01 May 1996
Abstract
Prior knowledge constraints are imposed upon a learning problem in the form of distance measures. Prototypical 2D point sets and graphs are learned by clustering with point-matching and graph-matching distance measures. The point-matching distance measure is approximately invariant under affine transformations—translation, rotation, scale, and shear—and permutations. It operates between noisy images with missing and spurious points. The graph-matching distance measure operates on weighted graphs and is invariant under permutations. Learning is formulated as an optimization problem. Large objectives so formulated (on the order of a million variables) are efficiently minimized using a combination of optimization techniques—softassign, algebraic transformations, clocked objectives, and deterministic annealing.
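A minimal sketch of a permutation-invariant graph-matching distance of this kind is given below, assuming equal-size weighted graphs with adjacency matrices G and g; the paper additionally handles missing and spurious nodes (for example via slack variables) and anneals the temperature, both omitted here for brevity.

```python
# Sketch of a permutation-invariant weighted-graph distance via softassign.
import numpy as np

def graph_match_distance(G, g, beta=5.0, n_outer=30, n_sinkhorn=20):
    """Approximate min_P ||G - P g P^T||^2 over permutations P by
    relaxing P to a doubly stochastic match matrix M."""
    n = G.shape[0]
    M = np.ones((n, n)) / n
    for _ in range(n_outer):
        # Gradient of the compatibility term tr(G^T M g M^T) wrt M
        Q = G @ M @ g.T + G.T @ M @ g
        M = np.exp(beta * (Q - Q.max()))       # softmax step
        for _ in range(n_sinkhorn):            # Sinkhorn balancing
            M = M / M.sum(axis=1, keepdims=True)
            M = M / M.sum(axis=0, keepdims=True)
    return float(np.linalg.norm(G - M @ g @ M.T) ** 2)
```

A clustering loop would then alternate between assigning each graph to the nearest prototype under this distance and re-estimating the prototypes, in the spirit of the learning formulation described in the abstract.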
Neural Computation (1989) 1 (2): 218–229.
Published: 01 June 1989
Abstract
We introduce an optimization approach for solving problems in computer vision that involve multiple levels of abstraction. Our objective functions include compositional and specialization hierarchies. We cast vision problems as inexact graph matching problems, formulate graph matching in terms of constrained optimization, and use analog neural networks to perform the optimization. The method is applicable to perceptual grouping and model matching. Preliminary experimental results are shown.
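A generic inexact-graph-matching energy of the kind such analog networks can relax is sketched below; the paper's hierarchical objectives are richer, so this is only an assumed baseline form, with G and g the adjacency matrices of the model and data graphs and M the matrix of match neurons:

$$
E(M) \;=\; -\,\alpha \sum_{i,j,a,b} G_{ij}\, g_{ab}\, M_{ia} M_{jb} \;+\; \frac{\lambda}{2} \sum_i \Big(\sum_a M_{ia} - 1\Big)^2 \;+\; \frac{\lambda}{2} \sum_a \Big(\sum_i M_{ia} - 1\Big)^2.
$$

Gradient descent on E by the analog network rewards matches that preserve graph structure (the first term) while the penalty terms softly enforce the row and column constraints, so M settles toward a near-permutation matrix encoding the match.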