Richard W. Prager
Neural Computation (1994) 6 (1): 161–180.
Published: 01 January 1994
Abstract
The often disappointing performance of optimizing neural networks can be partly attributed to the rather ad hoc manner in which problems are mapped onto them for solution. In this paper a rigorous mapping is described for quadratic 0-1 programming problems with linear equality and inequality constraints, this being the most general class of problem such networks can solve. The problem's constraints define a polyhedron P containing all the valid solution points, and the mapping guarantees strict confinement of the network's state vector to P. However, forcing convergence to a 0-1 point within P is shown to be generally intractable, rendering the Hopfield and similar models inapplicable to the vast majority of problems. A modification of the tabu learning technique is presented as a more coherent approach to general problem solving with neural networks. When tested on a collection of knapsack problems, the modified dynamics produced some very encouraging results.
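To make the problem class concrete, the sketch below shows how a knapsack instance, one of the test problems named in the abstract, can be posed as a quadratic 0-1 program by folding the linear capacity constraint into a penalty term of the energy function, which is the standard style of mapping used for optimizing neural networks. The instance data and the penalty weight are illustrative assumptions, not taken from the paper; the minimizer here is found by brute force rather than by network dynamics.

```python
from itertools import product

# Toy knapsack instance (hypothetical data, for illustration only):
# maximize p.x subject to w.x <= c, with x in {0,1}^n.
p = [6, 5, 4]    # profits
w = [3, 2, 2]    # weights
c = 4            # capacity

def energy(x, penalty=100.0):
    """Quadratic 0-1 energy: negated profit plus a quadratic penalty
    on capacity violation. Minimizing this over {0,1}^n recovers the
    knapsack optimum when the penalty weight is large enough."""
    profit = sum(pi * xi for pi, xi in zip(p, x))
    overload = max(0, sum(wi * xi for wi, xi in zip(w, x)) - c)
    return -profit + penalty * overload ** 2

# Brute-force search over the 2^n vertices of the unit hypercube
# (the 0-1 points of the polyhedron P defined by the constraints).
best = min(product((0, 1), repeat=len(p)), key=energy)
```

For this instance the minimizer is `(0, 1, 1)`: items 2 and 3 fill the capacity exactly and yield profit 9, beating any feasible single-item choice. Exhaustive search is exponential, which is precisely why heuristic dynamics such as the tabu-learning modification discussed in the abstract are of interest.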