Mattias Ohlsson
Neural Networks for Optimization Problems with Inequality Constraints: The Knapsack Problem
Publisher: Journals Gateway
Neural Computation (1993) 5 (2): 331–339.
Published: 01 March 1993
Abstract
A strategy for finding approximate solutions to discrete optimization problems with inequality constraints using mean field neural networks is presented. The constraints x ≤ 0 are encoded by xΘ(x) terms in the energy function. A careful treatment of the mean field approximation for the self-coupling parts of the energy is crucial, and results in an essentially parameter-free algorithm. This methodology is extensively tested on the knapsack problem with sizes of up to 10^3 items. The algorithm scales like NM for problems with N items and M constraints. Comparisons are made with an exact branch and bound algorithm when this is computationally possible (N ≤ 30). The quality of the neural network solutions consistently lies above 95% of the optimal ones at a significantly lower CPU expense. For the larger problem sizes the algorithm is compared with simulated annealing and a modified linear programming approach. For "nonhomogeneous" problems these produce good solutions, whereas for the more difficult "homogeneous" problems the neural approach is a winner with respect to solution quality and/or CPU time consumption. The approach is of course also applicable to other problems of similar structure, like set covering.
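The general scheme described in the abstract can be illustrated with a small sketch: mean-field variables v_i in (0,1) are updated under a slowly decreasing temperature T, with each violated constraint contributing an xΘ(x)-style penalty term. Note this is only a minimal illustration of the mean-field idea, not the paper's method: the penalty weight `alpha`, the cooling schedule, and the greedy repair step are assumptions of this sketch, whereas the paper's careful self-coupling treatment yields an essentially parameter-free algorithm.

```python
import numpy as np

def mean_field_knapsack(c, A, b, T0=10.0, cooling=0.9, T_min=0.05,
                        sweeps=30, alpha=None, seed=0):
    """Approximate 0/1 knapsack: maximize c.x subject to A @ x <= b.

    Illustrative mean-field annealing sketch. Constraint violations
    enter via x*Theta(x)-style penalties; `alpha`, the cooling
    schedule, and the repair step are heuristic choices made here,
    not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    N = len(c)
    if alpha is None:
        alpha = float(np.max(c))            # heuristic penalty scale
    v = 0.5 + 0.01 * (rng.random(N) - 0.5)  # mean-field spins in (0,1)
    T = T0
    while T > T_min:
        for _ in range(sweeps):
            for i in rng.permutation(N):
                slack = A @ v - b           # constraint surpluses
                # gradient of sum_m slack_m * Theta(slack_m) w.r.t. v_i
                grad_pen = A[:, i] @ (slack > 0)
                u = c[i] - alpha * grad_pen  # local field for item i
                v[i] = 1.0 / (1.0 + np.exp(-u / T))  # mean-field update
        T *= cooling                        # lower the temperature
    x = (v > 0.5).astype(int)               # round spins to a 0/1 solution
    # Greedy repair: drop the least valuable selected item until feasible.
    while (A @ x > b).any():
        sel = np.flatnonzero(x)
        x[sel[np.argmin(c[sel])]] = 0
    return x

# Tiny example instance (hypothetical data, one constraint):
c = np.array([6., 5., 4., 3.])
A = np.array([[2., 3., 1., 4.]])
b = np.array([6.])
x = mean_field_knapsack(c, A, b)
```

Each inner sweep performs O(NM) work per item-field evaluation, mirroring the NM scaling stated in the abstract for N items and M constraints.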